TABB has an interesting article on MIFID II – “MIFID II: It’s Coming. Will You Be Ready?”. I would agree with the article that data fragmentation is going to be an issue. I suspect many companies will need to undertake some effort to resolve historical data linkage issues. KYC will probably be one of the areas where companies need to resolve data issues, based on a RegTechFS article – think data in a CRM system (e.g. Salesforce) coupled with various other internal databases, all of which need to be linked together to offer an effective audit trail of customer interactions.
A few ongoing themes around the square mile:
- Capital, and its management – impacted by Basel III
- Lack of credit in the corporate bond market, which leads to the Algomi platform – and possibly to more internal big data solutions mining a bank’s own data lake of client data (from numerous sources, including sales people out in the field) to match buyers and sellers.
- “Unit testing is not enough. Test the interactions between your subsystems.” Could not agree with this more. Numerous projects obsess over unit test coverage and forget boundary testing between subsystems. Likewise, how many systems fail to test the expected number of payloads sent over the wire?
- “Automate Deployments And Config Management – With Extreme Care” There is no free lunch!
- “We roll out deployments facility by facility, from lowest traffic to highest. Within a facility, we roll out server by server, and even core by core, running a comprehensive functional testing suite at every step of the way.” Big bang deployments have implications – NSONE’s incremental rollout sounds like a sensible alternative.
- “Simulate the bad things before they happen. Netflix’s Chaos Monkey is one well-known example” – how many times have you heard IT propose no concept of failover/DR because they believe there isn’t budget, only to be burnt some time later?
- “Lock your systems down and minimize the attack surface exposed to the internet.” – security 101 in my view. Likewise, “Each role in your architecture should expose the services it provides only to the set of systems that need to access those services”
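The boundary-testing point above is worth a sketch. Below is a minimal, illustrative example of testing the interaction between two hypothetical subsystems – an `OrderGateway` that batches orders and the transport it writes to – asserting on the number of payloads actually sent over the wire, rather than unit-testing each class in isolation. `OrderGateway` and `RecordingTransport` are made-up names for illustration, not from any real system.

```python
class RecordingTransport:
    """Test double that records every payload 'sent over the wire'."""
    def __init__(self):
        self.payloads = []

    def send(self, payload: bytes) -> None:
        self.payloads.append(payload)


class OrderGateway:
    """Buffers orders and flushes them as one payload per full batch."""
    def __init__(self, transport, batch_size: int = 2):
        self.transport = transport
        self.batch_size = batch_size
        self._buffer = []

    def submit(self, order: str) -> None:
        self._buffer.append(order)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self._buffer:
            self.transport.send("|".join(self._buffer).encode())
            self._buffer = []


transport = RecordingTransport()
gateway = OrderGateway(transport, batch_size=2)
for order in ["buy:100", "sell:50", "buy:25"]:
    gateway.submit(order)
gateway.flush()

# Three orders with a batch size of 2 should cross the wire as exactly
# two payloads -- the kind of boundary assertion unit tests often miss.
assert len(transport.payloads) == 2
assert transport.payloads[0] == b"buy:100|sell:50"
```

Both classes would pass their unit tests individually; only the interaction test catches a batching bug that doubles (or drops) wire traffic.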
Lykke appears to be taking a different blockchain path to startup Digital Asset Holdings – Lykke is going for colored coins, and the public blockchain that exists today. Lykke has a nice white paper on its reasoning for colored coins, and presents a high level architecture highlighting the exchange usage:
Traders create an order by creating and signing a transaction to send x coins to the exchange, whereas x is the amount and type of coins they intend to sell. Unlike usual transactions, this transaction is not sent to the Bitcoin network, but to the exchange instead, along with additional information about the order (type, asset to buy, limit, etc.). As soon as the exchange receives a matching order containing a second transaction, the exchange creates a third transaction that sends the exchanged amounts to the two traders. These three transactions together form a trade and are sent to the Bitcoin network for execution. The third transaction is also sent to the two traders, so they can immediately reuse the proceeds for subsequent transactions. Unfilled or cancelled orders are simply discarded.
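The three-transaction trade described above can be sketched as follows. This is a toy model, not Lykke’s implementation: transactions are plain dicts, signing is omitted, and real colored-coin transactions on Bitcoin are considerably more involved.

```python
def make_order(trader, sell_asset, sell_amount, buy_asset):
    """Trader creates a tx sending coins to the exchange; it is sent to
    the exchange (not the Bitcoin network) along with the order details."""
    return {
        "tx": {"from": trader, "to": "exchange",
               "asset": sell_asset, "amount": sell_amount},
        "buy_asset": buy_asset,
    }

def match(order_a, order_b):
    """Exchange pairs two matching orders and creates the third tx that
    sends the exchanged amounts back to the two traders."""
    tx_a, tx_b = order_a["tx"], order_b["tx"]
    assert order_a["buy_asset"] == tx_b["asset"]
    assert order_b["buy_asset"] == tx_a["asset"]
    settlement = [
        {"from": "exchange", "to": tx_a["from"],
         "asset": tx_b["asset"], "amount": tx_b["amount"]},
        {"from": "exchange", "to": tx_b["from"],
         "asset": tx_a["asset"], "amount": tx_a["amount"]},
    ]
    # All three transactions together form the trade and would be
    # broadcast to the Bitcoin network for execution.
    return [tx_a, tx_b, settlement]

alice = make_order("alice", "USD-colored", 100, "EUR-colored")
bob = make_order("bob", "EUR-colored", 90, "USD-colored")
trade = match(alice, bob)
assert len(trade) == 3  # two order txs plus the settlement tx
```

Unfilled or cancelled orders never reach the network, so they can simply be discarded – the exchange just drops the first transaction.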
In some ways the exchange is similar to the Ripple Gateways.
Don’t optimize your code, optimize your architecture.
If you’re pushing latency sensitive critical command and control messages to one or more facilities, you’ll probably want to look at robust message queueing systems.
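A minimal sketch of why fire-and-forget delivery isn’t enough for critical control messages: each message is retried until acknowledged (at-least-once semantics). `FlakyFacility` is a made-up stand-in for a remote endpoint that drops some messages; a real system would use a durable broker rather than this in-process loop.

```python
import random

class FlakyFacility:
    """Simulates a facility endpoint that drops roughly half its messages."""
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.received = []

    def deliver(self, msg) -> bool:
        if self.rng.random() < 0.5:   # simulated network drop
            return False              # no ack
        self.received.append(msg)
        return True                   # ack

def send_reliably(facility, msg, max_attempts=10):
    """Retry until the facility acknowledges, up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        if facility.deliver(msg):
            return attempt
    raise RuntimeError("delivery failed after %d attempts" % max_attempts)

facility = FlakyFacility()
for command in ["drain", "deploy", "undrain"]:
    send_reliably(facility, command)

# Despite the drops, every command eventually lands, in order.
assert facility.received == ["drain", "deploy", "undrain"]
```

Production queueing systems add durability, ordering guarantees and deduplication on top of this basic retry-until-ack idea.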
test the interactions between your subsystems
If you’re interested in smart contracts, it’s probably worth having a read of the Greeter tutorial over at Ethereum. Likewise, Writing Contracts on their wiki. There’s also the online Solidity realtime compiler and runtime, which is useful. Interesting times.