Worth a read. Dev/Prod Parity is one principle that I constantly see firms break, with little understanding of the impact of their decisions.
I read the story of the Black Team in some book a long time ago. My view is that engineers should fear the testing team, which in turn places further emphasis on the importance of BDD/TDD and the like in the development cycle. All too often the QA/testing team in an organisation is disconnected from the engineering team, with the net effect that engineering becomes sloppy about what it throws over the “wall”.
Interesting read over on Traders Magazine on how Morgan Stanley has upgraded its equities infrastructure – Morgan Stanley Cuts Microseconds from Trading Systems. I’m curious what, specifically, is meant by “real-time learning algorithms” within their new smart router.
Also, who were the “enterprise infrastructure specialists”? I’m also guessing Solarflare, co-location, possibly Exegy market data, and RoCE at a minimum.
Continuing on from the previous posting: if one annotated the PlantUML flows with maximum latency data, one could then consider leveraging Application Tap for Solarflare from a Cucumber test perspective to capture the hop latency, and validate the flows via the data capture database. Has anyone tried such a thing?
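Purely as a sketch of the idea (the annotation convention, the OrderStore participant, and the latency budgets below are all invented for illustration), the maximum allowed latency for each hop could ride in the PlantUML message label:

@startuml
actor BrokerA
boundary MatchingEngine
boundary OrderStore
' Hypothetical convention: the latency budget for each hop rides in the label
BrokerA --> MatchingEngine: LimitOrder [max 50us]
MatchingEngine --> OrderStore: OrderAccepted [max 20us]
@enduml

A Cucumber step could then parse those budgets out of the labels and assert that the per-hop timestamps pulled from the capture database never exceed them.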
Every now and again I come across a team that is at sea with agile. These teams have decided to throw out the concept of business requirements, and anything else deemed old world, and have moved to an agile backlog that fails to capture the business requirements in any shape or form. The backlog is effectively orthogonal to any sensible, good-practice Scrum backlog. Business Analysis Times captures this quite nicely with the following quote:
Story is the smallest valuable business requirement that follows the INVEST attributes
Business requirements, use cases, sequence diagrams, UML, etc. may all be old world, but at the end of the day teams need to understand that the basic principles of software engineering still stand, and that the past 20+ years have generated some very valid principles that should still be used today.
Writing distributed applications is complex. Testing distributed applications is just as complex. Throw in a distributed application that spans a LAN/WAN, with the latency implications that brings, and the software engineering/testing complexities become quite painful to model.
Cucumber, in my view, is nice from the perspective that it allows the tests to be written in plain text, in a business DSL. Some time ago, whilst writing a distributed application and trying to code the Cucumber tests, I realised that I could benefit from the sequence diagrams (PlantUML) that I had drawn to visualise the message flow between the various interested parties (nodes) in my Proof of Concept (PoC).
My idea was to get Cucumber to include the sequence diagram as part of the “Then” clause to aid in validating the messages that should have flowed between nodes. This simple and almost obvious idea leads to the following Cucumber scenario (leveraging Eugene’s sample):
Scenario: Add two limit orders to the SELL order book, with more aggressive order first
  When the following orders are added to the "Sell" book:
    | Broker | Qty | Price |
    | A      | 100 | 10.6  |
    | B      | 100 | 10.7  |
  Then the "Sell" order book looks like:
    | Broker | Qty | Price |
    | A      | 100 | 10.6  |
    | B      | 100 | 10.7  |
  And the Message flow looks like:
    """
    @startuml
    actor BrokerA
    actor BrokerB
    boundary MatchingEngine
    BrokerA --> MatchingEngine: LimitOrder
    BrokerB --> MatchingEngine: LimitOrder
    @enduml
    """
As you can see, the Cucumber scenario effectively allows me to validate both the completion state and the messages passed to create that state. For distributed applications built on Finite-State Machines (FSMs) and deployed globally, this approach gives a certain level of confidence based not only on the state at each node, but also on the messages passed over the WAN/LAN to construct that state.
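For illustration, here is a minimal sketch (in Java, using cucumber-jvm) of how such a “Message flow” step could be implemented. The MessageCapture recorder is hypothetical (something you would wire into each node’s transport to log messages during the test run), and the regex only understands the simple “-->” arrow form used above:

package poc.steps;

import io.cucumber.java.en.Then;

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import static org.junit.Assert.assertEquals;

public class MessageFlowSteps {

    // Matches simple PlantUML message lines,
    // e.g. "BrokerA --> MatchingEngine: LimitOrder"
    private static final Pattern ARROW =
            Pattern.compile("^(\\w+)\\s*-+>\\s*(\\w+)\\s*:\\s*(\\w+)$");

    @Then("the Message flow looks like:")
    public void theMessageFlowLooksLike(String plantUml) {
        // Extract the expected sender/receiver/message triples from the diagram
        List<String> expected = new ArrayList<>();
        for (String line : plantUml.split("\\R")) {
            Matcher m = ARROW.matcher(line.trim());
            if (m.matches()) {
                expected.add(m.group(1) + "->" + m.group(2) + ":" + m.group(3));
            }
        }
        // MessageCapture is hypothetical: a recorder wired into the node
        // transports that logs each message as "sender->receiver:message"
        List<String> actual = MessageCapture.recordedFlows();
        assertEquals(expected, actual);
    }
}

Normalising both the expected and captured flows to a simple "sender->receiver:message" string keeps the comparison order-sensitive, which matters when the message sequence, and not just the set of messages, is part of the contract.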