Mission-critical systems are not software like any other. Most software is allowed to fail. Some even incorporate failure or inaccuracy as a normal mode of operation, such as the overbooking systems of transport companies: for years, TGV passengers have put up with the vagaries of overbooking without it troubling them much. It is common for an online retail or music site to be unavailable for a few seconds or even minutes, and nobody minds. If a payroll application makes its transfers late, who will mourn, apart from the unfortunate employees?

The situation is quite different for mission-critical systems. They cannot be unavailable, or even provide a degraded service. Any failure harms the company that provides the service, its customers, and the operator's supplier; it causes damage in terms of image, business, market share, or sometimes much more. For mission-critical systems you must always think in terms of the worst-case scenario, not the best case. This is why high availability is at the heart of the design and implementation of these systems. Achieving it requires not only solid skills from software providers but also a culture shared with operators, whether in banking or elsewhere.

Rather than dealing with countless individual cases, it seems useful to ask on what principles a high-availability system should be based, what underlies those principles, and why they are effective. Such a list makes it possible to discriminate simply between architectures that can provide high availability and those that must be eliminated, and to reject quickly the baroque or mannerist architectural creations of neo-experts. I see seven principles, and I believe that with these simple principles we can work seriously.

1. Eliminate every SPOF. A SPOF, a Single Point Of Failure, is a single piece of software or hardware in the architecture whose failure will bring down the entire system.
Under Murphy's law, every SPOF will eventually fail! The mission-critical system vendor must therefore ruthlessly eliminate all possible SPOFs. There is no need to look far: it is enough to start by eliminating single databases. Even managed by a large Cloud specialist, even opaquely distributed across multiple machines, even guaranteed by a cryptic service-level agreement, a single database is a SPOF and, bluntly, a nest of trouble.

2. Symmetry. An object is said to be symmetrical if it can be superimposed on itself by a transformation of space other than the identity. Behind every symmetry lies a property of invariance. In our field, it is a matter of distributing service requests over subsystems that can substitute for one another, the idea being that the service will be provided in the same way regardless of which subsystem carried it out. If one disappears, the quality of service is unaffected. Technically speaking, the N subsystems need the same information to make a decision, which means they must inform each other of what they have done. Symmetry can be mirror (normal/dual, active/active), trial (normal/dual/trial, active/active/active), quadral, and so on. In theory there is no limit, except common sense. An active/passive system is not a symmetrical system: there will always be a moment when the passive becomes active, and this transition will not be without "friction" (friction often being an untested procedure or a particular individual), which will endanger the quality of service. And here again, let us remember good old Murphy and his law. Of course symmetry, the substitutability of any subsystem for another, is also much simpler than non-symmetry. This choice of simplicity rests on another general principle serving the same interest: KISS, as in Keep It Simple, Stupid. Contrary to what some may believe, active/active is much simpler than active/passive.
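The cross-notification behind an active/active pair can be sketched in a few lines. This is an illustrative toy only, not a Lusis or TANGO API: every name below is hypothetical, and real systems must also handle message ordering, retries, and node failure, which are omitted here.

```python
class Node:
    """A toy active/active node: it serves requests locally and
    cross-notifies its peers so that every node holds the same state."""

    def __init__(self, name):
        self.name = name
        self.state = {}   # replicated application state
        self.peers = []   # symmetric, substitutable peers

    def link(self, *peers):
        self.peers = list(peers)

    def handle(self, key, value):
        # Serve the request locally, then tell the peers what was done,
        # so any of them could serve the next request on this key.
        self.state[key] = value
        for peer in self.peers:
            peer.notify(key, value)

    def notify(self, key, value):
        # Apply a peer's update without re-broadcasting it.
        self.state[key] = value


# Traffic can land on either node; if one disappears, the other
# already knows everything it needs to carry on alone.
a, b = Node("A"), Node("B")
a.link(b)
b.link(a)
a.handle("acct-42", 100)   # request routed to A
b.handle("acct-7", 55)     # request routed to B
assert a.state == b.state  # symmetric: the nodes are substitutable
```

The point of the sketch is the symmetry: neither node is "primary", so there is no friction-laden failover transition of the active/passive kind.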
3. Invariance of operation over time. This is the fact that the system will always work in the same way, in the same mode, like a perpetual-calendar watch. There will not be 30 days of operation in mode A, then two hours in mode B, and back again. For the same reason as before, going from mode A to mode B and back will not be without friction, which will inevitably lead to quality-of-service problems. Invariance of functioning over time is obviously a type of symmetry (translational symmetry in time).

4. Performance predictability. This is the ability to predict the performance of a system with a margin of error small enough not to lead to adverse consequences. It is essential for supervising systems, for detecting damage, and for anticipating the actions to be taken before failures occur. Here again, it is a problem of invariance and therefore of symmetry. With a multi-AZ architecture (where services are spread over several different geographical areas), the path of a transaction will not necessarily be the same from one moment to the next, fast at one time and slow at another, the speed of light being non-negotiable. For the same reasons, shared resources must be avoided, even for things as simple as a local network connection. Mission-critical systems cannot share resources, otherwise response time is not predictable. Losing the predictability of performance means losing the basis of your SLA and an essential means of detecting potential failures.

5. Operation must be as simple as possible. A nuclear submarine carries a miniaturized nuclear power plant that provides the electricity that runs its engines and allows it to produce its oxygen. It is a very complex technology, developed by remarkable engineers. Yet it is operated by normal people, by well-thought-out computer systems, and by procedure books.
If high-level engineers had to be taken on board to operate nuclear submarines, the submarines would all have been at the bottom of the sea long ago: no engineer of great talent will agree to spend six months underwater in a confined space, or at least there will never be enough candidates. Mission-critical systems are the same: they must be designed and set up by people of a certain talent, but operated by normal people, without needing to call on the former, who, according to Murphy's law, will never be there when they are needed. Implementing this principle is far from straightforward, but it is an essential guide. The Build phase should produce an artifact that makes the Run phase simple.

6. The fewest different software or technology providers. Behind the management of a mission-critical system, there is a management of responsibility. If the system is produced by a chain of different technologies, there will be not only the problem of coordination, and of a chain only as strong as its weakest link, but also the problem of responsibility. This principle is less strong than the previous ones; it is a pragmatic rule aimed at eliminating unnecessary links and keeping the technical architecture and the structure of responsibilities as simple as possible. It is also an application of the KISS principle explained above. It is by virtue of this principle that we prefer the cross-notification of information to the synchronization of the databases themselves (and no, it is not the same thing), which moreover makes it easy to build multipolar systems (dual, trial, "quadral", etc.).

7. Security. Security and securitization are now also high-availability topics, as the number of attacks by hackers keeps increasing. Although the main objective of these attacks is usually not service availability, they are nevertheless a threat to it.
Therefore, all connections and accesses must be properly secured by certificates, two-factor authentication, and so on, bearing in mind that if a security check can protect a system against attack, it can also be a threat to the system's availability. Changing a certificate, for instance, must be properly planned and coordinated, otherwise it will cause an outage.

As can be seen, however independent they are, these seven principles speak to and reinforce each other. Analyzing an architecture through them saves time and, above all, avoids disappointments. Now, after the "on-demand" fashion and its underlying religious conviction that developers no longer needed to worry about the performance of their software, and after the micro-service and the belief in the harmlessness of virtualization, which ignored the fact that every micro-service is still executed somewhere and that the speed of light remains constant, it is the Cloud idol that combines the two with an irrational overconfidence in infrastructure and brings a quantity of nonsense rarely equaled. In these turbulent waters, having a few principles validated by long and successful experience is not useless.

Pioneering Security with Exclusive Fraud Prevention Technology on the HPE NonStop Platform
1/16/2024
In the dynamic landscape of digital transactions, the imperative to secure financial data has reached unprecedented heights. Lusis Payments, a frontrunner in payment software solutions, stands out as a pioneer in fraud prevention, particularly on the HPE NonStop platform. This article explores the details of fraud prevention on HPE NonStop, highlighting the innovative features offered exclusively by Lusis Payments and shedding light on the unique benefits of operating on the HPE NonStop infrastructure.

Fraud Prevention on the HPE NonStop Platform:
The HPE NonStop platform, renowned for its reliability, scalability, and fault tolerance, has become the platform of choice for mission-critical applications in the financial sector. Lusis Payments has strategically harnessed the strengths of this platform to develop fraud prevention solutions that integrate seamlessly with its unique architecture.
Benefits of Being on the HPE NonStop Platform:
Exclusivity with Lusis Payments:
Lusis Payments proudly stands as the exclusive provider of payments fraud prevention technology on the HPE NonStop platform. With a singular focus on delivering top-notch security solutions tailored for this unique environment, Lusis Payments has solidified its position as the go-to partner for financial institutions seeking unparalleled protection against fraud.

Collaboration with Lusis AI:
In a testament to its commitment to innovation, Lusis Payments collaborates closely with Lusis AI, its dedicated artificial intelligence division. This collaboration is instrumental in enhancing the efficacy of fraud prevention solutions. Lusis AI's expertise in developing intelligent algorithms and predictive models contributes significantly to strengthening Lusis Payments' ability to stay ahead of evolving fraud landscapes. Additionally, Lusis Payments employs the BackTest Engine (BTE), also known as the sandbox for testing. This tool ensures rigorous testing of rules before they are moved into a repository for production, in a fully integrated approach with TANGO for acquiring and issuing.

Conclusion:
As financial transactions continue to evolve, Lusis Payments and the HPE NonStop platform stand as steadfast guardians, ensuring that the future of digital payments remains secure and resilient. The combined forces of Lusis Payments' cutting-edge fraud prevention software, tailored exclusively for the HPE NonStop environment, and the inherent benefits of operating on this platform create a formidable defense against the ever-present threat of fraud. The seamless integration, fault-tolerant architecture, and scalability of both contribute to a secure and efficient environment for financial institutions and businesses.
In this collaborative pursuit of security and innovation, Lusis Payments, as the exclusive provider of fraud prevention technology on HPE NonStop, redefines the landscape of digital transactions, setting new standards for the intersection of technology, security, and financial integrity.

When considering the world of data management and databases, one may not immediately associate it with excitement. However, the realm of HPE NonStop SQL/MX offers a unique perspective that can pique the interest of buyers and technology enthusiasts alike. This article delves into the importance of NonStop SQL/MX and its relevance in today's rapidly evolving computing landscape.

The Symbiotic Relationship Between Software and Hardware
In the realm of computing, the relationship between software and hardware has always been of great significance. This synergy can profoundly impact the performance, reliability, and scalability of applications. NonStop SQL/MX embodies this synergy by providing a database solution optimized to work seamlessly with HPE's NonStop hardware platform. Just as a finely crafted beer requires the right container to preserve its quality, data deserves a database technology that ensures its integrity, availability, and performance.

A Journey Through Computing History
To appreciate the evolution of NonStop SQL/MX, it is worth taking a brief journey through computing history. From the early days of personal computers, where enthusiasts used Apple IIe computers for various tasks, to more complex endeavors involving minicomputers like the HP 3000, our experiences reflect the ever-changing landscape of computing. Fast forward to the early '80s, when we encountered the NonStop platform and Enscribe for the first time. At that time, "Tandem" or "NonStop" primarily referred to the hardware platform. Today, it has evolved into a software solution that adapts to the cloud-centric computing world.
The Cloud-Centric World and Abstracted Applications
In today's cloud-centric environment, applications are abstracted from the underlying hardware and operating systems. Tools like Docker and Kubernetes enable elastic scaling, making it easy to provision computing resources on demand. The complexities of the infrastructure are hidden behind user-friendly interfaces, and users focus on the benefits of the applications. If we need more computing resources, we simply load up the Azure portal and configure a virtual machine with the exact specifications we need, just by clicking checkboxes.
The Future of NonStop: A Checkbox on the Horizon?
In this modern landscape, one might wonder whether we will ever see a "NonStop" checkbox option in cloud platforms. While we don't have a crystal ball, the potential benefits of making NonStop fundamentals more accessible are clear. Businesses could leverage this technology for applications requiring high availability, scalability, and fault tolerance.

The True Challenge: Data Management
However, the selection of technical infrastructure is only part of the equation. The true challenge lies in data management. The advent of data sovereignty, GDPR, and cyber terrorism are just some of the latest challenges impacting data management policies and governance. Speaking frankly, the mountain of regulation surrounding data management is extremely intimidating. With hindsight, the checkbox users may really like to see on an Azure portal is "Access to helpful data management expert?".

HPE GreenLake: A Differential Benefit
It becomes evident that HPE GreenLake offers a differential benefit compared to commodity cloud service providers. While giants like Amazon, Microsoft, and Google provide powerful cloud solutions, HPE's partnership with industry vertical leaders, such as Lusis Payments for Retail Payments, sets it apart. This partnership extends HPE's multi-decade excellence as the custodian of the NonStop spirit into the realm of cloud computing. Click.

In Closing
The journey through computing history underscores the importance of databases like NonStop SQL/MX in modern computing. While the checkbox for NonStop on a cloud portal may be close, it is not here today, and the value of data management expertise and strategic partnerships cannot be overstated. HPE's legacy in the world of NonStop computing and its innovative GreenLake offering hold the promise of exciting years ahead in the ever-evolving world of technology.

2021 has been another tough year for Covid-19 related challenges.
Our thoughts remain with those who have been impacted by the virus, especially those who are sick; we extend our heartfelt wishes for a full recovery.
We would also like to take this opportunity to thank all of our clients, partners and teammates for another extraordinary year. The dramatic surge in demand for electronic payments has continued from 2020, and the Lusis Payments delivery team has been hard at work helping customers grow their systems and businesses. Lusis completed a record number of projects in 2021 including:
This summer, HPE selected TANGO as their preferred GreenLake financial payments solution. You can watch the HPE Discover interview with Lusis Payments CEO, Philippe Preval, and Keith White, GM of HPE GreenLake cloud solutions, here. "HPE is delighted to partner with Lusis Payments and TANGO as the premier Retail Payments solution for future generations." - Keith White. Also this year, Lusis and HPE successfully demonstrated TANGO running at 4,500 TPS on an HPE NonStop server at the ATC, proving that TANGO is still the most performant and cost-effective payments solution for HPE NonStop. To read more details about the test, see the blog here. Our product investments have continued in earnest, ensuring that TANGO remains at the forefront of FinTech payments. Some highlights include:
We are looking forward to the exciting adventures we will share in 2022 and would like to wish all of our customers, partners and staff a Happy and Prosperous New Year!

In 2012, Lusis Payments conducted a historic proof of concept with partner HPE at the HP ATC (Advanced Technical Center) in Palo Alto, CA. TANGO was tested for 48 hours straight at full capacity, and the system processed 2,500 TPS without fail. The hardware configuration used for the benchmark was chosen to match a client's production system and consisted of an 8-processor HPE J-series NonStop. TANGO proved responsive and handled more than the normal daily tasks and nightly settlements. This proof of concept showed that TANGO was fault tolerant and could carry a total daily volume of 50 million transactions per day.
The outstanding results came from long, hard-working sessions with the HPE teams, which we were proud to work with. The first week included our CTO working on site. Soon after, he was joined by our senior project leaders, and they received significant additional support from our lab in Paris. In addition to the dedicated HPE team, the client's team also partnered with us to verify the conformity of the benchmark protocol, and HPE worked with the client to reproduce its environment for a true simulation. It was a great project, and we were proud of the outcome.

Since then, HPE has continued to suggest that we test TANGO on the newest (Intel-based chip) hardware. As we were still quite pleased with the 2,500 TPS results, and the client continued to see improved performance on their HPE NonStop platform, we chose not to run additional test campaigns in subsequent years. Until now. At the end of Q1 we said, "OK, let's do it!" At that time, bandwidth was quite low, so we made it our "dry way of doing it." We used our vanilla switch based on the TANGO version 7 platform, installed on an HPE NonStop server: an 8-processor, 6-core NS7-X3 system, again at the HP ATC labs. This system runs OS release L21.06.17.2 with NonStop SQL/MX 3.7.2, and each NonStop processor contains 256 GB of memory. We used a very similar testing protocol without any specific tuning. And "Torpedo... LOS!" On our first run we achieved 3,500 TPS. Then, with less than 10 hours of tuning, we easily reached 4,500 TPS sustained for two straight hours.

So this has become our new reference on HPE NonStop: 4,500 transactions per second on an 8-CPU machine, achieved with our vanilla switch and some very light tuning. Nothing heroic, just the standard product in a standard configuration. In Q3, we will test TANGO with the new POSIX kernel of HPE NonStop and see where we take it! Stay tuned.

Philippe Préval, CEO, Lusis
Lusis News: the latest company and industry news from Lusis Payments.