This service covers mainframe migration strategy: the target architecture to adopt, the tools to use, and the phased approach to follow. It is offered prior to the actual migration.
This service involves an analysis of the existing mainframe estate: the mainframe version, workload profile, volume of COBOL and PL/I code, JCL jobs, database dependencies, and application integrations. The findings are delivered as a written assessment.
In this migration strategy, we migrate COBOL, PL/I, and Assembler applications to newer runtimes through automated conversion, manual recoding of business-critical applications, and replacement of mainframe-based APIs. The output is validated against the original system before cutover.
In this strategy, we migrate mainframe-based applications to AWS, Azure, or Google Cloud. The choice of cloud architecture is based on the complexity, coupling, and business criticality of the application portfolio.
In this strategy, we migrate RPG, CL, and COBOL applications running on the IBM AS/400 and its successor iSeries (IBM i) platforms. The migration involves code conversion, DB2 for i database migration, replacement of the job scheduler, and integration rewiring.
In this strategy, we migrate COBOL and FORTRAN applications running on the Unisys MCP and OS 2200 platforms. The migration addresses the file systems, transaction processing monitors, and job control languages specific to those platforms.
We move VSAM data sets, IMS hierarchical databases, and DB2 databases to modern relational and non-relational data stores such as PostgreSQL, Microsoft SQL Server, and Amazon Aurora.
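A core step in any VSAM-to-relational move is decoding fixed-width EBCDIC records into typed rows. The following is a minimal sketch, assuming a hypothetical 30-byte customer record layout (the field names, offsets, and code page 037 are illustrative assumptions, not a real client layout):

```python
# Parse one fixed-width VSAM record (EBCDIC) into a dict ready for a
# relational INSERT. Hypothetical layout: cust id PIC X(6),
# name PIC X(20), balance PIC 9(4) (unsigned zoned decimal).
RECORD_LAYOUT = [("cust_id", 0, 6), ("name", 6, 26), ("balance", 26, 30)]

def parse_vsam_record(raw: bytes) -> dict:
    text = raw.decode("cp037")  # EBCDIC code page 037 -> str
    row = {name: text[start:end].strip() for name, start, end in RECORD_LAYOUT}
    row["balance"] = int(row["balance"])  # unsigned zoned digits -> int
    return row

# Example record, encoded to EBCDIC for the demonstration.
record = "C00042Jane Example        0150".encode("cp037")
row = parse_vsam_record(record)
```

In a real engagement the layouts are generated from the COBOL copybooks, and packed-decimal (COMP-3) fields need dedicated unpacking rather than a plain text decode.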
Batch jobs are migrated to Apache Airflow, AWS Batch, Azure Batch, or Kubernetes-based batch schedulers, and the job logic is re-implemented on the target platform.
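The essence of this migration is mapping JCL's implicit step ordering onto an explicit dependency graph that a modern scheduler can execute. Here is a minimal stdlib sketch (the job and step names are hypothetical); in practice the same graph would be expressed as, for example, an Airflow DAG:

```python
from graphlib import TopologicalSorter

# Hypothetical nightly JCL job, flattened into named steps. Each step
# lists the steps that must complete before it runs, mirroring the
# original step sequence and condition-code dependencies.
steps = {
    "extract_vsam": set(),
    "transform_records": {"extract_vsam"},
    "load_postgres": {"transform_records"},
    "report": {"load_postgres"},
}

def run_order(dag: dict) -> list:
    """Return an execution order that honors the dependencies."""
    return list(TopologicalSorter(dag).static_order())

order = run_order(steps)
```

Making the dependencies explicit like this is also what lets independent steps run in parallel on the target scheduler, something the serial JCL step sequence could not express.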
The mainframe applications are modernized step by step by wrapping them in RESTful APIs, migrating specific batch jobs to distributed platforms, and transforming selected batch processes into event-driven workflows.
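The API-wrapping step amounts to a thin facade that maps HTTP routes onto legacy transactions. A minimal sketch, in which the route, the transaction code ACBL, and the stubbed connector call are all hypothetical:

```python
def call_mainframe(transaction: str, payload: dict) -> dict:
    # Stub: in a real deployment this would invoke the legacy
    # transaction through a connector (e.g. a transaction gateway or
    # message queue), not return a canned response.
    return {"transaction": transaction, "status": "OK", "echo": payload}

def handle_request(method: str, path: str, body: dict) -> tuple:
    """Thin REST facade: maps HTTP routes onto legacy transactions."""
    routes = {("POST", "/accounts/balance"): "ACBL"}
    txn = routes.get((method, path))
    if txn is None:
        return 404, {"error": "unknown route"}
    return 200, call_mainframe(txn, body)
```

The value of the facade is that downstream consumers integrate against stable HTTP routes, so the legacy transaction behind each route can later be swapped for a migrated service without touching the consumers.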
Monolithic COBOL applications are re-architected into microservices, the VSAM and IMS data stores are replaced with cloud-based databases, and event-driven communication is established among the migrated services.
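To illustrate the event-driven communication pattern, here is a minimal in-process sketch; in production this role is played by a broker such as Kafka, RabbitMQ, or a cloud queue, and the services and event names below are hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process publish/subscribe sketch."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for the topic.
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical example: the accounts service emits an event that a
# notifications service consumes, replacing a direct in-program CALL
# between what used to be two paragraphs of the same COBOL monolith.
bus = EventBus()
received = []
bus.subscribe("account.updated", received.append)
bus.publish("account.updated", {"account_id": "A-1", "balance": 150})
```

The point of the pattern is decoupling: the publisher does not know who consumes the event, so new services can subscribe without changes to the emitting service.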
Validation establishes functional equivalence between the source and target systems: output comparison tests, load tests, and regression tests are run for each cutover phase. For regulated industries, formal validation documentation is produced.
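The output comparison step can be as simple as a record-by-record diff of the legacy and migrated outputs for the same input batch. A minimal sketch of that check (the report shape is our own illustrative choice):

```python
def compare_outputs(legacy_lines, migrated_lines):
    """Record-by-record comparison for a cutover validation report.

    Returns (match_count, mismatches), where mismatches is a list of
    (record_number, legacy_value, migrated_value) tuples.
    """
    mismatches = []
    matches = 0
    for i, (a, b) in enumerate(zip(legacy_lines, migrated_lines), start=1):
        if a == b:
            matches += 1
        else:
            mismatches.append((i, a, b))
    # A record-count difference is itself a validation failure.
    if len(legacy_lines) != len(migrated_lines):
        mismatches.append(("record count", len(legacy_lines), len(migrated_lines)))
    return matches, mismatches
```

In practice the comparison must first normalize known, accepted differences (encoding, date formats, sort order) so that only genuine functional deviations are reported.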
We monitor the migrated system, resolve performance regressions, and provide ongoing DBA services — routine maintenance, backup verification, replication monitoring, and on-call incident support — on a retainer basis.
The business logic is locked inside large, undocumented COBOL applications with no test coverage. Programmers are reluctant to make changes because the side effects are unpredictable, and bug fixes take disproportionately long relative to the size of the changes.
The IBM pricing model is based on MIPS, so software costs can rise even while the workload stays the same, particularly after a hardware upgrade. Organizations on a sub-capacity pricing model are exposed whenever the workload fluctuates.
The business logic, batch runs, and data flows are known only to a small team of long-tenured individuals, and there is no documentation describing the system as it currently functions.
Each new system that needs to integrate with the mainframe requires a custom integration layer. The cost and time of building these integrations slow the adoption of new tools across the rest of the organization.
The traditional overnight batch processing window is no longer adequate now that data must be available in near-real time, and the architecture cannot be changed without reworking the batch layer.
Modern BI and reporting tools cannot access mainframe data stores programmatically, so data must be exported manually, whether from VSAM data sets or from an IMS database.
Here, the mainframe workload is rehosted on cloud or distributed infrastructure without any changes to the application code, using mainframe emulation software to run the existing COBOL and PL/I code on commodity hardware.
Here, the existing mainframe application code is rewritten and restructured to follow modern architectural patterns, replacing the procedural COBOL code with object-oriented code.
Here, part of the workload is migrated off the mainframe and part remains on it, with APIs and message queues integrating the two systems to minimize mainframe infrastructure costs while the platform is phased out.
It can take anywhere from three to six months for a small application or a small set of closely related applications. It can take one to two years for a mid-size portfolio of around a hundred mainframe applications. It can take two to four years for complete decommissioning of a large enterprise-class mainframe estate with hundreds of applications and decades of accumulated batch workloads.
As a rough cost estimate, at engineering rates prevalent in Eastern Europe, a small scope can range between $50,000 and $150,000, a mid-size scope can range between $150,000 and $500,000, and a large enterprise scope can be above $500,000.