ETZ Global

Case Study. Proof of Concept migration to AWS for Southco

07 December 2021

About Southco

Southco, Inc. is a leading global designer and manufacturer of engineered access solutions, from quality and performance to aesthetics and ergonomics.

For over 70 years, Southco has helped the world’s most recognized brands create value for their customers with innovative access solutions designed to enhance the touch points of their products in transportation and industrial applications, medical equipment, data centers and more.

With unrivalled engineering resources, innovative products and a dedicated global team, Southco delivers the broadest portfolio of premium access solutions available to equipment designers throughout the world.

The Challenge

Southco had been using SAP since 2007 in a single global instance with 13 languages and 26 company codes, supporting 1,500 SAP GUI users and 1,200 Internet Sales users.

The environment was under contract with a hosting partner that could no longer support the existing version beyond a particular date. Southco therefore considered a cloud-to-cloud migration to AWS, with an eye towards a subsequent S/4HANA brownfield conversion after the migration.

Southco needed to understand the potential effort required to convert their ECC system to S/4HANA. This required running SAP’s S/4HANA Readiness check on Southco’s production system, and planning indicated that this would best be performed on a PoC copy of the system that had been migrated to an AWS Virtual Private Cloud (VPC).

This project would prove three things:

1. The feasibility, method, and effort required to perform a cloud-to-cloud migration to AWS

2. The patching required to update the ECC system to a level where the SAP Readiness check tools could be applied

3. The flexibility and performance offered by AWS

Tim Watkins, Solutions Architect, ETZ Global

However, this PoC migration introduced several technical challenges, including:

  • Changing the endianness of the data (big-endian to little-endian) from the hosting partner’s architecture to the AWS-supported architecture, illustrated in the short sketch after this list
  • An aged SAP system that had not been patched or had support packs applied for several years
  • A very large Oracle database of approximately 5 TB in size
  • Several very large tables that required a splitting process during the database export
  • A slow Internet connection at the source data centre, which prevented transferring the database export files over the Internet to the target AWS environment
  • No access at all to the operating system on the source environment for our staff
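
As a brief aside for readers unfamiliar with the endianness issue in the first bullet above: the source architecture stored data big-endian, while the x86-based instances available in AWS are little-endian. The tiny Python sketch below is purely illustrative (it is not part of any migration tooling) and shows how the same 32-bit value is laid out differently under the two byte orders.

```python
import struct

value = 0x01020304  # an arbitrary 32-bit integer, for illustration only

big = struct.pack(">I", value)     # big-endian layout (source architecture)
little = struct.pack("<I", value)  # little-endian layout (x86-based AWS instances)

print(big.hex())     # -> 01020304 (most significant byte first)
print(little.hex())  # -> 04030201 (least significant byte first)
```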
 

The Approach and Methodology

 

Our approach was to utilize a combination of SAP and AWS tools and methods to perform the migration.

The database export

We decided that the best approach would be to use the SAP standard system copy methods to perform the migration, selecting the endian conversion option to ensure compatibility with the target architecture. As there were several very large tables, we would use a table-splitting method during the export. We also requested that a separately commissioned filesystem be attached directly to the database server, as opposed to an NFS share.

The copy to AWS

 

AWS Snowball

We decided to use an AWS Snowball device: a secure, rugged physical storage appliance for transferring large quantities of data into and out of AWS. The Snowball would be installed in the hosting partner’s data centre, allowing us to copy the large database export files to it. Once this was done, the device would be safely detached and couriered back to AWS, where the export files would be ingested into an Amazon S3 bucket.
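
As an illustration of what the copy step can look like, the sketch below pushes export files to an S3-compatible endpoint of the kind a Snowball device exposes on the local network, using Python and boto3. The endpoint address, credentials, bucket name and paths are hypothetical placeholders rather than the actual values or tooling used during the engagement.

```python
import os
import boto3

# Hypothetical values; the real endpoint and credentials come from the
# Snowball unlock/client process in the hosting partner's data centre.
snowball = boto3.client(
    "s3",
    endpoint_url="http://192.0.2.10:8080",   # local S3-compatible interface on the device
    aws_access_key_id="DEVICE_ACCESS_KEY",
    aws_secret_access_key="DEVICE_SECRET_KEY",
)

export_dir = "/sapexport"        # filesystem holding the database export files
bucket = "southco-poc-export"    # hypothetical bucket configured on the device

# Copy every export file onto the device; upload_file() handles multipart
# uploads for the very large dump files automatically.
for name in sorted(os.listdir(export_dir)):
    path = os.path.join(export_dir, name)
    if os.path.isfile(path):
        snowball.upload_file(path, bucket, f"export/{name}")
        print(f"copied {name}")
```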

The database import

In the target AWS VPC, we would create an EC2 instance with appropriately sized volumes to host the imported database and, as with the export, employ the SAP standard system copy methods to import the database into the target instance.
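
For context, provisioning such a target can be scripted; the sketch below launches an EC2 instance with additional EBS volumes via boto3. The AMI, instance type, subnet, device names and volume sizes are placeholders chosen for illustration, not the actual PoC configuration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is a placeholder

# Placeholder sizing: a root volume plus large gp3 volumes for the Oracle
# data files and the import dump area.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI suitable for SAP/Oracle
    InstanceType="r5.4xlarge",            # memory-optimised instance, illustrative only
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # private subnet in the target VPC
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 100, "VolumeType": "gp3"}},
        {"DeviceName": "/dev/sdf", "Ebs": {"VolumeSize": 6144, "VolumeType": "gp3"}},  # ~6 TB data volume
        {"DeviceName": "/dev/sdg", "Ebs": {"VolumeSize": 1024, "VolumeType": "gp3"}},  # import dump area
    ],
    TagSpecifications=[
        {"ResourceType": "instance",
         "Tags": [{"Key": "Name", "Value": "southco-poc-import"}]},
    ],
)
print(response["Instances"][0]["InstanceId"])
```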

The first corruption problem

Unfortunately, after completing the entire export and data transfer process with the Snowball device, which spanned several days, we discovered a problem during the import of the data files. At some stage in the process corruption had been introduced, and a successful import to the target was not possible.

The corruption could have been introduced at any one of the following stages of the lengthy and complicated transfer:

  1. Creating the database export with the SAP export tools
  2. Copying the export data to the AWS Snowball device
  3. Physically transferring the Snowball device to AWS
  4. Copying the data files from the Snowball device to the AWS S3 bucket
  5. Copying the data files from the AWS S3 bucket to the EBS volume attached to the target EC2 instance
  6. Importing the data with the SAP R3load import tools

Troubleshooting and solution search

Our approach in these scenarios is to identify common causes and eliminate the obvious. The potential causes of the corruption could be grouped into two categories:

  1. SAP tool compatibility, causing corruption during the export process
  2. File copies to and from the filesystems, Snowball device, and S3 bucket, introducing corruption during any one of these activities

SAP Tool Compatibility

We contacted SAP via a raised incident to get their advice. In these circumstances SAP always recommend using the latest versions of their tools (the kernel in this instance), specifically on the target system, as the tools are downwardly compatible. Due to kernel compatibility issues with the aged SAP application on the source, we were limited to using the latest patch available on the source system; however, SAP indicated that even this was no longer supported (and, according to them, the likely cause of the corruption issue) and that we would need to move to a later version.

Unfortunately, the later kernel version’s incompatibilities with the application layer rendered critical transactions unusable. Correcting this would require extensive fixes in the form of SAP Note corrections, which would then have to be transported across the customer landscape to production to ensure consistency. This was deemed impractical, as it would require a mini-project of its own and would unacceptably delay the project.

What was required, therefore, was a separate copy of the production system, used in isolation and only for the purposes of a system export. We could then upgrade the kernel to the level recommended specifically by SAP and apply any notes necessary to eliminate the incompatibilities between the new kernel and the application.

We could then perform another export, safe in the knowledge that we were using the tool versions specified by SAP themselves.

File Copies

To mitigate corruption being introduced during the file copies, we decided to copy the export files to the Snowball device twice, using two methods:

  1. A straight copy of each individual export file
  2. A ‘tar ball’ copy, combining all the export files into a single tar archive to be copied in one go

We would also convince a technical resource at the source hosting partner to go through the very onerous task of generating checksums for each individual file, so that we could compare the checksums once the files were received. This would allow us to determine whether any file had been altered (or ‘corrupted’) in transit.
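
A per-file checksum manifest of this kind can be produced with a short script; the sketch below uses SHA-256 via Python’s hashlib. The directory layout and manifest format are assumptions for illustration, not the exact procedure followed by the hosting partner.

```python
import hashlib
import os
import sys

def sha256sum(path, chunk_size=16 * 1024 * 1024):
    """Stream the file in chunks so multi-gigabyte export files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(export_dir, manifest_path):
    """Run on the source: record a checksum for every export file."""
    with open(manifest_path, "w") as out:
        for name in sorted(os.listdir(export_dir)):
            path = os.path.join(export_dir, name)
            if os.path.isfile(path):
                out.write(f"{sha256sum(path)}  {name}\n")

def verify_manifest(export_dir, manifest_path):
    """Run on the target: flag any file whose checksum no longer matches."""
    ok = True
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, name = line.rstrip("\n").split("  ", 1)
            if sha256sum(os.path.join(export_dir, name)) != expected:
                print(f"MISMATCH: {name}")
                ok = False
    return ok

if __name__ == "__main__":
    # e.g. python checksums.py write /sapexport manifest.txt
    #      python checksums.py verify /sapexport manifest.txt
    mode, export_dir, manifest_path = sys.argv[1:4]
    if mode == "write":
        write_manifest(export_dir, manifest_path)
    else:
        sys.exit(0 if verify_manifest(export_dir, manifest_path) else 1)
```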

The second corruption problem

Unfortunately, despite taking all the necessary precautions, using SAP-recommended tool versions, and spending several more days transporting the AWS Snowball device to and from the source data centre, we yet again encountered a corruption issue during the import.

Everything had been checked with a fine-toothed comb: the checksums on the target files were identical to those on the source files. We tried both copies that had been made to the Snowball device.

At this stage we were convinced that the corruption issue had not been caused by any of the file copies, but had instead been caused during the export of the data on the source system by the SAP tools themselves.

A plan of action

We could not prove it at that stage, but we had to come up with a plan of action that would allow us to troubleshoot and test imports at the source before transferring the export files to AWS. Troubleshooting various export/import methods whilst having to transport an AWS Snowball device back and forth, taking weeks at a time, would prove very difficult and extremely time-consuming, not to mention frustrating for us, the customer and everyone else involved in the effort.

Troubleshooting and testing imports at the source sounds much easier than it is in practice. We could not launch a server in the source data centre with the same operating system that would be used in AWS, as their architecture was limited to big-endian operating systems.

Another plan had to be made, and so we decided to use a different sort of Snowball device, the Snowball Edge. This is essentially a small AWS cloud environment on a transportable physical device. It allowed us to configure full test instances, with their own operating systems and databases, in advance, then ship the device to the source data centre and connect it to their network. Because it was ‘our’ device, albeit connected to the source hosting partner’s network, we had full and unencumbered access to all the instances installed on it. We could test and troubleshoot as often as necessary until we found a solution.
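
The Snowball Edge exposes EC2- and S3-compatible endpoints on the local network, so standard tooling can be pointed at the device itself. The sketch below is indicative only: the endpoint address, credentials, on-device image ID and instance type are assumptions about a typical device configuration, not a record of the exact steps used.

```python
import boto3

# Hypothetical local endpoint of the Snowball Edge EC2-compatible API.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://192.0.2.20:8243",
    region_name="snow",
    aws_access_key_id="DEVICE_ACCESS_KEY",
    aws_secret_access_key="DEVICE_SECRET_KEY",
    verify=False,  # the device presents a self-signed certificate in this sketch
)

# Launch a test instance from an image pre-loaded onto the device before shipping.
instance = ec2.run_instances(
    ImageId="s.ami-0123456789abcdef0",  # hypothetical on-device image ID
    InstanceType="sbe-c.xlarge",        # illustrative Snowball Edge compute instance type
    MinCount=1,
    MaxCount=1,
)
print(instance["Instances"][0]["InstanceId"])
```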

The solution

As is the case in many difficult technical scenarios, the solution turned out to be very simple. But it required left-field thinking and a counter-intuitive experiment before we stumbled on the answer.

We tried everything: different SAP tool versions on the source and target, different database client versions, exports with and without table splitting. Despite all our efforts, none of the test imports worked on the Snowball Edge instances in the source data centre.

A laboratory experiment

At this stage we were not sure whether the SAP tools worked at all with this set of source and target platforms. We set up a separate lab environment that closely matched the source data centre architecture, and installed clean SAP systems without any large volumes of customer data.

To our surprise, we had no issue with any of the exports at any version of the SAP tools, and no corruption was introduced at all. So we knew the tools worked, but why did they not work at the customer?

A left-field idea

Perhaps it was the combination of huge quantities of data and the endian conversion process that corrupted the exports, no matter which version of the SAP tools we used. We had to try an export without selecting a conversion process, but that meant we would have to test an import back onto the original (big-endian) source system. Using one of the instances we had created on the Snowball Edge device was not possible for this test, as these were little-endian systems.

Yet another export of this very large system was performed, this time making sure we did not select the endian conversion option.

And this is when we had a left-field idea.

Why not try importing the unconverted (big-endian) data to an instance on the AWS Snowball Edge device and see what happened? These were experimental systems that could be destroyed if needed, and we had some time on our hands while waiting for our counterpart at the source data centre to prepare the system for the import.

Success at last

We had nothing to lose; if it didn’t work, it didn’t work. We didn’t expect it to anyway.

But it did!

Tim Watkins, Solutions Architect, ETZ Global

How was it possible to successfully import data from a big-endian architecture to a little-endian one without a conversion process? It made little sense until we consulted SAP regarding our findings.

SAP implied that, despite the option being available during the export process, it should make no difference whether the endian conversion option is selected or not: the export is written in a raw format that can be imported regardless of endianness. However, when the option to change endianness is selected during an export of very large datasets, corruption does occur on a very consistent basis.

Armed with this knowledge, we promptly packed up the Snowball device with the export that did not include an endian conversion and shipped it back to AWS, with renewed enthusiasm to finally have a successful PoC copy created in AWS.

The import was a success. We were subsequently able to install the SAP S/4HANA Readiness check tools and prepare the system for an analysis of the effort required to convert it.
