
System Copy and Migration Observations


There are many blogs and documents available describing how to best migrate your SAP system to HANA. This isn't one of those.

 

What this is, on the other hand, is a few observations, and some lessons learned, when migrating an ERP system to new hardware using the R3load, aka Export/Import, method of system copy. The overall process is well-described in the official System Copy Guide and in numerous documents available on SCN, so I won't go into that detail here. What is not well-described, however, is how to go about choosing some of the parameters to be used during the export and import -- specifically, the number of parallel processes. First, however, let's address some background confusion prevalent among many customers.

 

 

Homogeneous or Heterogeneous?

One point that seems to come up, time and time again, in questions posted to SCN is about whether a homogeneous system copy is allowed in the case of a database or operating system upgrade.

 

The answer is yes.

 

If you are upgrading your operating system, for instance from Windows Server 2003 to Windows Server 2012 R2, you are not changing your operating system platform. Therefore, this remains a homogeneous system copy (yes, you should be using system copy as part of a Windows operating system upgrade, as an in-place upgrade of the OS is supported by neither Microsoft nor SAP if any non-Microsoft application (i.e., your SAP system) is installed, except in special circumstances which generally do not include production systems).

 

If you are upgrading your database platform, for instance from SQL Server 2005 to SQL Server 2012, you are not changing your database platform, and so, again, this is a homogeneous system copy. It is possible and acceptable to upgrade SQL Server in place, although you might consider following the same advice given for a Windows OS upgrade: export your SAP system (or take a backup of the database), then do a clean, fresh install of the OS and/or DBMS and use SWPM to re-import your database while reinstalling SAP.

 

You are only conducting a heterogeneous system copy if you are changing your operating system, your database platform, or both, e.g. from Unix to Windows or from Oracle to SQL Server. Or migrating to HANA.

 

  • Homogeneous: source and target platforms are the same (although perhaps on different releases).
  • Heterogeneous: source and target platforms are different.

 

Export/Import or Backup/Restore?

The next question that often arises is whether an Export/Import-based migration or Backup/Restore-based copy is preferred. These methods sometimes go by different names:

 

Export/Import is sometimes called R3load/Migration Monitor based or Database Independent (in the System Copy Guide). Because this method is not reliant on database-specific tools, it is the only method that can be used for heterogeneous copies. However, it can also be used for homogeneous copies.

 

Backup/Restore is sometimes called Detach/Attach, or Database Dependent (in the Guide), or even just Homogeneous System Copy (in the SWPM tool itself). This method relies heavily on database-specific tools and methods, and therefore it can only be used for homogeneous copies.

 

If you are performing a heterogeneous system copy, then you have no choice. You must use the Export/Import method. If you are performing a homogeneous system copy, you may choose either method, but there are some definite criteria you should consider in making that choice.

 

Generally speaking, for a homogeneous system copy, your life will be simpler (and the whole procedure may go faster) if you choose the Backup/Restore method. For a SQL Server-based ABAP system, for instance, you can make an online backup of your source database without having to shut down the SAP system, which means there is no downtime of the source system involved. Copy the backup file to your target system, restore it to a new database there, then run SWPM to complete the copy/install. This is great when cloning a system for test purposes. Of course, if the goal is to migrate the existing system to new hardware, then downtime is inevitable, and you certainly don't want changes made to the source system after the backup.

 

The Detach/Attach variant of this method is probably the fastest overall, as there is no export, import, backup, or restore to be performed. However, downtime is involved. You shut down the source SAP system, then use database tools (SQL Server Management Studio, for instance), to detach the database. Then you simply copy the database files to your target system, use database tools again to attach the database, then run SWPM on the target to complete the copy/install.

 

By comparison, the Export/Import method involves shutting down the source SAP system, then using SWPM to export the data to create an export image (which will likely be hundreds of files, but will also be considerably smaller than your original database), then using SWPM again on the target system to install SAP with the export image as a source. Lots of downtime on the source, and generally speaking a more complex process, but much less data to move across the network.

 

Obviously I am a big fan of using the Backup/Restore or Detach/Attach database-dependent method for homogeneous system copies, and in most cases, this is what I would advise you to choose.

 

When You Should Choose Export/Import

There is one glaring disadvantage to the Backup/Restore method, however. This method will make an exact copy of your database on your target system, warts and all. Most of the time, that isn't really an issue, but there are circumstances where you might really wish to reformat the structure of your database to take advantage of options that may not have been available when you originally installed your SAP system, or perhaps to make up for poor choices at the time of original install that you would now like to correct. Well, this is your big opportunity.

 

What are some of these new options?

  • Perhaps you are migrating to new hardware, with many more CPU cores than available on the old hardware, and you see this as a prime opportunity to expand your database across a larger number of files, redistributing the tables and indexes across these files, thus optimizing the I/O load. Backup/Restore will create a target database with the same number of files as the source, with the tables distributed exactly as they were before. You can add more files, but your tables will not be evenly redistributed across them. Export/Import, on the other hand, doesn't care about your original file layout, and gives the opportunity to choose an entirely new file layout during the import phase.
  • Perhaps you are upgrading your DBMS and would like to take advantage of new database compression options. Yes, you can run MSSCOMPRESS online after upgrading to a platform that supports it, but this can have long runtimes. SWPM will, however, automatically compress your database using the new defaults during the import, assuming your target DBMS supports these defaults, so you can achieve migration and compression in a single step. In my experience, compression added no noticeable extra time to the import.

 

Parallel Processing During Export and Import

At the beginning of the export and the import in the SWPM tool, there is a screen where you are asked to provide a Number of Parallel Jobs. The default number is 3. This parameter controls how many table packages can be simultaneously exported or imported, and obviously it can have a huge impact on overall runtime. The System Copy Guide does not give much in the way of advice about choosing an appropriate number, and other documentation is sparse on this topic. Searching around SCN will bring up some old discussion threads in which advice is given ranging from choosing 1 to 3 jobs per CPU, and so forth, but it is difficult to find any empirical data to back up this advice.

 

This is an area needing more experimentation, but I can share with you my own recent experience with this parameter.

 

Export on Old Hardware

I exported from two different QAS machines, both using essentially identical hardware: HP ProLiant DL385 Gen1 servers, each with two AMD Opteron 280 2.4 GHz Dual-Core CPUs (a total of 4 cores, no hyperthreading) and 5 GB of RAM, running Windows Server 2003 and SQL Server 2005. I think you can see why I wanted to get off these machines. The application is ERP 6.04 / NetWeaver 7.01 ABAP. The databases were spread across six drive volumes.

 

Export 1: 3 Parallel Processes on 4 Cores

The first export involved a 490 GB database, which SWPM split into 135 packages. I hadn't yet figured out what I could get away with in terms of modifying the number of export jobs involved, so I left the parameter at the default of 3. The export took 8 hours 25 minutes. However, the export package at the end was only 50.4 GB in size.

 

Export 2: 6 Parallel Processes on 4 Cores

By the time I got around to the second export I had learned a thing or two about configuring these jobs. This time the source database was 520 GB, and SWPM split it into 141 packages. I configured the export to use 6 processes. During the export I noted that CPU utilization was consistently 90-93%, so this was probably the maximum the system would handle. This time the export took 6 hours 28 minutes, a two-hour reduction. As most of the time was spent exporting a single very large table in a single process, thus not benefiting at all from parallelization, I probably could have reduced this time considerably more using advanced splitting options. The resulting export package was 57.6 GB in size.

 

Import on New Hardware

The target machines were not identical to each other, but in both cases the target OS/DBMS was Windows Server 2012 R2 and SQL Server 2012. Both databases would be spread across eight drive volumes instead of the previous six.

 

Import 1: 3, then 12, then 18 Parallel Processes on 12 Cores

The target of my first export, and thus first import, was an HP ProLiant BL460c Gen8 with two Intel Xeon E5-2630 v2 2.6 GHz six-core CPUs with hyperthreading and 64 GB of RAM. Yeah, now we're talking, baby! Twelve cores, twenty-four logical processors, in a device barely bigger than my laptop.

 

At the start of this import, I still didn't really have a handle on how to configure the parallel jobs, so as with the matching export, I left it at the default of 3. After all, the DEV system I had migrated earlier hadn't taken that long -- but the DEV system had a considerably smaller database.

 

Five hours into the import I realized only 60 of the 135 packages had completed, and some quick back-of-the-napkin calculations indicated this job wasn't going to be finished before Monday morning, when users were expecting to have a system. I did some research and some digging and figured it would be safe to configure one import job per core. However, I really didn't want to start over from scratch and waste the five hours already spent, so with a little more experimentation I found a way to modify the number of running jobs while the import was in process, with immediate effect. More on this in a bit.

 

So first I bumped the number of parallel jobs from 3 to 12, and immediately I saw that the future was rosier. I monitored resource usage for a while to gauge the impact, and I saw CPU utilization bouncing between 35% and 45% and memory utilization pegged at 46%. Not bad; it looked like we still had plenty of headroom, so I again bumped up the processes, from 12 to 18. The overall import took another impressive leap forward in speed, while CPU utilization rose only 2-3% more and memory utilization didn't change. It's entirely possible this machine could have handled many more processes, but I had seen an anecdotal recommendation that the parallel processes be capped at 20 (I'm not sure why, though there is some indication that much beyond this number the overall process may actually go slower -- then again, that may only be true for older hardware), and in any case all but one import package finished within minutes after making this change.

 

The final package took an additional three hours to import by itself. This was PPOIX, by far the largest table in my database at 170 GB (I have since talked to Payroll Accounting about some housecleaning measures they can incorporate), and thus without using table splitting options this becomes the critical path, the limiting factor in runtime. Still, I had gained some invaluable experience in optimizing my imports.

 

My new database, which had been 490 GB before export, was now 125 GB after import.

 

Import 2: 12 Parallel Processes on 8 Cores

The target of my second export, and thus second import, was also an HP ProLiant BL460c, but an older Gen6 with two Intel Xeon 5550 2.67 GHz quad-core CPUs with hyperthreading and 48 GB of RAM. Maybe not quite as impressive as the other machine, but still nice with eight cores, sixteen logical processors.

 

Based upon my experience running 18 processes on 12 cores, a 1.5:1 ratio, I started this import with 12 processes. I noted CPU utilization at 60-75% and memory utilization at 49%. Still some decent headroom, but I left it alone and let it run with the 12 processes. Despite seemingly matched CPU frequencies, the Gen6 really is not quite as fast as the Gen8, core for core, due to a number of factors that are not really the focus of this blog, and to this I attributed the higher CPU utilization with fewer processes.

 

This time, 140 of my 141 packages were completed in 2 hours 4 minutes. Again, PPOIX consumed a single import process for 6-1/2 hours by itself, in parallel with the rest of the import, and thus the overall import time was 6 hours 32 minutes. Next time I do this in a test system, I really will investigate table splitting across multiple packages, which conceivably could get the import time down to not much more than two, perhaps two and a half hours, or perhaps even much less should I be willing to bump up the process:core ratio to 2:1 or even 3:1.
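To get a feel for how much table splitting could help, here is a rough back-of-the-envelope sketch in Python. The package times are the ones from the import above (roughly 2 hours for the other 140 packages, PPOIX alone at about 6.5 hours); the segment counts and the assumption that split segments import in parallel at the same rate are hypothetical and ignore any splitting overhead.

```python
# Rough estimate of the overall import time if the largest table were split.
# Assumption: segments of a split table import in parallel at the same rate,
# and splitting overhead is ignored -- this is only a feasibility sketch.

other_packages_hours = 2.07   # ~2 hours 4 minutes for the other 140 packages
largest_table_hours = 6.5     # PPOIX in a single package

for segments in (1, 2, 3, 4, 6):
    per_segment = largest_table_hours / segments
    # The import ends when both the bulk of the packages and the slowest
    # segment of the big table are done (they run in parallel).
    total = max(other_packages_hours, per_segment)
    print(f"{segments} segment(s): ~{total:.1f} h overall import time")
```

With three or four segments the estimate lands right around the two to two-and-a-half hours mentioned above, which is why the single unsplit table is the critical path.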

 

The source database, 520 GB before export, became 135 GB after import on the target. Yeah, I'm quite liking this compression business.

 

Max Degree of Parallelism

In addition to adjusting the number of parallel jobs, I temporarily set the SQL Server parameter Max Degree of Parallelism (also known as MAXDOP) to 4. Normally it is recommended to keep MAXDOP at 1, unless you have a very large system, but as explained in Note 1054852 (Recommendations for migrations using Microsoft SQL Server), the import can benefit during the phase where secondary indexes are built with a higher level of parallelism. Just remember to set this back to 1 again when the import is complete and before starting regular operation of the new system.
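For reference, the MAXDOP change itself is a standard sp_configure call. The snippet below is a minimal sketch of how you might script the before/after change, assuming a Python environment with the pyodbc package and Windows authentication against the target instance; the server name is a placeholder.

```python
import pyodbc

# Placeholder connection -- adjust the server name and driver to your environment.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=TARGETHOST;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

def set_maxdop(value):
    # 'max degree of parallelism' is an advanced option, so advanced options
    # must be visible before it can be changed; RECONFIGURE activates the value.
    cur.execute("EXEC sp_configure 'show advanced options', 1")
    cur.execute("RECONFIGURE")
    cur.execute(f"EXEC sp_configure 'max degree of parallelism', {int(value)}")
    cur.execute("RECONFIGURE")

set_maxdop(4)   # before the import: allow parallel secondary index creation
# ... run the SWPM import ...
set_maxdop(1)   # after the import: back to the usual SAP recommendation
```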

 

Minimal Logging During Import

The other important factor for SQL Server-based imports is to temporarily set trace flag 610. This enables the minimal logging extensions for bulk load and can help avoid situations where even in Simple recovery mode the transaction log may be filled. For more details see Note 1241751 (SQL Server minimal logging extensions). Again, remember to remove the trace flag after the import is complete.
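Enabling and disabling the trace flag can likewise be scripted. Below is a minimal sketch calling sqlcmd from Python; the server name is a placeholder. Note that DBCC TRACEON with -1 enables the flag globally only until the next SQL Server restart (adding -T610 as a startup parameter would make it persistent, which is not what you want here anyway).

```python
import subprocess

SERVER = "TARGETHOST"  # placeholder -- your target database server

def run_sql(statement):
    # -E = trusted (Windows) authentication, -Q = run the query and exit
    subprocess.run(["sqlcmd", "-S", SERVER, "-E", "-Q", statement], check=True)

# Before the import: enable the minimal logging extensions globally (-1).
run_sql("DBCC TRACEON(610, -1)")

# ... run the SWPM import ...

# After the import: switch the trace flag off again.
run_sql("DBCC TRACEOFF(610, -1)")
```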

 

Adjusting Parallel Processes During Import

During Import 1 I mentioned that I adjusted the number of processes used from 3 to 12 and then to 18 without interrupting the import. How did I do that? There is a configuration file, import_monitor_cmd.properties, that SWPM creates from the parameters you enter at the beginning. The file can be found at C:\Program Files\sapinst_instdir\<software variant>\<release>\LM\COPY\MSS\SYSTEM\CENTRAL\AS-ABAP (your path may be slightly different depending upon the options you chose, but it should be fairly obvious). Within the properties file you will find the parameter jobNum. Simply edit this number and save the file. The change takes effect immediately.
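If you prefer not to edit the file by hand, the change boils down to rewriting a single key=value line. Here is a small sketch of such an edit in Python; the path is a placeholder for the actual import_monitor_cmd.properties location described above and will differ on your system.

```python
import re

# Placeholder: point this at the actual import_monitor_cmd.properties
# under your sapinst_instdir (see the path quoted above).
PROPS = r"C:\Program Files\sapinst_instdir\...\AS-ABAP\import_monitor_cmd.properties"

def set_job_num(path, new_jobs):
    """Rewrite the jobNum entry of the properties file in place."""
    with open(path, "r") as f:
        lines = f.readlines()
    with open(path, "w") as f:
        for line in lines:
            # Replace only the jobNum=<n> line; leave everything else untouched.
            if re.match(r"\s*jobNum\s*=", line):
                f.write(f"jobNum={new_jobs}\n")
            else:
                f.write(line)

# Example: raise the number of parallel import jobs to 18.
set_job_num(PROPS, 18)
```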

 

Conclusions

How many parallel processes to choose is not a cut-and-dried formula. Generally, it seems that a ratio of processes to cores between 1.5:1 and 3:1 should be safe, but this will depend on the speed and performance of your CPU cores and general system hardware. On the Gen1 processors, 1.5:1 pegged them to over 90% utilization. On the Gen8 processors, 1.5:1 didn't even break 50%, while the Gen6 fell somewhere in between. The only way to know is to test and observe on representative hardware.

 

There is also a memory footprint for each parallel process, but with anything resembling modern hardware it is far more likely you will be constrained by the number of CPU cores and not the gigabytes of RAM. Still, a number I have seen mentioned is no more than 1 process per 1/2 GB of RAM.
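Putting the two rules of thumb together, a first guess for the number of parallel jobs can be sketched as below. The 1.5:1 to 3:1 core ratio, the half-gigabyte-per-process figure, and the cap of 20 are the anecdotal heuristics discussed in this blog, not SAP guidance, so treat the result only as a starting value to be validated by monitoring.

```python
def suggest_parallel_jobs(cores, ram_gb, ratio=1.5, gb_per_process=0.5, cap=20):
    """First-guess number of parallel SWPM jobs from the heuristics above."""
    by_cpu = int(cores * ratio)            # 1.5:1 up to 3:1 processes per core
    by_ram = int(ram_gb / gb_per_process)  # at most one process per 0.5 GB RAM
    return max(1, min(by_cpu, by_ram, cap))

# The three machines from this blog, as a sanity check of the heuristic:
print(suggest_parallel_jobs(cores=4,  ram_gb=5))    # old Gen1 export host -> 6
print(suggest_parallel_jobs(cores=12, ram_gb=64))   # Gen8 import host -> 18
print(suggest_parallel_jobs(cores=8,  ram_gb=48))   # Gen6 import host -> 12
```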

 

I have seen a suggestion of a maximum of 20 processes, but the reasons for this suggestion are not clear to me, and I suspect this number could be higher with current hardware.

 

If you have one or more tables of significant size, it is worthwhile to use the package splitter tool (part of SWPM) to break them up into multiple packages so that they can benefit from parallelization.

 

Thanks for following along, and hopefully you will find the above useful. If you have your own experiences and observations to add, please do so in the comments.


SUM, SPAM/SAINT and the story about Support-Package Levels


You may have heard (or read in SAP Note 2039311) that SUM no longer allows "manually" increasing the Support Package (SP) level for central components like SAP_BASIS, SAP_ABA, SAP_APPL, SAP_HR, and SAP_BW during a SUM run. This blog explains the reason for this.

 

Situation

Products with several software components involve complex dependencies that have to be considered for any maintenance activity (applying SPs or SP stacks, implementing enhancement packages, or upgrades). The Maintenance Optimizer is the central tool in SAP Solution Manager for planning the maintenance: it considers the dependencies and offers only valid SP combinations. The result is the stack.xml, a kind of recipe for the Software Update Manager (SUM) to apply the changes to the system.

Until SUM 1.0 SP12, it was possible to increase the SP level for software components on a SUM dialog during the BIND_PATCH phase, thus in a way overruling parts of the stack.xml. With SUM 1.0 SP13, this option is no longer available for the central components listed in SAP Note 2039311 (the central note for SUM 1.0 SP13).

 

SUM or SPAM/SAINT

For applying only a few SPs, it was possible to use either SUM or SPAM/SAINT. SAP Note 1803986 provides a tool comparison with hints on when to use which tool. With SAP NetWeaver 7.4, there are some dedicated SP stacks that only SUM can apply to the system (see Note 1803986). This applies to SAP NetWeaver 7.40 Support Package 05 (SR1) and Support Package 08 (SR2). The reason is that these SPs include changes to the DDIC tools (not only DDIC content) that SUM can consider, but SPAM/SAINT cannot. This is accompanied by a new kernel version, so some software components are bound to specific kernel versions.

 

SUM being stricter now

SUM checks the kernel requirement provided by the stack.xml, calculates other dependencies, and prepares internal buffers. Later, SUM offers the dialog to adapt the SP levels for software components. If you were now to increase the SP level for a central component, you could end up in a situation where that SP level requires a newer kernel, and several SUM-internal calculations would be invalidated.

 

So how do I …

The Maintenance Optimizer remains the central point, and you will have to plan your maintenance activities with the desired SP-level for the central components from the start.

 

Good news

SUM 1.0 SP13 patch level 2 (available as of June 15, 2015) allows adapting the SP level for software component SAP_HR again, as this seemed to be the biggest hurdle. For the other central components, the behavior will not change. Components other than the listed ones are not affected.

 

Boris Rubarth

Product Management Software Logistics, SAP SE

DMO: background on table split mechanism


This blog explains the technical background of table split as part of the database migration option (DMO).

As a prerequisite, you should have read the introductory document about DMO (Database Migration Option (DMO) of SUM - Introduction) and the technical background in DMO: technical background.

 

During the migration of application tables, the migration of big tables might dominate the overall runtime. That is why SAPup considers table splitting to reduce the downtime of the DMO procedure. Table splitting shall prevent the situation in which all tables but a few have been migrated, and only a small portion of all R3load processes is still working on these remaining (big) tables. The other R3load processes would be idle (to be more precise: would not run), and the long-tail processing of the big tables would increase the downtime unnecessarily. See figure 1 below for a schematic view.

 

[Figure 1]

SAPup uses the following approach to define the tail: if the overall usage of R3load pairs (for export and import) drops below 90 %, SAPup handles all tables that are processed afterwards as being part of the tail (see figure 2 below).

 

[Figure 2]

During the configuration of the DMO procedure, you will configure a number of R3load processes, which determines the number of R3loads that may run in parallel. This explanation talks about R3load pairs that are either active or idle, which is rather a virtual view: once an R3load pair has executed one job, it does not wait in idle status, but ends, and SAPup may then start another R3load pair. Still, for the discussion of table splitting, we consider a fixed number of (potential) R3load pairs, each either active or idle. The following figure 3 illustrates this view.

 

[Figure 3]

Prerequisites

To follow this blog, you have to be familiar with the basics of DMO, and with the DMO R3load mechanism, as discussed in the SCN blogs Database Migration Option (DMO) of SUM – Introduction and DMO technical background.

 

Automatic table splitting

SAPup will automatically determine the table split conditions; there is neither a need nor a recommendation to influence the table splitting. Your task is to find the optimal number of R3load processes during a test run, and to provide the table duration files for the next run. (SAPup will use the table duration files to calculate the table splitting based on the real migration duration instead of the table size; see the DMO guide, section 2.2 “Performance Optimization: Table Migration Durations”.)

You may still want to learn more about the split logic, so this blog introduces some background on table splitting. Note that SAPup will not use R3ta for the table split.

 

Table split considerations

Typically, you will expect table splitting to happen for big tables only, but as we will see, the attempt to optimize the usage of all available (configured) R3load processes may result in splitting other tables as well. Still, splitting a table into too many pieces may result in bad export performance: many parallel, fragmented table segments will decrease read performance and increase the load on the database server. A table may be big, but as long as it has been completely processed before the tail processing starts, there is no reason to split it. That is why the tool calculates the minimum number of table splits needed to balance all requirements.

The logic comprises four steps: table size determination, table sequence shuffling, table split determination, and assignment to buckets. A detailed explanation of the steps follows below. During the migration execution, SAPup organizes tables and table segments in buckets, which are a kind of work package for an R3load pair to export and import. During the migration phase, each R3load pair will typically work on several buckets, one after the other.

 

Step 1: Sorting by table size

SAPup will determine the individual table sizes, and then sort all tables descending by size.

In case you provide the table duration file from a previous run in the download folder, SAPup will use the table migration duration instead of the table size.

 

[Figure 4]

Assuming we only had sixteen tables, figure 4 above shows the sorted table list. The table number indicates the respective initial position in the table list.

 

Step 2: Shuffle table sequence

Migrating the tables in sequence of their size is not optimal, so the table sequence is reordered (“shuffled”) to achieve a good mixture of bigger and smaller application tables. Figure 5 below tries to illustrate an example.

 

[Figure 5]

SAPup uses an internal algorithm to shuffle the table sequence, so that table sizes alternate between bigger and smaller.

 

Step 3: Table split determination

SAPup will now simulate table splitting, based on the number of configured R3load processes. Note that changing the number of configured R3load processes later during the migration phase will affect the runtime of the procedure.

For the simulation, SAPup works on “slots” that represent the R3load pairs, and distributes the tables from the shuffled table list into these slots. Note that these R3load “slots” are not identical to the buckets; SAPup will use buckets only at a later step. A slot is, in a way, the sum of all buckets that are processed by one R3load pair.

Initially, the simulation will assign one table from the shuffled table list into each slot until all slots are filled with one table. In an example with only eight R3load pairs, this means that after the first eight tables, all slots have a table assigned, as shown in figure 6 below.

 

[Figure 6]

In our example, SAPup has filled all slots with one table each, and the second slot from the top holds the smallest table, so it has the lowest degree of filling.

For all following assignments, SAPup will always assign the next table from the list to the slot with the lowest degree of filling. In our example, SAPup would assign the next table (T7) to the second slot from the top. After that, SAPup will most probably assign the next table (T9) to the first slot, see figure 7 below (sounds like Tetris, doesn’t it?).

 

[Figure 7]

Finally, SAPup has assigned all tables from the shuffled table list to the slots, as shown in figure 8 below. Note that the figures are not precise in reflecting the table sizes introduced in figures 4 and 5.

 

[Figure 8]

As the last part of this simulation run, SAPup will now determine which tables to split. The goal is to avoid a long tail, so SAPup determines the tail and splits all tables that are part of it.

SAPup determines the tail by the following calculation: SAPup sorts the slots by filling degree, and the tail begins at the point in time at which the usage of all R3load pairs is below 90%. All tables that are part of the tail – either completely or partially – are candidates for a split, as shown in figure 9 below. As an example, table T2 is shown as being part of the tail.

 

[Figure 9]

SAPup determines the number of segments into which a table will be split from the degree to which the table belongs to the tail: the portion of the table that does not belong to the tail serves as the scale for the table segments to be created. For the example of table T2, this may result in three segments T2/1, T2/2, and T2/3.

SAPup will now extend the shuffled table list by replacing the detected tables with their table segments. Figure 10 shows the example with three segments for table T2.

 

[Figure 10]

SAPup starts the next iteration of the simulation, based on the shuffled table list with table segments.

If the calculated tail is negligible (lower than a specific threshold) or if the third simulation has finished, SAPup will continue with step 4.
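To make the simulation logic more tangible, here is a small, simplified sketch of the greedy slot assignment and tail detection in Python. It is not SAPup's actual implementation: table sizes stand in for migration durations, the shuffle step is omitted, the table list and the number of pairs are hypothetical, and the 90% threshold is applied to the slot filling rather than to R3load usage over time.

```python
def assign_to_slots(table_sizes, num_pairs):
    """Greedy assignment: each table goes to the slot with the lowest filling."""
    slots = [[] for _ in range(num_pairs)]
    filling = [0.0] * num_pairs
    for name, size in table_sizes:
        i = filling.index(min(filling))   # slot with the least content so far
        slots[i].append((name, size))
        filling[i] += size
    return slots, filling

def tail_candidates(slots, filling, threshold=0.9):
    """Tables that extend beyond 90% of the fullest slot are split candidates."""
    tail_start = threshold * max(filling)
    candidates = []
    for slot in slots:
        processed = 0.0
        for name, size in slot:
            if processed + size > tail_start:   # table reaches into the tail
                candidates.append(name)
            processed += size
    return candidates

# Hypothetical shuffled table list (name, size in GB) and 4 R3load pairs:
tables = [("T1", 120), ("T8", 10), ("T2", 90), ("T12", 5),
          ("T3", 60), ("T10", 8), ("T4", 40), ("T16", 2)]
slots, filling = assign_to_slots(tables, num_pairs=4)
print(tail_candidates(slots, filling))   # tables to be split in the next iteration
```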

 

Step 4: Table and table segments assignment to buckets

The result of step 3 is a list of tables and table segments whose sequence does not correlate with the table size, and which was optimized to fill all R3load slots with a small tail. Now SAPup works with buckets (work packages for R3load pairs) instead of slots. This is a slightly different approach, but as the filling of the buckets uses the same table sequence as before, the assumption is that it produces the same result.

 

SAPup will assign the tables of this list to the buckets in the sequence of the list. The rules for this assignment are:

  1. A bucket will get another table or table segment assigned from the list as long as the bucket size is lower than 10 GB.
  2. If the next table or table segment is bigger than 10 GB, the current bucket is closed, and SAPup will assign the table or table segment to the next bucket.
  3. SAPup will put segments of a split table into different buckets – otherwise two table segments would reside in one bucket, which would neutralize the desired table split.

The first rule means that a bucket may contain more than 10 GB of table content: if a table of, for example, 30 GB was not selected for a split, the respective bucket will have this size. The second rule may result in a bucket that is only filled to a low degree, if the following table or table segment was bigger than 10 GB and was therefore put into the following bucket. The third rule means that, for example, for a table with four segments of 5 GB each, several buckets will have a size of 5 GB. Figure 11 below illustrates this with some examples.

 

[Figure 11]
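The three rules can also be expressed compactly in code. The following Python sketch is a simplified illustration, not SAPup's actual implementation: sizes are assumed to be known in GB, the item list is hypothetical, and "different buckets" for rule 3 is interpreted simply as "a segment never shares a bucket with a sibling segment".

```python
BUCKET_LIMIT_GB = 10

def fill_buckets(items):
    """items: list of (name, size_gb, parent_table or None for unsplit tables)."""
    buckets = []
    current, current_size, current_parents = [], 0.0, set()
    for name, size, parent in items:
        same_parent = parent is not None and parent in current_parents
        # Rule 2: a big item closes the current bucket and goes into the next one.
        # Rule 3: two segments of the same split table never share a bucket.
        if current and (size > BUCKET_LIMIT_GB or same_parent):
            buckets.append(current)
            current, current_size, current_parents = [], 0.0, set()
        current.append(name)
        current_size += size
        if parent is not None:
            current_parents.add(parent)
        # Rule 1: items are added only while the bucket is below the limit,
        # so a single large item may push a bucket well above 10 GB.
        if current_size >= BUCKET_LIMIT_GB:
            buckets.append(current)
            current, current_size, current_parents = [], 0.0, set()
    if current:
        buckets.append(current)
    return buckets

# Hypothetical list: one 30 GB unsplit table, a table split into 4 x 5 GB segments,
# and some small tables.
items = [("BIGTAB", 30, None), ("T2/1", 5, "T2"), ("SMALL1", 2, None),
         ("T2/2", 5, "T2"), ("T2/3", 5, "T2"), ("SMALL2", 1, None), ("T2/4", 5, "T2")]
print(fill_buckets(items))
```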

Now SAPup has defined the distribution of tables and table segments into buckets, which in turn are part of a bucket list. All this happens during the phase EU_CLONE_MIG_DT_PRP for the application tables (and during phase EU_CLONE_MIG_UT_PRP for the repository). Note that the DT or UT part of the phase name is no indication of whether the phase runs in uptime (UT) or downtime (DT): EU_CLONE_MIG_DT_PRP runs in uptime.

The migration of the application tables happens in downtime, during phase EU_CLONE_MIG_DT_RUN. During the migration phase, SAPup starts the R3load pairs and assigns the next bucket from the bucket list. As soon as an R3load pair is finished (and closes), SAPup starts another R3load pair and assigns the next bucket to this pair, as shown in figure 12 below.

 

[Figure 12]

Relevant log files are

  • EUMIGRATEDTPRP.LOG: tables to split, number of buckets, total size
  • EUMIGRATEDTRUN.LOG: summary of migration rate
  • MIGRATE_DT_RUN.LOG: details like R3load logs

 

Additional considerations

Typically, each R3load pair will execute more than one bucket. Exceptions may happen for small database sizes. As an example, for a total database size of 9992.3 MB and 20 R3load pairs (so 40 configured R3load processes), the tool would reduce the bucket size to put an equal load on all R3load pairs. The log will then contain a line such as “Decreasing bucket size from 10240 to 256 MB to make use of 160 processes”. Below you see the respective log entry in EUMIGRATEUTPRP.LOG:

 

1 ETQ399 Total size of tables/views is 9992.3 MB.

2 ETQ399 Decreasing bucket size from 10240 to 256 MB to make use of 20 processes.

1 ETQ000 ==================================================

1 ETQ399 Sorting 10801 tasks for descending sizes.

1 ETQ000 ==================================================

1 ETQ399 Distributing into 20 groups of size 500 MB and reshuffling tasks.

1 ETQ000 ==================================================

CTS+ or HTA?


You might have noticed that there are now two options to transport SAP HANA objects via ABAP: SAP HANA transport for ABAP and the enhanced Change and Transport System (CTS+).

 

If you do not know about these options, yet, please refer to the following documentation and presentations:

 

After having gone through these options, you might now ask yourself: Can I use SAP HANA transport for ABAP to transport my SAP HANA objects via CTS?

The answer is: yes, you can – but you should only do so for a special use case.

 

At first, please think about your SAP HANA systems. How are they set up?

Do you use SAP HANA systems in a stand-alone set-up? Then you should use CTS+ for transporting your SAP HANA objects (or native transports via SAP HANA application lifecycle management).

Details about these options are provided in here: http://help.sap.com/saphelp_hanaplatform/helpdata/en/88/f1de06b2be4239b71e3aed03e1a617/frameset

 

Do you use ABAP systems with an SAP HANA database as primary database? Then you should use the SAP HANA transport for ABAP.

 

But let’s have a closer look at the different types of SAP HANA applications that might exist on your systems:

  • Do you develop SAP HANA applications which are closely related to ABAP development objects or rely on them? Then SAP HANA transport for ABAP is the right choice. You can have the ABAP and SAP HANA objects in one transport request. You can assign the SAP HANA packages to the ABAP package that you already use for your ABAP development. Transaction SCTS_HTA offers both the synchronization (and, with it, the transport) of complete packages and of individual (changed) objects.
  • Do you develop native SAP HANA applications (using the SAP HANA repository) which do not have any relation to tables or views that exist on the ABAP side, but nevertheless run on the SAP HANA database of an existing ABAP system? Then you have two options: you can use CTS+ or SAP HANA transport for ABAP. Both options are valid:
    • CTS+ is a good option if you already work with CTS+ for other applications or come from a single SAP HANA system and now consolidate your systems. The developers working on the SAP HANA applications can continue working the way they did in the past; they might only have to get used to a new SID. With CTS+, you can use change recording, which is part of SAP HANA application lifecycle management. Each developer can work on their own changelists.
      From a configuration perspective, you don’t need a separate transport track in transaction STMS. You can re-use the existing ABAP landscape and just add the parameters required for CTS+.
    • HTA is a good option if you started with ABAP development, then moved on to some SAP HANA for ABAP applications, and now also want to create a native SAP HANA application. You can continue to use the transport mechanisms that you already know: SCTS_HTA for synchronizing the SAP HANA objects, SE09 for managing the transport requests. In this case, there is no need to configure change recording or CTS+. In fact, you should not enable change recording on the SAP HANA side (in SAP HANA application lifecycle management) if you want to use SAP HANA transport for ABAP to transport your modified SAP HANA objects. Keep in mind that with SAP HANA transport for ABAP, you can transport changed objects, but there is no way to find out who changed which object, whereas with change recording in HALM, changelists are user-specific. In addition, with SAP HANA transport for ABAP, you always transport the active version of an object that is currently stored in the repository. If you used change recording and HALM, you would transport the version of the object that is stored in the released changelist.

Decide on the option that suits you best and then stay with it and only use this transport option for your system landscape. Do not use several transport options for one development system. If you have to change the way you transport for one or the other reason, make sure that you do this in a safe way. This means that, if you move away from using change recording in SAP HANA application lifecycle management (HALM), always close all open changelists and transport them (via CTS+ or in native mode – whatever you were using). If you decide to stop using SAP HANA transport for ABAP, make sure that all objects are synchronized. In any case – for any switch of the transport mode – make sure that all transport requests are released and imported into all systems of your landscape.

 

And what if you decide to switch from CTS+ to HTA?

Follow these steps:

  1. Make sure that all of the systems that belong to the ABAP on HANA system landscape are at least on NW 7.40 SP11
  2. Clean up your CTS+ imports:
    • Make sure that no one creates new transport requests any more (by informing people and, to be safe on a technical level, by setting system-specific permissions for this CTS+ landscape so that only the administrators involved in the migration can still work on the transport requests for this landscape)
    • Release or delete all open transport requests in DEV
    • Import all transport requests into all systems of your landscape
  3. Delete the CTS+ configuration
  4. Make sure that the transport landscape for the ABAP systems where you would like to use HTA is set up – there is no additional configuration required for HTA
  5. Make sure that all people involved have the required permissions (mainly for transaction SCTS_HTA, as this is a new transaction needed for HTA)
  6. Now you can start using HTA

Consider the following before you switch:

  • Be aware that there is no continued import history. As there will be new SIDs in use, you will have one import history for the CTS+ times and another one for HTA.
  • Old transport requests (created during CTS+ usage) cannot be imported any more – even if you attach them manually to an import queue or use the same SID – because the deploy method used for HTA is different from the one used for CTS+

Leap Second 2015 occurs at the end of June.


Purpose:

 

On the 30th of June 2015, a leap second will be added. This could have an effect on some operating systems and result in high CPU usage. Leap seconds already occurred in 2012 and 2008, but no issues were observed.

 

For general information from an SAP point of view, please consult the notes below:

 

  • 1738172 - Linux: High Machine Load due to Leap Second
  • 1735719 - Leap seconds and the SAP system

 

Additional information:

From a Windows point of view, you can find more information in the Microsoft knowledge base:
https://support.microsoft.com/en-us/kb/909614

 

From a Red Hat Enterprise Linux point of view, you can find more information in the Red Hat knowledge base:
https://access.redhat.com/articles/15145

 

Do you have further questions?

Before creating an SAP incident, you can post a discussion thread in the Software Logistics discussion space.  Community members will provide you with the answers you need.

DMO: introducing the benchmarking tool


This blog introduces the benchmarking tool for checking the migration rate prior to a database migration option (DMO) run. As a prerequisite, you should read the introductory document about DMO: Database Migration Option (DMO) of SUM - Introduction

 

Tuning the DMO procedure to shorten the downtime is an important task. With Software Update Manager (SUM) 1.0 SP13, there is an additional, important feature, which now comes first in the list of means to reduce the technical downtime:

How to tune DMO downtime

  1. Before the DMO run: use the benchmarking tool to evaluate the migration rate for an existing system
  2. Before the DMO run: consider running SUM/DMO on an Additional Application Server (AAS, fka DI) instead of the PAS
  3. During the DMO test run: adapt the number of R3load processes to balance the performance of the SAP application server
  4. After a (successful) test run: provide the "table duration files" for the next DMO run to optimize the table split mechanism
  5. If downtime expectations are not met, consider using "downtime optimized DMO", see
    DMO: downtime optimization by migrating app tables during uptime (preview)

 

Scope of benchmarking tool

 

  • The benchmarking tool offers a fast check for possible migration speed prior to the DMO run
  • The source system may continue to run (uptime)
  • You can select specific tables, or use a specific percentage of all tables for the benchmarking run
  • You can benchmark the export from the source system only, or benchmark the export and the import to the SAP HANA DB

 

What is the benchmarking tool

 

  • The benchmarking tool is – ta-da – SAPup itself (as part of the Software Update Manager, SUM)
  • SAPup is triggering R3load for the export and import, like during the DMO run
  • Prerequisite is the download folder, containing the source and target kernel files, like for DMO
  • SAPup will not create a shadow repository, and it skips other phases as well;
    that is why the benchmarking run is so fast (and cannot be used for a real migration/system copy)

 

How to start the benchmarking tool

  • Prerequisite is that no DMO run is active, that means:
    if you have started a DMO before, you will have to reset the DMO run, clean up, and stop all SAPup processes
  • You start the benchmarking tool as the "migtool" option of SUM/DMO with a slightly different URL:
    https://<host>:1129/lmsl/migtool/<SID>/doc/sluigui
  • The SL UI will show dialogs to select the benchmarking options, and the number of R3load processes to be used
  • After the benchmarking run, a dialog will prompt you to analyze the log file to check the migration rate
  • It is not possible to use the benchmarking tool while using the "old DMO UI" (with URL suffix /doc/gui)

 

Things to keep in mind

  • Tuning: you should adapt the number of R3load processes for optimized usage of the application server performance (like in DMO), and then analyze the log file to check the migration rate
  • Naming: the R3loads for "UT" are used for the preparation (determine tables for export), the R3loads for "DT" are used for the export (and import, if selected), so UT and DT are no indication for uptime or downtime (concerning the configuration of R3load processes)
    Minor issue: up to SUM SP15, the number of UT R3loads is used for DT (so for the export) as well.
  • You will have to fulfill the same requirements for the source database software release as for DMO. If you use a lower database software version, you may get cryptic error messages
  • Uptime or downtime benchmarking: you may consider shutting down the SAP system to tune the optimum number of R3loads for the DMO downtime run
  • Documentation: the benchmarking tool is described in the DMO guide, section "Migration Tools"
  • Migtools: "Migration Tools" are a) the benchmarking tool, and b) the standalone table comparison option of SUM. The latter is used for classical migration, and described in the guide of the Software Provisioning Manager (SWPM)
  • Benchmarking the export: if you select the option to only export selected data from the source database, SAPup will trigger R3load with the option "discard", which will not create any files. This allows you to analyze the speed of the export from the source database without creating files, as the DMO run will not create export files either
  • Using the benchmarking tool to do a system copy is not supported by SAP
  • [Added on August 29th] For an Oracle database, you will have to provide the BRCONNECT tools in the download folder

 

Boris Rubarth

Product Manager, Software Logistics, SAP SE

DMO news with SUM 1.0 SP13


This blog introduces the news for the database migration option (DMO) with Software Update Manager (SUM) 1.0 SP13. As a prerequisite, you should read the introductory document about DMO: Database Migration Option (DMO) of SUM - Introduction

1. Benchmarking tool for testing migration performance

The benchmarking tool is a new SUM option that offers a kind of test migration, while the system may even remain in uptime. This way, you may get a first impression of the possible migration rate and identify bottlenecks. The benchmarking option is explained in the DMO guide, section 7.3, and in the following blog: DMO: introducing the benchmarking tool

2. Start release R/3 4.6C

SAP R/3 4.6C is now supported as a start release for DMO (but not R/3 4.7).

Note that the DMO guide contains an important note on manual partitioning of tables for this start release.

3. ROWID-based export for Oracle source database

For Oracle source databases only, the DMO procedure performs the export based on the ROWID, which accelerates the export. ROWID-based export currently does not work for cluster tables.

No configuration required, works out of the box.

 

4. Pipe mode for Windows OS

Until now, the pipe mode for the communication of the R3load pair on the server was restricted to non-Windows platforms; now pipe mode is used on Windows as well. See the SCN blog DMO: comparing pipe and file mode for R3load for details on pipe mode.

No configuration required, works out of the box.

 

5. Run of DMO/SUM on AAS

The execution of SUM (for DMO and non-DMO scenarios) on an Additional Application Server (AAS, fka DI) is supported for scenarios on ABAP systems. For DMO this offers the option to run DMO on an AAS with higher performance, instead of running it on the Primary Application Server (PAS, fka CI).

No specific considerations required, you just extract the SUM archive on the AAS.

 

6. New SL UI

DMO was the first SUM scenario to use a SAPUI5-based user interface; now the SL UI (Software Logistics User Interface) has been implemented, which will be the common UI for all SUM scenarios, as well as for other SL tools in the future. It uses a slightly different URL with the suffix /doc/sluigui (sounds like "Huey Lewis" to me).

Note that the usage of the SL UI is now possible for maintenance scenarios with SUM SP13 as well (AS ABAP or AS Java – not dual stack), but the default UI is still the SDT GUI / Java-based UI.

7. Table duration files location

An important tuning mechanism of DMO is the (hopefully well-known) usage of table duration files. SAPup creates these files after a DMO run; they include the table migration durations, and they can be fed back to the next DMO run to improve the table split. With SUM SP13, these are XML documents (instead of LST files). You can simply put the files into the download folder, and SAPup will consider them, so there is no need to configure the SAPup_add.par file with the location of the duration files any more.

DMO: table comparison and migration tools


This blog discusses the table comparison and the migration tools of Software Update Manager (SUM) and database migration option (DMO). As a prerequisite, you should read the introduction document about DMO: Database Migration Option (DMO) of SUM - Introduction

 

Explanation of terms

 

SUM and DMO

Software Update Manager (SUM) has several use cases, one of these is database migration option (DMO). DMO combines the update/upgrade of the SAP software with the database migration to SAP HANA database.

 

Note: DMO is not a tool, it is one of several use cases for SUM.

 

DMO compares number of table rows

DMO will in any case compare the number of rows for each table before and after the migration - we call this count*.

 

DMO may compare content of table rows

As part of the DMO migration, you can optionally switch on the table comparison of DMO. This will - in addition to count* - compare the content of table rows before and after the migration. Comparing the table content will extend the downtime, so it is not recommended to enable table comparison for a DMO run on a productive system.

Note: You may consider enabling the table comparison in a productive run as well, e.g. for certain tables only, if it is required to prove that the content is identical.

 

SUM may compare content of table rows for classical migration

The SUM offers the new use case "table comparison standalone" that can be used as an optional step for a classical migration.

The classical migration is the heterogeneous system copy offered by the Software Provisioning Manager (SWPM). So for the classical migration as such, SUM is not used: for a migration to SAP HANA, you can decide either to go for the classical migration using SWPM, or to go for DMO using SUM.

For more information on this topic, see Migration of SAP Systems to SAP HANA.

 

In former times, you could only use the table checker tool to ensure that all tables were completely copied as part of a classical migration (for more information, see SAP Note 2009651). Now, you also have the option to use the table comparison for the classical migration, for which you use SUM in a specific mode (called migtool, explained below). This is called table comparison standalone – standalone because it is not part of a DMO run.

 

 

Introducing the migtools of SUM

 

The Migration Tools (migtools) in this context are simply two SUM use cases that are bundled separately, and which are neither part of a standard SUM maintenance nor of a standard DMO migration.

 

Migtools of SUM are the benchmarking tool and the table comparison standalone.

 

Neither option is used as part of a DMO run itself, but the benchmarking should be done before starting the DMO run.


Note: the migtools are not really tools, but use cases for SUM.

Note: For classical migrations using the Software Provisioning Manager, there are also migration tools (such as Migration Monitor, splitter tools – and Table Checker, as mentioned above), which you must not confuse with the migtools of SUM. For more information about the migration tools of the classical migration, see SAP Note 784118.

 

 

"Compare" <table comparison with DMO> versus <table comparison standalone>

 

Now let us compare the two compare options - still with me?

 

The comparison below maps each aspect to the two options (table comparison of DMO vs. table comparison standalone):

Use case
  • Table comparison of DMO: table comparison during the migration run using DMO of SUM; the dialog "Database Migration Option" offers to enable and configure the table comparison
  • Table comparison standalone: table comparison during a classical migration run using SWPM; SUM has to be started separately as migtool

Focus
  • Same for both: tables for which the content shall be compared – either you list specific tables, or a percentage of all tables

Integration
  • Table comparison of DMO: yes, the option is part of DMO
  • Table comparison standalone: no, not part of DMO; manual execution required for use in a classical migration

Drill down possible
  • Table comparison of DMO: yes, if a difference appears, a drill-down is done to identify the row with the different content
  • Table comparison standalone: no, only the tables that show a difference are listed

Documentation
  • Table comparison of DMO: part of the DMO guide (section 7.2)
  • Table comparison standalone: part of the System Copy Guide (section "Table Comparison with Software Update Manager")

 

Note: both use cases do not export the table content to disk. A checksum is created for each table, and these checksums are persisted.

 

Note: there may be scenarios for which only the "old" table checker tool (described in SAP Note 2009651) can be used, e.g. when access to the source database is no longer possible to generate CRC files with SUM, but the export files with TOC are still available.


Configuration of export CTS system in HALM and tracing possible issues.


Very often I see that users have issues configuring an export CTS system in HALM. There is actually a very good guide on how to configure HANA for CTS. Nevertheless, it is still one of the most frequently reported issues. I prepared a short video which could help you configure an export CTS system and trace the problems (if any). Please take a look.

 

 

Please be sure that the requirements listed in Note 2097341 are fulfilled, especially regarding roles.

Please also note that the alias for the Export Web Service endpoint should be specified starting with a slash (e.g. /001/export_cts_ws).

 

 

In case of issues when registering a CTS system, you can try a very simple tool that helps you diagnose the outgoing HTTP destination. The tool itself is attached to the mentioned note. Instructions on how to use the tool are also attached.

 

 

References

[1] Change and Transport System

[2] How To Configure SAP HANA for CTS in SAP HANA S... | SCN

ADFS 2.0 Configuration for SAP HANA Cloud Platform


This post contains a step-by-step guide on how to configure Active Directory Federation Services (AD FS) 2.0 with SAP HANA Cloud Platform (HCP).

Overview

The following steps are required to enable AD FS as SAML Identity Provider for an HCP account:

  1. In HCP: Establish trust to AD FS, configure AD FS as Trusted Identity Provider for your HCP account
  2. In AD FS: Establish trust to HCP, configure HCP as Relying Party in your AD FS

Note: When adding the metadata of Identity/Service Provider, you need to select SHA-1 as Signature Algorithm (Secure hash algorithm).

 

In HCP: Establish Trust to AD FS

Step 1: Export SAML Identity Provider (AD FS) Federation Metadata

 

We need to get the ADFS 2.0 federation metadata which is accessible on the following URL:

https://<ADFS2.0 Server Host>/FederationMetadata/2007-06/FederationMetadata.xml

 

(In some cases, you need to be on the ADFS 2.0 Server Host, to access the federation metadata).

This page will list the content of the xml file.
Download the file (ctrl + S or File -> Save) as xml.

 

 

Step 2: Import the AD FS Federation Metadata into your HCP account

 

Open your HCP account cockpit at https://account.hana.ondemand.com/, then

  1. From the left list menu, navigate to Trust
  2. In the center of the page, navigate to Trusted Identity Provider
  3. Then click on Add Trusted Identity Provider:

 

[Screenshot: Add Trusted Identity Provider]

   4. Here we upload Federation Metadata by clicking Browse, and navigate to the FederationMetadata.xml file on our host (as downloaded in Step 1).
  Once we select the file, it will automatically fill in all required fields.

 

 

Step 3: Create a Default Group Assignment

 

   5. Then go to the Groups tab at the top, where we add a default group, which will be assigned to each and every user (we use this to make sure AD FS users can access the applications).

[Screenshot: Upload metadata and select the Groups tab]

As the predefined HCP group “Everyone” holds the basic permissions to be assigned for the applications we would like to access, we assign it as default group to the users authenticated via AD FS:

   6. Click Add Default Group to add a default group.

   7. From the dropdown, select the default group “Everyone”

   8. Press Save in the bottom right corner, to finally save the Trusted Identity Provider.

 

[Screenshot: Add “Everyone” as default group and save]

In AD FS: Establish Trust to HCP

 

Step 1: Export Service Provider (HCP account) Metadata

 

  1. Go to your HCP Account, navigate to Trust
  2. Select Local Service Provider in the center of the page. Usually it is selected by default.
  3. Click Get Metadata and download the xml file. Some browsers might download the file automatically when you click on the link.

[Screenshot: Get Metadata]

Step 2: Import Service Provider (HCP account) Metadata (HCP) into your AD FS

 

  1. Open AD FS 2.0 Management and in the left menu navigate to AD FS 2.0
  2. Then Trust Relationships
  3. Then Relying Party Trusts
  4. On the right actions column menu, press Add Relying Party Trust…

 

[Screenshot: Add Relying Party Trust]

 

The “Add Relying Party Trust” Wizard will guide through the process:

Begin with the Start button and on the second screen (Select Data Source),

  1. Select Import data about the relying party from a file,
  2. Then press Browse to select the HCP Metadata file (as downloaded in Step 1),
  3. Then Next.

[Screenshot: Import relying party metadata from a file]

 

   4. On the next dialog, “Specify Display Name”, enter a name for the Relying Party Trust (it is just a display name in the list), then Next.

   5. On the Choose Issuance Authorization Rules screen, we select Permit all users to access this relying party and then Next -> Next -> Close.

 

 

Step 3: Create Claim Rule to define the mapping of user ID from AD to HCP

 

When closing the “Add Relying Party Trust” wizard, the “Edit claim rules” wizard will be opened.

If not, you can right click on Relying Party Trust -> Edit claim rules to start it.

 

  1. In the “Edit Claim Rules” window, we go to “Issuance Transform Rules” tab on top
  2. Then Add Rule… .

[Screenshot: Add Rule]

To define the rule type in the “Add Transform Claim Rule” Wizard, from the dropdown Claim rule template, select Send LDAP Attributes as Claims, then Next.

Then, to specify the rule:

  1. Add the Claim rule name (e.g. “SAN to NameID”),
  2. for Attribute store select Active Directory from the dropdown,
  3. then map LDAP Attribute SAM-Account-Name
  4. to Outgoing Claim Type Name ID,
  5. and press Finish.

[Screenshot: Claim rule definition]

 

Step 4: Change Secure hash algorithm

To change the Secure hash algorithm, right-click on the Relying Party Trust -> Properties and then:

  1. Navigate to Advanced tab
  2. Then change the Secure hash algorithm to SHA-1
  3. Then OK

[Screenshot: Change Secure hash algorithm]

 

Now you should be able to log in using your AD FS users.

SL Toolset 1.0 SPS 14: improved Software Logistics Tools


This blog describes the new and improved tools in the SL Toolset 1.0 with SPS 14.
    You should be familiar with the concept of the Software Logistics Toolset 1.0 ("SL Toolset"), see
      The Delivery Channel for Software Logistics Tools: "Software Logistics Toolset 1.0"

 

 

Overview on tools delivered with SL Toolset 1.0 SPS 14

 

Availability: SL Toolset 1.0 SPS 14 has been available since September 15, 2015.

 

What's in:

  • compared with SPS 13, no new tool joined the SL Toolset 1.0
  • existing tools are improved and updated: some tools are delivered in a new SP, some without (when only minor fixes were done)
  • Most of the tools offer a feedback form to provide both statistical data and individual feedback

sl_toolset_sps14_tool_overview.jpg

Further information on the SL Toolset SPS 14:

  • SAP Note 2079036 (Release Note for SL Toolset 1.0 SPS 14; logon required)
  • Quick link /sltoolset on SAP Service Marketplace (logon required)
  • Idea Space for the Software Logistics Toolset and its tools

 

 

"nZDM for SAP NetWeaver Java" 1.0 SP 14

 

Offering

  • implement Support Packages and patches for SAP Java-stack systems with minimal technical downtime
  • Target Products:
    • SAP Enterprise Portal 7.02, 7.3x, 7.4
    • SAP Business Process Management and SAP Process Orchestration
      releases 7.3 incl. EHPs, and 7.4 (see SAP Note 2039886; logon required)

Changes with SL Toolset SPS 14

  • nZDM for SAP PO and SAP BPM are generally available
  • Added support for SAP HANA as an nZDM-supported database
  • An nZDM command line interface is now offered

More information

 

"nZDM for SAP Process Integration" 1.0 SP 08

 

Offering

  • implement Support Packages for SAP Process Integration (SAP PI) with a minimal technical downtime of approx. 30-60 minutes
  • Target Products: SAP PI dual stack 7.10, 7.11, 7.30, 7.31

 

Changes with SL Toolset SPS 14

  • No changes

 

More information

 

 

Software Provisioning Manager 1.0 SP 09

 

Offering

Software provisioning manager provides the latest SAPinst version, enabling provisioning processes for several products and releases on all supported platforms. You get support for the latest products, versions, and platforms, including the latest fixes in the tool and in the supported processes, and you benefit from a unified process for different product versions.

 

Changes with SL Toolset SPS 14

  • Further improvements concerning up-to-date installation, where Software Provisioning Manager now comprises:
    • Option to install further languages as part of the installation
    • Option to apply SAP Notes required by Software Update Manager
  • Option to extract kernel from existing system and use this kernel for installation and system copy
  • SAP Web Dispatcher installation adapted (prompted/set configuration parameters + profile)
  • Option for parallel execution of size determination (R3szchk) of source data during export
    • Now also supported for systems based on SAP NetWeaver 7.0x
  • Option to decluster and depool tables now supported for installation and R3load-based system copy on all database platforms (including SAP MaxDB)
  • Dual-stack split procedure adapted: new option to persist Message Server port configuration from original SCS instance, dual-stack split procedure now also supported for SAP ASE

 

More information

 

 

Software Update Manager 1.0 SP 14

 

Offering

  • Consolidation of different software logistics tools into one unified software logistics tool
  • Runtime reduction: Higher degree of parallelization for certain phase types
  • Downtime reduction: Enhanced Shadow System capabilities for specific use cases
  • Combine SAP system update with migration to SAP HANA (DMO: database migration option)

 

Changes with SL Toolset SPS 14

  • AS ABAP, AS Java: mandatory usage of new SL UI user interface (SAPUI5 based)
  • AS ABAP: buffer import of customer transport requests
  • DMO: support for SAP ASE & MS SQL as target database (available on request)
  • DMO: support for Multi Database Containers (MDC) on SAP HANA DB
  • DMO: support for non-Unicode SAP SCM systems
  • DMO: improved monitoring for R3load usage, and test cycle option for quick migration repeat

 

More information

 

 

Standalone Task Manager for Lifecycle Management Automation 1.0 SP 01


Offering

Standalone task manager for lifecycle management automation is a framework to execute the automated configuration templates listed below:

  • SSL configuration templates validate the SSL configuration settings for both ABAP and Java environments and generate HTML reports that can be used for further analysis. They also perform the SSL configuration automatically and describe the required manual tasks (SAP Note 1891360)
  • SAP ERP <-> SAP CRM: template to establish connectivity between an SAP ERP system and SAP CRM
  • Mobile Configuration templates for Backend, Gateway and SUP (SAP Note 1891358)
  • HANA user management and SLT (System Landscape Transformation) configuration (SAP Note 1891393)

 

Changes with SL Toolset SPS 14

  • No changes


More information

 

 

SAPSetup 9.0


Offering

SAPSetup offers easy and reliable functionality for installations of different scales:

  • Installation of frontend products without administrator permissions
  • Remote installations from Administration PC
  • Configuration and export of installation packages containing multiple products
  • Consistency check
  • Central log file analysis

 

Changes with SL Toolset SPS 14

  • SAPSetup with the latest corrections as outlined in the SAP Notes below; for example, the remote installation has been re-designed

 

Further information:

 

 

CTS Plug-In 2.0 SP 15


Offering

  • Generic CTS to connect your non-ABAP applications with CTS
  • New user interfaces and new features for CTS
  • Central Change and Transport System (cCTS) as technical infrastructure for Change Request Management (ChaRM) and Quality Gate Management (QGM) in SAP Solution Manager 7.1 SPS 10 and higher

Changes with SL Toolset SPS 14

  • Improvements for CTS Plug-In are no longer delivered with the SL Toolset, but will come with the respective SAP NetWeaver support packages; for more information, see SAP Note 1665940

More information

 

AddOn Installation Tool and Support Package Manager

 

Offering

SPAM/SAINT provides easy access to lifecycle management processes by being part of the SAP NetWeaver AS ABAP stack and by being accessible directly via SAP GUI.  This way you are able to control different kinds of implementation processes, such as installing, upgrading or updating ABAP software components. SPAM/SAINT Updates themselves can be applied to ABAP-based systems independent of underlying SAP NetWeaver component versions.

 

Changes with SL Toolset SPS 14

  • No changes

 

More information

 

 

Scenario "Up-to-date Installation"

Up-To-Date Installation refers to an enhanced process for the installation of a new system at any chosen stack level. The improved process enables complete planning in Maintenance Planner, which generates a consolidated stack XML and archives that are later consumed by Software Provisioning Manager and SUM.

See the following blog for more details: Up-To-Date Installation

Know more about Maintenance Planner: Maintenance Planner – The Next Generation Experience for Landscape Maintenance with SAP Solution Manager

 

 

 

Boris Rubarth

Product Management SAP SE, Software Logistics

TMSADM problems


When setting up STMS or running report TMS_UPDATE_PWD_OF_TMSADM, you can run into problems with user TMSADM in client 000 of the domain controller. Before changing the TMSADM password, it is recommended to check the document Changing the TMSADM password to make sure all the required notes are applied.

 

SAP Note 1568362 has a report attached to check the TMSADM user. This report identifies any issues with TMSADM before you execute report TMS_UPDATE_PWD_OF_TMSADM or configure STMS.

 

This is an example of the output of the report:

tmsadm.PNG

If one of the checks returns a red light, you will have problems during the execution of the TMSADM report or the STMS configuration.

 

Please leave us your feedback and also let us know if you run into any issue when using the tool.

SL Toolset 1.0 SPS 15: improved Software Logistics Tools


This blog describes the new and improved tools in the SL Toolset 1.0 with SPS 15.
    You should be familiar with the concept of the Software Logistics Toolset 1.0 ("SL Toolset"), see
      The Delivery Channel for Software Logistics Tools: "Software Logistics Toolset 1.0"

 

 

Overview on tools delivered with SL Toolset 1.0 SPS 15

 

Availability: SL Toolset 1.0 SPS 15 has been available since October 20, 2015.

 

What's in:

  • compared with SPS 14, no new tool joined the SL Toolset 1.0
  • compared with SPS 14, only SUM 1.0 was delivered with a new SP-level


sl_toolset_sps15_tool_overview.jpg

Further information on the SL Toolset SPS 15:

  • SAP Note 2156059 (Release Note for SL Toolset 1.0 SPS 15; logon required)
  • Quick link /sltoolset on SAP Service Marketplace (logon required)
  • Idea Space for the Software Logistics Toolset and its tools

 

 

 

Software Update Manager 1.0 SP 15

 

Offering

  • Consolidation of different software logistics tools into one unified software logistics tool
  • Runtime reduction: Higher degree of parallelization for certain phase types
  • Downtime reduction: Enhanced Shadow System capabilities for specific use cases
  • Combine SAP system update with migration to SAP HANA (DMO: database migration option)

 

Changes with SL Toolset SPS 15

  • Support for scenarios with target system based on SAP NetWeaver 7.5

Note

  • For all other scenarios, SUM 1.0 SP 14 has to be used
  • SUM 1.0 SP 16 will support all scenarios and will again be the single SUM version (expected in Q1 2016)

More information

 

 

For all other tools, check the details on the SL Toolset SPS 14 blog:

SL Toolset 1.0 SPS 14: improved Software Logistics Tools

 

 

Boris Rubarth

Product Management SAP SE, Software Logistics

SUM 1.0 SP 14 gets a buddy called SP 15


SUM 1.0 SP 14 and SP 15

 

Recently, we announced the availability of SL Toolset 1.0 SPS 14, which contains Software Update Manager (SUM) 1.0 SP 14 (see SL Toolset 1.0 SPS 14: improved Software Logistics Tools). Now we offer a supplement: for an intermediate period, SUM 1.0 SP 15 accompanies SUM 1.0 SP 14, and both have their own use cases:

  • SUM 1.0 SP 14 for all use cases available until now
  • SUM 1.0 SP 15 for all use cases targeting a system based on SAP NetWeaver 7.5

As the buddies know each other, you will get notified if you start the wrong tool for your scenario. Support and fixes will be provided for both versions.

 

SUM SP 15 is required e.g. for

  • Update or Upgrade to systems based on SAP NetWeaver 7.5 (e.g. from SAP ECC 6.0)
  • System Conversion to SAP S/4HANA on premise (*)

(*) according to current planning, disclaimer applies

 

SUM 1.0 SP 16 will combine all use cases back into one tool next year.

SUM_SP14_15_16.png

 

SL Toolset 1.0 SPS 15: successor for SPS 14

 

As you know, the SUM is part of the Software Logistics Toolset (SL Toolset) which we deliver approximately three times a year with a new SP-stack. It typically offers updated tools, which is reflected in a new SP-level for the tool, and a new SPS-level for the SL Toolset 1.0.

But we will not offer two versions of the SL Toolset in parallel: SL Toolset 1.0 SPS 15 follows SPS 14. The SL Toolset is a delivery channel (a bundle rather than a real archive), and SL Toolset 1.0 SPS 15 now offers SUM 1.0 SP 15 (see SL Toolset 1.0 SPS 15: improved Software Logistics Tools).

Therefore, with SL Toolset 1.0 SPS 15, we have a new situation: the SL Toolset offers an update only for the SUM, and the other tools remain on their SP-levels. This is an intermediate situation until we offer SL Toolset 1.0 SPS 16 in Q1 2016 (current planning), with all tools on new SP-levels.

 

 

 

Boris Rubarth,
Product Manager Software Logistics, SAP SE

System Conversion to SAP S/4HANA: SUM is the tool


This blog discusses the technical aspects of a System Conversion:
the transition of an existing SAP Business Suite system to SAP S/4HANA on premise.
The content will be extended step by step.


S/4HANA is a new product line

  • SAP S/4HANA is not a successor of SAP Business Suite, it is a new product line
  • This is why we do not talk about migration to S/4HANA (for a source like SAP ECC 6.0)
  • The terminology is Transition, and there are three options
    • New installation
    • System Conversion
    • Landscape Transformation
  • SAP S/4HANA is available as on-premise or cloud edition
  • The SCN blog from Frank Wagner is an excellent starting point:
    The road to SAP S/4HANA: the different transition paths

 

S4HANA_family_and_paths.png

 

System Conversion: SUM is the tool for the transition to SAP S/4HANA on premise

  • System Conversion is the process of transitioning an existing SAP ECC system to SAP S/4HANA on Premise
  • Software Update Manager (SUM) is the tool for this transition: either with or without DMO
    • If the source is already on SAP HANA DB, it is SUM w/o DMO
    • If the source is not yet on SAP HANA DB, it is SUM w/  DMO
  • This blog covers the System Conversion to SAP S/4HANA on premise only


SUM 1.0 SP15 for system conversion

  • SUM SP15 is required for all scenarios targeting SAP NetWeaver 7.5 based systems
  • SUM SP15 is available since October 2015
  • SUM SP14 is available since September 2015
  • Both SUM SP14 and SP15 are offered in parallel until SUM SP16 is available (expected in Q1 2016)
  • SUM SP16 will then again be the unique tool for all scenarios, including 7.5 based systems


SUM_is_the_tool.png


One-step or two: depends on Unicode (UC)

  • General rule:
    • If the source system is already on UC, the transition is a one-step procedure
    • If the source system is not yet on UC, the transition is a two-step procedure
  • Exception
    • For SAP R/3 4.7 and SAP ECC 5.0 Unicode systems: these releases do not include the Customer-Vendor-integration (with Business Partners) that is required for SAP S/4HANA, so a two-step approach is required
  • Yes, in general DMO may include the UC Conversion - but not for target NW 7.5:
    • NW 7.5 is only UC: no non-UC kernel (and no non-UC export load) available
    • DMO builds up a shadow system for the source system on the new release
    • For non-UC systems, this shadow system has to be non-UC -> not possible for NW 7.5
  • Recommended path for non-UC source systems is discussed below
  • Even if one-step is possible, for business reasons customers may decide to have several steps, as discussed by Frank Wagner (see above)


Important:
for all target systems based on SAP NetWeaver 7.5 (including possible upcoming SAP Business Suite 7 versions), the source system has to be on Unicode already.


source_is_UC.png
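The decision logic above can also be written down as a small sketch. This is purely illustrative and simplified, since it only encodes the Unicode and release rules discussed in this blog; the function name and inputs are mine, not part of any SAP tool:

def conversion_path(source_release, is_unicode):
    """Simplified illustration of the one-step vs. two-step rule for a
    System Conversion to SAP S/4HANA on premise (NW 7.5 based target)."""
    # SAP R/3 4.7 and SAP ECC 5.0 lack the required Customer-Vendor integration,
    # so they always need a two-step approach, even if already on Unicode.
    if source_release in ("R/3 4.7", "ECC 5.0"):
        return "two-step: first to SAP ECC 6.0 EHP 7, then to SAP S/4HANA"
    if not is_unicode:
        # NW 7.5 is Unicode-only, so the Unicode conversion must happen in a
        # separate first step (recommended path discussed further below).
        return "two-step: convert to Unicode first, then to SAP S/4HANA"
    return "one-step: SUM (with DMO if the source is not yet on SAP HANA)"

print(conversion_path("ECC 6.0 EHP 6", is_unicode=False))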


System conversion is more than a technical SUM procedure

  • Apart from the SUM run (with or without DMO), the procedure includes
    • Maintenance planning with Maintenance Planner (instead of MOpz):

    -  includes checks for AddOns, software components, activated business functions

    -  allows the download of stack.xml and software files like MOpz

    -  Introduction to Maintenance Planner in SCN: http://scn.sap.com/docs/DOC-65243

    • Preparation checks for system, based on SAP Notes
    • Code check
    • post activities during downtime (like reports) to migrate the data from old to new data model
  • The preparation and the data migration require the involvement of colleagues from the business side


SAP S/4HANA Simple Finance 1503 is a special member of SAP S/4HANA family

  • SAP Simple Finance on premise edition 1503 (fka sFIN 2.0) is a part of the SAP S/4HANA family
  • Still "1503" is slightly different from a technical perspective
    • it is based on NW 7.4 (not 7.5)
    • Usage of Maintenance Planner is not mandatory (but recommended)
    • Transition to "1503" with SUM/DMO including Unicode Conversion is possible
  • Note that "sFIN" is not the official name
  • Note that SAP S/4HANA Simple Finance 1602 will be based on NW 7.5, so source has to be Unicode!


Recommended path for two-step approach: go to 6.17 oH

  • As discussed above, a two-step approach is required: for non-UC systems and for R/3 4.7 & SAP ECC 5.0
  • Recommendation:
    • first step to SAP ECC 6.0 EHP 7 on HANA (6.17 oH) with SUM w/ DMO
    • second step later with SUM (w/o DMO) to SAP S/4HANA 1511 ff.
  • Exception: DMO not supported for SAP R/3 4.7:
    • use SUM (w/o DMO) to 6.17 on sourceDB
    • later SUM w/ DMO to SAP S/4HANA
  • Alternative for SAP ECC 6.0 EHP 0...7 on anyDB:
    • You may only do the Unicode Conversion without update & migration (stay on software level)
    • Benefit: no change of business processes, so project effort may be reduced
    • Disadvantage: you need additional hardware to support a parallel DB export/import to minimize the Unicode conversion downtime, which would not be required for SUM w/ DMO
  • Discussion:
    • Targeting 6.17 oH as the first step is a handy general rule, although exceptions exist
    • Targeting EHP 8 is not possible, because it is based on 7.50 => requires a UC source system

 

source_is_nonUC.png

 

Best regards, Boris Rubarth

Product Manager Software Update Manager, SAP SE


Latest SAP Process Integration systems are no longer dual-stack


SAP removed the optional dual stack as a deployment option quite some time ago – but so far only for optional dual-stack setups, while there were still some exceptions for mandatory dual-stack setups even in higher releases, such as SAP NetWeaver 7.4. There, SAP Process Integration (SAP PI) still is (and will remain) a mandatory dual-stack system, and the system provisioning procedures, such as installation and system copy, offer corresponding options to handle those dual-stack systems.

 

With SAP NetWeaver 7.5, this now changes: as of this release, SAP PI is no longer a dual-stack system either, so no dual-stack systems are supported in this release – without exception.

 

This has implications for how you install SAP PI systems with release 7.5 or higher and for how you upgrade to such releases, as outlined below.

 

Installation of SAP PI 7.5 and higher

As a consequence, compared to previous releases, the standard installation of SAP PI 7.5 and higher no longer installs a dual-stack system, but consists of two installation procedures: one for a separate ABAP stack and one for a separate Java stack (besides the option to install the Advanced Adapter Engine Extended, which is based on AS Java only - for more information about SAP Process Integration release recommendations, see SAP Note 1515223). In detail:

  • First, you perform the installation of ‘Application Server ABAP for SAP Process Integration’, as offered by Software Provisioning Manager: there, also Java users for the Application Server Java (AS Java) for the SAP PI system are created and the ABAP system is prepared to get connected to the AS Java system.
    AS_ABAP.jpg

  • Second, you perform the installation of ‘Application Server Java for SAP Process Integration’, also as offered by Software Provisioning Manager: there, the ‘Application Server Java for SAP Process Integration’ system uses the User Management Engine (UME) of the ‘Application Server ABAP for SAP Process Integration‘ system that you must have installed before (as outlined in the previous step).
    AS_Java.jpg

 

Upgrade to SAP PI 7.5 SP1

After upgrading to SAP PI 7.5 SP1, you first have to split the still existing dual-stack system before you can use it productively – for this, the standard dual-stack split procedure, which has existed for quite some time, now also supports SAP PI 7.5 SP1 and higher.

 

For more information about the dual-stack split procedure, see the Dual-Stack Split page in SAP Community Network and the dual-stack split guide available at http://service.sap.com/sltoolset - Software Logistics Toolset 1.0 - Documentation - Software Provisioning - Dual-Stack Split: Systems Based on SAP NW 7.1 and Higher.

 

 

Outlook at SAP Solution Manager 7.2

As you may know, SAP Solution Manager 7.0 and 7.1 systems are also mandatory dual-stack systems. With SAP Solution Manager 7.2, this is planned to change as well. For these systems, too, we plan to extend the support of our reliable dual-stack split procedure. Expect more information as soon as the ramp-up of SAP Solution Manager 7.2 has started.

SAP TechEd 2015 Las Vegas Session Replay: SAP Notes and ABAP Add-Ons – What‘s New?


This session covers key enhancements related to the SAP Notes service and ABAP add-ons. Get introduced to transport-based correction instructions, a new approach for delivering ABAP code that bridges the gap between support packages and SAP Notes. Learn how to search for notes to solve a specific issue based on issue replication with the automated note search tool. Learn how to de-install an ABAP add-on and how the SAP Add-On Assembly Kit software has been enhanced to enable this for our partners.

 

Click here to watch the session replay!!

Optimizing DMO Performance


When migrating an existing SAP system to the SAP HANA database using SUM with database migration option (DMO), several ways exist to optimize the performance and reduce the downtime.

 

This blog covers the topics benchmarking, optimization, and analysis step by step, so you should read and follow the steps in sequence.

 

The following graphic gives you an overview of the migration process and will be used below to visualize the performance optimization options.

 

 

 


 

Optimizing the standard DMO performance

 

Preparation steps

 

The DMO uses tables from the nametab. Therefore it is recommended to clean up the nametab before starting a DMO run. Proceed as follows:

 

1.) Start transaction DB02 (Tables and Indexes Monitor) and choose “Missing tables and Indexes”

 

2.) Resolve any detected inconsistencies

 


If you do not perform this step, the DMO run may stop with warnings in the roadmap step “Preparation”.

 

 

Benchmarking Tool

 

Before you start a complete DMO test run, we highly recommend using the benchmarking tool to evaluate the migration rate for your system, to find the optimal number of R3load processes and to optimize the table splitting.

 

Start the benchmarking mode with one of the following addresses:

 

http://<hostname>:1128/lmsl/migtool/<sid>/doc/sluigui

or

https://<hostname>:1129/lmsl/migtool/<sid>/doc/sluigui


This opens the following dialog box:
  Benchmarking Tool.jpg

 

Benchmark Export

 

Use this option when you want to simulate the export of data from the source system.

Proceed as follows in the dialog box:

 

1.) Select the option “Benchmark migration”

 

2.) Select the option “Benchmark export (discarding data)”.
     This selection will run a benchmark of the data export and discard the data read from the source database (source DB).


     Note:

a) Always start with the benchmark of the export to test and optimize the performance of your source DB.

Since almost the complete content of the source DB needs to be migrated to the SAP HANA database, additional load is generated on the source DB, which differs from the usual database load of a productive SAP system.
This is essential for the performance of the DMO process: on the one hand, part of the data is already transferred during uptime, while users are still active on the system; on the other hand, the largest part of the data is transferred during downtime. Therefore you have to optimize your source DB both for the concurrent read access during uptime, to minimize the effect on active business users, and for the massive data transfers during downtime, to minimize the migration time.

 

 

b) Always start with a small amount of data for your first benchmarking run.
This will avoid extraordinarily long runtimes and allow you to perform several iterations.
The idea behind this is that performance bottlenecks on the source DB can already be found with a short test run, while more iterations are useful to verify the positive effects of source DB configuration changes on the migration performance.
However, too short runtimes should also be avoided, since the R3load processes and the database need some time at the beginning to produce stable transfer rates.
We recommend about 100 GB or less than 10% of the source database size for the first run.
The ideal runtime of this export benchmark is about 1 hour.


Benchmarking Parameters.jpg

 

3.) Select the option “Operate on all tables” and define the sample size as a percentage of the source database size, as well as the size of the largest table in the sample as a percentage of the source database size.

 

4.) Also select “Enable Migration Repetition Option”.
This option enables you to simply repeat the migration benchmark without changing the set of tables. This is especially useful for finding the optimal number of R3load processes for the migration.

 

5.) Define a high number of R3load processes in your first test iteration to get enough packages from the table splitting, so that you can play around with the number of parallel running R3load processes later on. For detailed information on the table splitting mechanism, see the blog DMO: background on table split mechanism

Use 10 times the number of CPU cores available on the SUM host (usually the Primary Application Server) as the number of R3load processes here.
The R3loads for "UPTIME" are used for the preparation (determining the tables for export), and the R3loads for "DOWNTIME" are used for the export (and import, if selected), so UPTIME and DOWNTIME are no indication of actual uptime or downtime (concerning the configuration of R3load processes).

Parallel Process Configuration.jpg

 

 

 

6.) Directly before starting the roadmap step “Execution”, in which the actual data migration will take place, reduce the R3load processes to 2 times the number of CPU cores available on the SUM host.
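As a quick illustration of the sizing rules in steps 5 and 6 (10 times the CPU cores for the first table-splitting run, 2 times the cores when entering the Execution step), here is a minimal sketch; the core count is just an assumed example value:

# Assumed example: 16 CPU cores on the host where SUM runs (usually the PAS).
cpu_cores_on_sum_host = 16

# First benchmark iteration: a high number to get enough table-split packages.
initial_r3load_processes = 10 * cpu_cores_on_sum_host   # -> 160

# Directly before the roadmap step "Execution": start low, then ramp up.
execution_start_r3loads = 2 * cpu_cores_on_sum_host     # -> 32

print("table-splitting run:", initial_r3load_processes, "R3load processes")
print("start of Execution:", execution_start_r3loads,
      "R3load processes, increased stepwise while CPU/network stay below ~80-90%")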

You can change the SUM process parameters during the run by means of the DMO utilities:

 

 

 

 


 

 

7.) Start the roadmap step “Execution”.
While monitoring your network traffic and CPU load, raise the number of R3load processes step by step, always waiting 10 to 15 seconds until they are started.
When either the CPU load or the network traffic reaches 80% to 90%, you have found the optimal number of R3load processes for this system landscape.

 

8.) If you repeat the benchmarking run, avoid database caching.
This can either be realized by flushing the cache or by restarting the database.

 

If you want to change the table set, finish the current benchmarking run and start the test from the beginning. To avoid database caching, you can also select bigger tables that exceed the database cache.

 

 

Benchmark Export + Import

 

Use this option when you want to simulate the export of data from the source system and the import of data into the target system.

After you have executed at least one export benchmark, you can continue with benchmarking the migration export and import in combination. In this way you can find out if your target SAP HANA database is already running at peak performance or if it needs to be optimized for the mass import of migrated data.
The behavior of this combined benchmark is very similar to a real migration run, since the exported data is really imported into the target HANA database. Only after a manual confirmation at the end of the migration benchmark is the temporarily created database schema dropped from the target HANA database.

Proceed as follows in the dialog box:

 

1.) Select the option “Benchmark migration”

 

2.) Select the option “Benchmark export and import”

 

 

 

 

Automatically optimize Table Splitting

 

1.) Perform a benchmark migration of the whole database to generate a durations file, which contains the migration runtimes of the most significant tables.

Configuration_complete_run.jpg

 

Set the percentage of the DB size as well as the size of the largest tables to 100% and enable the “Migration Repetition Option”.
On the process configuration screen, input the optimal number of R3load processes, identified beforehand.

 
2.) Repeat the migration phase to run the full migration benchmark again.
This time the benchmarking tool makes use of the durations file from the first full run to automatically optimize the table splitting, which should result in a shorter overall migration runtime.




 

 

 

 

Analysis

 

After a complete migration run, you can analyze the migrated data volume and the migration speed.
The SUM creates a summary at the end of the file ../SUM/abap/log/EUMIGRATERUN*.LOG:

 

Total import time: 234:30:20, maximum run time: 2:31:41.

Total export time: 222:31:49, maximum run time: 2:31:42.

Average exp/imp/total load: 82.0/87.0/168.9 of 220 processes.

Summary (export+import): time elapsed 2:41:40, total size 786155 MB, 81.05 MB/sec (291.77 GB/hour).

Date & Time: 20150803161808 

Upgrade phase “EU_CLONE_RUN” completed successfully (“20150803161808”)

 

In this example
- 220 R3load processes have been used (110 Export, 110 Import)
- the downtime migration phase took 2 hours 41 minutes
- total migration data volume was: 786155 MB (786 GB)
- migration speed was: 81 MB/s (291 GB/h)
- the migration phase ended without issues: “completed successfully”

 

In general, a good migration speed is above 300 GB per hour.
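If you want to cross-check such a summary yourself, the rate can be recomputed from the elapsed time and total size with a few lines; the sketch below simply re-does the arithmetic for the example log above (it does not parse the log file):

# Values taken from the EUMIGRATERUN summary above.
total_size_mb = 786155                  # "total size 786155 MB"
hours, minutes, seconds = 2, 41, 40     # "time elapsed 2:41:40"

elapsed_seconds = hours * 3600 + minutes * 60 + seconds
mb_per_second = total_size_mb / elapsed_seconds
gb_per_hour = mb_per_second * 3600 / 1000   # the SUM summary uses 1 GB = 1000 MB

print(round(mb_per_second, 2), "MB/s =", round(gb_per_hour, 2), "GB/hour")
# -> about 81 MB/s and 292 GB/hour, matching the summary above.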

 

 

R3load Utilization

 

In the DMO Utilities, analyze the R3load utilization after a migration run.


1.) Open the DMO utilities and navigate to “DMO Migration Post Analysis -> Charts”.

 

2.) Select the file “MIGRATE_*PROC*”

 

3.) Check for a long tail at the end of the migration, in which only a small number of R3loads still process remaining tables.

 

 

For a definition of this tail and examples for a long and a short tail, see the blog

DMO: background on table split mechanism

If such a long tail is found, analyze the durations file to find out which tables cause it.

 

 

Durations file

 

1.) Open the file SUM/abap/htdoc/MIGRAT*_DUR.XML with a browser to get a graphical representation of the runtimes of the migrated tables.

 

2.) Look for long-running tables at the end of the migration phase.

 

 

In this example, the table RFBLG has a very long runtime. It is running from the beginning of the migration phase until the end.

 

 

R3load logs

 

Analyze the R3load logs to identify the origin of performance bottlenecks of long-running tables.

 

1.) Open the R3load log summary file SUM/abap/log/MIGRATE_RUN*.LOG

 

2.) Search for the problematic tables

 

3.) Analyze the R3load runtimes to identify the origin of the performance bottlenecks.


You will find R3load statistics of the time spent in total (wall time), in CPU user mode (usr), and in kernel system calls (sys).
There are separate statistics available for the database and memory pipe of the exporting R3load (_EXP) and the importing R3load (_IMP).

 

#!---- MASKING file “MIGRATE_00009_RFBLG_EXP.LOG”

(STAT) DATABASE times: 1162.329/4.248/0.992 93.6%/36.9%/47.6% real/usr/sys.

(STAT) PIPE    times: 79.490/7.252/1.092 6.4%/63.1%/52.4% real/usr/sys.

 

#!---- MASKING file “MIGRATE_00009_RFBLG_IMP.LOG”

(STAT) DATABASE times: 702.479/213.625/4.896 56.6%/96.6%/86.3% real/usr/sys.

(STAT) PIPE    times: 539.445/7.620/0.780 43.4%/3.4%/13.7% real/usr/sys.

 

In this example the exporting R3load spent 1162 seconds on the source DB reading data.
79 seconds were required to copy the data to the memory pipe.
The importing R3load spent 702 seconds on the target SAP HANA DB writing the data, and it spent 539 seconds on the memory pipe waiting for data.

 

Conclusion: In this example the source DB was the bottleneck, because the importing R3load was waiting for data on the pipe most of the time.
In this case you should ask the administrator of the source DB for a performance analysis of this table.
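If you have to do this comparison for many tables, the "real" seconds from the (STAT) lines can also be extracted programmatically. The following is only a rough sketch, assuming the (STAT) lines look exactly like the excerpt above; the bottleneck heuristic is mine and not an official SUM check:

import re

def stat_real_times(log_text):
    """Extract the 'real' seconds from the DATABASE and PIPE (STAT) lines."""
    times = {}
    for kind in ("DATABASE", "PIPE"):
        # e.g. "(STAT) DATABASE times: 1162.329/4.248/0.992 ..."
        match = re.search(r"\(STAT\)\s+%s\s+times:\s+([\d.]+)/" % kind, log_text)
        if match:
            times[kind] = float(match.group(1))
    return times

exp = stat_real_times(open("MIGRATE_00009_RFBLG_EXP.LOG").read())  # exporting R3load
imp = stat_real_times(open("MIGRATE_00009_RFBLG_IMP.LOG").read())  # importing R3load

# If the importer mostly waits on the pipe while the exporter mostly sits on the
# database, the source DB is the likely bottleneck for this table (and vice versa).
if imp["PIPE"] > imp["DATABASE"] * 0.5 and exp["DATABASE"] > exp["PIPE"]:
    print("source DB looks like the bottleneck for this table")
elif exp["PIPE"] > exp["DATABASE"] * 0.5 and imp["DATABASE"] > imp["PIPE"]:
    print("target DB looks like the bottleneck for this table")
else:
    print("no clear single bottleneck from the (STAT) times")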

 

 

 

Extended Analysis

 

If you still experience low migration speeds, an extended analysis of the following factors during a migration run might help to find bottlenecks:

 

CPU Usage

As already mentioned in the R3load log analysis example, the R3loads usually wait for the database most of the time, while the actual processing of the data only takes a small amount of time.
Therefore the R3load processes should not use more than 90% of the CPU time on the application server. If this is the case, either reduce the number of R3load processes or, if feasible, equip the server on which SUM is running (usually the application server) with more CPUs.

 

 


 

Memory Usage

Analogous to the CPU usage on the server where SUM is running, enough main memory should be available for the R3load processing.
Otherwise the operating system will apply paging mechanisms that significantly slow down the migration performance.
The minimum memory usage of a single R3load process during the migration of a standard table is about 60 MB.
Especially when declustering is necessary (for target releases 7.40 and higher), the memory required by R3load is very content dependent.
Therefore it makes sense to monitor the actual memory usage during a complete test migration run to determine the optimal memory configuration.
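A rough back-of-the-envelope estimate can help with the sizing before you measure. The numbers below are only a sketch: the 60 MB minimum comes from above, while the declustering headroom is an assumed placeholder that you should replace with your own measured values:

# Rough memory estimate for the R3load processes on the SUM host.
r3load_processes = 160          # export + import processes running on this host
min_mb_per_process = 60         # minimum per R3load for a standard table (see above)
decluster_headroom_mb = 140     # assumed extra headroom; very content dependent!

estimated_gb = r3load_processes * (min_mb_per_process + decluster_headroom_mb) / 1024
print("plan for roughly", round(estimated_gb), "GB of free memory for R3load,")
print("and verify the real usage by monitoring a complete test migration run")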

 

 


 

Disk I/O

The performance of export and import operations on the source and target DB depends on good disk input/output (I/O) performance. Therefore it might be necessary to postpone activities that create heavy disk I/O (such as backup jobs) during the migration run.

Sometimes it is not obvious which activities create disk I/O and have a negative impact on the DMO migration performance.
In this case it might be useful to actively monitor the disk I/O during a test migration to pinpoint the timeframe of problematic activities.

 

 

 

 

Network

The network can also be a bottleneck, so it is recommended to monitor the throughput of the different network connections (from PAS to source DB, and from PAS to target SAP HANA DB) during a migration run.
Theoretically, this should not be a major issue with modern LAN networks: even a 1 Gbit LAN delivers an expected transfer rate of ~100 MB/s. A low throughput can therefore be an indicator of an unfavorable setup for the migration (e.g. data flow through two firewalls).
Also consider whether parallel migrations of other systems, or other activities that use network bandwidth, are planned.

 

 

 

 

Remove the bottlenecks

 

Depending on the results of your analysis there may be various ways to deal with the bottlenecks found.
If a more powerful machine is required for the R3load processes, it might be an option to run the SUM on a powerful Additional Application Server (AAS) instance with free resources.
In general, SUM and SUM with DMO may be executed not only on the Primary Application Server (PAS), but also on an Additional Application Server (AAS). However, running SUM with DMO on an AAS is only supported if your system has a separate ASCS instance.

It might even be possible to use the SAP HANA Master Node for this purpose, especially if the network connection to the SAP HANA database is the bottleneck.

 

 

Housekeeping

 

Especially when performing an SAP BW migration, the positive impact of housekeeping tasks like cleaning up the persistent staging area (PSA), deleting aggregation tables, and compressing InfoCubes should not be underestimated.

 

For details regarding the SAP BW migration using DMO see the document:
SAP First Guidance - Using the new DMO to Migrate BW on HANA

 

But even with a standard DMO you should give some thought to housekeeping before starting the migration. For example, it might be an option for you to delete or archive old data that is not accessed frequently anymore (analogous to moving BW data to Near-Line Storage) before starting the DMO migration. This data does not need to be transferred, which reduces the migration runtime, and it does not need to be stored in-memory on the target HANA database.

 

 

Table Comparison

 

After you have optimized the DMO migration using the benchmarking tool, you are ready for the first test migration run.
You now have the option to let SUM compare the contents of tables on the target database with their respective content on the source database, to make sure that everything has been migrated successfully.

 

 

We recommend switching on the table comparison for all tables in the first test run only.
The reason is that the full table comparison via checksums takes a lot of time, usually as long as the table export itself.
If no errors are found, keep the table comparison off (“Do not compare table contents”) or compare only single, business-critical tables in the productive DMO migration run.
This will minimize the downtime in the productive run.
In fact, even when “Do not compare table contents” is selected, SUM still compares the number of rows of the migrated tables on the target database with the number of rows on the source database after the migration of their content.

 

For further information regarding the DMO table comparison see DMO: table comparison and migration tools

 

 

Downtime Optimization

 

If the performance of the standard DMO is still not sufficient after all optimization potential has been utilized (usually a migration speed of up to ~500 GB/h can be reached) and the downtime needs to be significantly shorter, additional options to minimize the downtime are available.

 

 

 

 

Downtime optimized DMO

 

The Downtime optimized DMO further reduces the downtime by enabling the migration of selected application tables during the DMO uptime.

The report RSDMODBSIZE (available with SAP Note 2153242) determines the size of the largest tables in an SAP system and gives an estimate of the transfer time required for these tables in the DMO downtime.
Tables transferred with Downtime optimized DMO in the DMO uptime effectively reduce the downtime.
The report facilitates the decision whether the usage of Downtime optimized DMO is suitable and generates a list of tables as input for SLT.

 

RSDMODBSIZE.jpg

 

The following blog post describes this technology, prerequisites and how to register for pilot usage of the Downtime optimized DMO:

DMO: downtime optimization by migrating app tables during uptime (preview)

 

Note that the Downtime optimized DMO works for SAP Business Suite systems, but not for SAP BW.

 

 

 

BW Post Copy Automation including Delta Queue Cloning

 

To minimize the migration downtime of a productive SAP BW system, one of the recommended migration paths from SAP BW to SAP BW on SAP HANA comprises a system copy of your SAP BW system.
To keep things simple, SAP offers the Post-Copy Automation framework (PCA) as part of SAP Landscape Virtualization Management, which includes post-copy automation templates for SAP BW as well as an automated solution for delta queue cloning and synchronization, enabling the parallel operation of your existing production system.

 

 

 

In combination with the SUM DMO, the production downtime of the migration from SAP BW to SAP BW on SAP HANA can be kept at a minimum. The usage of the delta queue cloning solution requires additional steps to be performed before the standard SUM DMO is started.

 

 

 

 

For further information about the downtime-minimized migration process of SAP BW using Post-Copy Automation with delta queue cloning see the following links:

 

TechEd 2014 - Migration to BW on HANA - Best Practice

SAP First Guidance - Using the DMO Option to Migrate BW on HANA

SAP First Guidance - BW Housekeeping and BW-PCA

Three things to know when migrating SAP BW on SAP HANA

Easier Migration to SAP BW powered by SAP HANA with ABAP Post-Copy Automation for SAP Business Warehouse (SAP BW)

Post-Copy Automation

DMO = Do Math Obligations!


Do not complain about insufficient migration rate if you use a 1 GBit network card!

Do your Mathematic Obligations and check your network for performance prior to DMO.


Boris Rubarth

Product Management Software Logistics, SAP SE




P.S. Sorry, yes, the blog above is short and rude, but I had to draw your attention to the network side of housekeeping:

You have to check the network performance before starting with DMO: check the throughput between the source DB and the PAS (on which the SUM and R3load processes are running), and between the PAS and the target DB. One network tool that I am aware of is iPerf: "iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks."  (http://iPerf.fr). But even ftp can provide a first insight: transfer a large file and check the throughput. Measurements like these can detect a wrong configuration of network cards, a transfer limitation due to firewalls, or other hurdles.

And network card size matters! Using a 1 Gbit network card means a maximum throughput of 439 GB / hour theoretically, practically ~ 350 GB / hour. [The math is: a 1 Gbit card means 1000 Mbit per second, which is 125 MByte per second, which is about 439 GB per hour.]  DMO can do more, if you let it ... see Optimizing DMO Performance. So you should rather use a 10 Gbit network card.
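If you want to redo this math for your own setup, here is a minimal sketch (the card size, the database size, and the 80% "practical" factor are just example assumptions):

# Theoretical throughput of a network card and the resulting transfer time.
nic_gbit_per_s = 1                                     # try 10 for a 10 Gbit card
db_size_gb = 2000                                      # example: 2 TB to be migrated

mbyte_per_s = nic_gbit_per_s * 1000 / 8                # 1 Gbit/s = 125 MByte/s
gb_per_hour_theoretical = mbyte_per_s * 3600 / 1024    # ~439 GB/hour for 1 Gbit
gb_per_hour_practical = gb_per_hour_theoretical * 0.8  # roughly ~350 GB/hour in practice

print(round(gb_per_hour_theoretical), "GB/hour theoretical,",
      round(gb_per_hour_practical), "GB/hour practical")
print("transfer of", db_size_gb, "GB takes about",
      round(db_size_gb / gb_per_hour_practical, 1), "hours at best")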

Feel free to add your favorite network tool as comment for this blog.

Make SUM 1.0 SP 14 / 15 work for you


In this blog I would like to share our experience with SUM 1.0 SP 14 / 15, which works significantly differently from, and is in many ways more sophisticated than, the previous releases up to SP 13.

 

In fact, Boris Rubarth's continuous blogging about SAP Software Logistics, and especially SL Toolset 1.0 SPS 14: improved Software Logistics Tools and SUM 1.0 SP 14 gets a buddy called SP 15, has helped us tremendously.

 

To start with, SUM 1.0 SP 14 / 15 is invoked differently from the previous releases up to SP 13. You still have to be logged on as an administrator, preferably SIDadm, but the command has changed to:


.\startup confighostagent SID

confighostagent.png

In return, you will be presented with the URLs you have to use to log into the tool, specifically:

  • SUM Java: https://hostname:1129/lmsl/sumjava/SID/index.html
  • SUM ABAP: https://hostname:1129/lmsl/sumabap/SID/doc/sluigui
  • SUM Dual stack: https://hostname:1129/lmsl/sumjava/SID/dual.html
  • SUM benchmark tool: https://hostname:1129/lmsl/migtool/SID/doc/sluigui

However, it does not tell you that these URLs are different if you are not using HTTPS:

  • SUM Java: http://hostname:1128/lmsl/sumjava/SID/index.html
  • SUM ABAP: http://hostname:1128/lmsl/sumabap/SID/doc/sluigui
  • SUM Dual stack: http://hostname:1128/lmsl/sumjava/SID/dual.html
  • SUM benchmark tool: http://hostname:1128/lmsl/migtool/SID/doc/sluigui
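If you juggle several systems, a tiny helper can build these URLs for you; this is purely illustrative and only reproduces the paths and ports listed above:

def sum_urls(hostname, sid, https=True):
    """Build the SUM / benchmark tool URLs served via the SAP Host Agent:
    port 1129 for HTTPS, port 1128 for plain HTTP (as listed above)."""
    scheme, port = ("https", 1129) if https else ("http", 1128)
    base = "%s://%s:%d/lmsl" % (scheme, hostname, port)
    return {
        "SUM Java":           base + "/sumjava/" + sid + "/index.html",
        "SUM ABAP":           base + "/sumabap/" + sid + "/doc/sluigui",
        "SUM Dual stack":     base + "/sumjava/" + sid + "/dual.html",
        "SUM benchmark tool": base + "/migtool/" + sid + "/doc/sluigui",
    }

for name, url in sum_urls("myhost", "PRD", https=False).items():
    print(name + ": " + url)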

sumjava.png

Doing so, you will be presented with a nice, fresh and crisp SUM HTML5 user interface:

Process Execution.png

The new UI gives you access to upgrade steps that were previously more hidden:

TASK LIST.png

As well as to the logs:

LOGS.png

Furthermore, it allows you to set breakpoints prior to any execution step! How cool is that?!

BREAKPOINTS.png
