2. Medium Scale Enterprise Systems

2.1. System Scenario

This document describes exemplary solution implementations for a “Medium Scale Enterprise System” scenario. Medium scale enterprise systems mainly fall under the categories of “Business Systems” and “Finance and Accounting Systems” for internal company users, and this scenario applies to systems with the following characteristics.

  • Normal operation is available 24 hours a day, 365 days a year, and maintenance is basically performed by falling back servers one by one. (Except for maintenance of the redundant configuration itself, which requires suspension of all servers.)
  • An SPOF (i.e., Single Point of Failure) is not allowed. (Physical separation is required for redundant configurations.)
  • The RPO (i.e., Recovery Point Objective) target is zero. (For exceptional failures, such as loss of the REDO log space, the system returns to the time of the backup, taken at intervals of several hours.)
  • As this is a critical system, DR (i.e., Disaster Recovery) measures are required so that employees in separate locations can continue to use the system even during a disaster.

2.2. System Configuration

In this scenario, DR measures are implemented across two (2) sites that are geographically distant from one another: a Main Site and a DR Site. The following sections describe the two (2) system configurations: the Main Site configuration and the DR Site configuration.


2.2.1. Main Site Configuration

The main site accommodates the enterprise system during normal operation times, and configurations similar to the following can be assumed.
  • A server configuration in which Web-AP/DB servers coexist on one (1) Virtual Server.
  • Production/testing/development environment operation, including additional creation of testing configurations as necessary.
  • Archive storage for long-term data storage.
  • In order to secure sufficient processing time for nightly batches, the database utilizes storage functionality so that backup processing completes within a few minutes.

Below is a diagram outlining the system configuration.
Main site

The main points of the configuration are described in the following sections.


2.2.1.1. DBMS Configuration

The DB server is built on one (1) Virtual Server configured together with the Web-AP server. Customers can subscribe to the Baremetal Server Service whenever they wish and can start configuring it themselves right away, without being constrained by a dedicated provisioning time frame.
The following diagram gives an overview of a case in which a vSphere environment is configured on Baremetal Servers. By installing ESXi onto the Baremetal Server, a VMware VM environment can be built by mounting block storage created for the Data Store. Additionally, through integrated control of these ESXi hosts from a vCenter Server, the Customer's own independent vSphere environment can be configured. Through this, the traditional vSphere environment that was procured and built on on-premise server and storage devices can be quickly implemented as a private cloud environment.
DBMS
Web-AP/DB servers operate as Virtual Servers on the virtual platform, and the Web-AP/DB server system space and AP modules are stored on the Virtual Server internal disk (virtual disk).
When considering the protection of the Web-AP/DB server system space and AP modules, improvements in system availability and reliability can be achieved by utilizing the HA functionality (vSphere HA etc.) and live migration functionality (vMotion etc.) provided by virtual platforms. The host redundancy level (N+1 configuration, N+2 configuration etc.) and the physical separation of hosts operating Virtual Servers can also be adjusted on the Customer side according to system importance. This example takes a configuration in which the production environment is physically separated from the testing/development environment by using separate Baremetal Servers, with an additional standby Baremetal Server as a fail-over host.
On the storage plane, as Baremetal Servers possess multiple physical NICs, path redundancy can be secured by configuring ESXi iSCSI multipathing. Additionally, backups of the Web-AP/DB server system space and AP modules can be obtained by utilizing the snapshot and template/cloning functions provided by the virtual platform, as well as backup solutions that support VADP (vStorage APIs for Data Protection).
Moreover, when creating a Logical Network, Customers can select between two (2) planes: the Data Plane and the Storage Plane. The Storage Plane of the Logical Network is dedicated to transmissions between the Baremetal Servers and storage, and is physically separated from the Data Plane. Because of this, communication on the Storage Plane remains stable, as it is not affected by high traffic loads on the Data Plane. By utilizing the Storage Plane for transmissions between the Baremetal Server and the LUN, Customers can keep storage use stable even when the data transmission load increases.
As a note, the Web-AP/DB server data space is stored on a LUN separate from the LUN used for Data Store purposes. Data space protection is discussed in later sections.

Note

Customers are required to perform the design, build, and operations tasks for the virtual platform, such as the Hyper-Visor (ESXi etc.) and the management servers (vCenter etc.). Additionally, the Customer is responsible for the procurement and management of software licenses for the Guest OS, middleware, applications, etc. running on the Virtual Server.


2.2.1.2. Database Storage Configuration

Web-AP/DB server data space is stored on a LUN provided by block storage. This LUN is mounted through an iSCSI initiator running on the Guest OS of the VMware VM. Please note that this differs from the previously mentioned method, in which a LUN for Data Store purposes is mounted from an ESXi host.
A separate LUN can be configured for each type of file comprising the database. The block storage Group is used when configuring the LUNs. A Group is the concept for distinguishing the equipment groups in which block storage housings are located. By creating each LUN from block storage in a separate Group, it can be ensured that two (2) LUNs reside on physically separate devices. With this configuration, the LUNs are placed in separate Groups, and resilience against physical failures can be implemented.
Database storage

Data File
Data files are characterized by random small I/O, and require greater performance as the DB scale increases. From a performance management standpoint, note that LUN performance is provided at 2 IOPS/GB, and the I/O performance actually demonstrated varies according to the Web-AP/DB server workload.
Regarding volume management, a likely scenario is that LUN capacity is exhausted due to the data growth that accompanies system growth. In this case, capacity exhaustion can be addressed by adding and mounting a second, larger LUN and performing data migration using volume management functions such as DBMS data file migration or Oracle ASM.
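As a rough illustration of the second-LUN approach, the sketch below copies data files from a depleted LUN's mount point to a larger one while preserving layout. The mount points and the `.dbf` suffix are assumptions for the example; with Oracle, the equivalent move would normally be performed with DBMS data file migration or ASM rebalancing rather than a plain file copy.

```python
# Illustrative sketch only: migrating data files from a depleted LUN to a
# newly mounted, larger LUN. Mount points and the .dbf suffix are assumed;
# with Oracle, the same move is normally done via data file migration or
# ASM rebalancing instead of a plain copy.
import shutil
from pathlib import Path

def migrate_data_files(old_mount: Path, new_mount: Path) -> int:
    """Copy every data file under old_mount to new_mount, preserving the
    relative layout so the database can be repointed at new_mount."""
    copied = 0
    for f in old_mount.rglob("*.dbf"):
        target = new_mount / f.relative_to(old_mount)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # preserves timestamps/permissions
        copied += 1
    return copied
```

After the copy completes, the database would be repointed at the new mount and the old LUN released.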
Transfer data file

Note

Once a LUN has been configured, its capacity cannot be expanded. For example, a LUN created at 100 GB cannot later be expanded to 250 GB.

Regarding data protection, it is necessary to consider support for both logical and physical (media/device) failures.
For logical failures, the snapshot function provided by block storage can be utilized. Block storage can, as a storage array-side function, obtain snapshots online for each LUN. Through this, data can be recovered up to the point of the snapshot even when a logical failure occurs. In practice, a snapshot in a recoverable state is obtained by coordinating the database online backup mode (ensuring a rest point etc.) with job tools and scripts that call the block storage snapshot API, in order to ensure data integrity and consistency.
Support for physical failures can be implemented through backups, as described in later sections.

Transaction Log File
Transaction log placement is also important for preparing databases against failures. Here, an example storage design is described for the management of Oracle REDO log files.
For online REDO log files, it is necessary to store and multiplex them across separate disk spaces in order to withstand physical failures. Accordingly, a REDO log group is configured across LUNs created in separate Groups, so that the online REDO log files are placed and distributed on LUNs in separate storage housings.
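As a simple illustration of this separation rule, the sketch below checks that a REDO log group's members map to more than one storage Group. The mount-point-to-Group mapping is an assumption for the example; in practice the member paths would come from `v$logfile`.

```python
# Illustrative check that a REDO log group spans LUNs from separate storage
# Groups. The mount-point-to-Group mapping below is an assumed layout.
MOUNT_GROUP = {"/u01": "group-1", "/u02": "group-2"}

def storage_group(path):
    """Return the storage Group of the LUN a file path resides on."""
    for mount, grp in MOUNT_GROUP.items():
        if path.startswith(mount + "/"):
            return grp
    return None

def is_multiplexed(members):
    """True when a log group's members span more than one storage Group."""
    return len({storage_group(m) for m in members}) > 1
```

A routine job could run this check after any storage change, flagging log groups whose members have drifted onto the same housing.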
Transaction logs

Note

The REDO log file demands storage performance. Depending on the log file size, LUN sizing must take into account not only capacity requirements but also performance requirements.


Backup data should be placed on physical disk equipment different from the equipment the Customer usually utilizes, so that the Customer can respond even in the event of a cloud equipment failure. In this system, the DB server uses DBMS functions to direct the output data to a different block storage, achieving physical separation.
Backup data

Control File
The control file must be placed and distributed across multiple disks. Physical storage failures can thus be withstood by multiplexing the control file across block storage and/or volumes (internal disks) attached to Virtual Servers.

Backup File
When performing database backups using tools such as RMAN, the backup output destination can be assigned to a volume (internal disk) attached to a Virtual Server. This physically separates the backup from block storage and avoids situations where the backup file cannot be accessed when a failure occurs.
DB backup

2.2.1.3. Testing Environment Build and Production Data Replication

When troubleshooting issues, there may be cases where not only a regular testing environment is utilized but also an additional testing environment must be prepared. An environment can be quickly built for even these types of scenarios.
For Virtual Servers running on a Baremetal server with Web-AP/DB servers for testing purposes, replication of Virtual Servers from the permanent testing environment to a temporary testing environment can be performed by utilizing virtual platform template/cloning functions. Similarly, Virtual Servers for management purposes etc. running on a Virtual Server Service can be replicated from the permanent testing environment to a temporary testing environment using Virtual Server imaging functions provided by the Virtual Server Service.
Additionally, there may be cases where a testing environment should be built for troubleshooting using production environment data as-is.

System space and AP modules can be replicated to the testing environment by utilizing the production environment Baremetal Server's cloning functions and/or the Virtual Server Service's functions for imaging the internal disks used for data storage.

In contrast, regarding DB data storage, a block storage LUN holding the DB data files cannot be directly turned into an image. These cases are supported by extending the logical network from the production environment to the testing environment. Using this function, DB data can be exported to a file server etc. via the temporarily extended logical network. Next, production data can be recovered by importing the DB data from the replicated AP/DB server. Through this, the same DB contents can be replicated to the testing environment in a short time.
clone

2.2.1.4. Archive Data Storage

Long-term storage of archive data is required for internal critical systems such as enterprise systems. For archive data that accumulates daily, the requirement is low-cost storage for large volumes, not performance factors such as throughput and IOPS. The Biz Simple Disk or Cloudn Object Storage services provided by the Service Provider are suitable for PB (petabyte) level large-volume data storage.
Biz Simple Disk provides NFS volumes accessible via the Universal One VPN service. With this feature, access can be secured not via the Internet but through transmissions closed to private IPs, even when NFS mounting is performed from archive servers.
Cloudn Object Storage provides storage at a cost lower than Biz Simple Disk, but uses an Internet connection. Additionally, Cloudn Object Storage provides Amazon S3 compatible REST API functions. With this feature, mounting as a file system can be performed by utilizing tools such as s3fs, and from the archive server's viewpoint, Cloudn Object Storage can appear as if it were one (1) file system. Through this, data can be stored on Cloudn Object Storage with no inconvenience.
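Because the API is S3-compatible, standard S3 tooling also applies. The sketch below only derives a date-partitioned object key for daily archives; the key layout, bucket name, and endpoint URL are assumptions for the example, and the upload itself is shown as a comment as it would look with an S3 client such as boto3.

```python
# Sketch: naming daily archive objects for an S3-compatible store. The key
# layout, bucket name, and endpoint URL here are assumptions for the example.
from datetime import date

def archive_key(system: str, d: date) -> str:
    """Date-partitioned object key, e.g. erp/2024/01/15/archive.tar.gz."""
    return f"{system}/{d:%Y/%m/%d}/archive.tar.gz"

# With an S3 client such as boto3, the upload against an S3-compatible
# endpoint would look roughly like:
#   s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")
#   s3.upload_file("/data/archive.tar.gz", "archive-bucket",
#                  archive_key("erp", date.today()))
```

Partitioning keys by date keeps daily archives listable per day, which simplifies retention and restore procedures.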
Furthermore, Customers can take out multiple Internet Gateway and VPN Gateway subscriptions within one (1) Tenant. By subscribing to separate Internet Gateways and VPN Gateways for service traffic and for Cloudn Object Storage / Biz Simple Disk traffic, Customers can prevent situations where the high-load transmissions used to store large data volumes affect the Customers' service traffic, or vice versa.

Note

Cloudn Object Storage and Biz Simple Disk are available only within Japan.

2.3. DR Site Configuration

It will be necessary for the system to switch over to the DR site should a disaster occur at the main site.
When planning a DR configuration, production data generated at the main site should be properly transferred to the DR site during regular operation. During a disaster, site switch-over is performed by restoring the transferred data at the DR site. Through this, business recovery and continuity can be achieved quickly even in a disaster.
Below is a diagram outlining the system configuration.
DR site

The main points of the configuration are described in the following sections.


For archive servers and batch servers configured using the Virtual Server Service, Virtual Server volumes (system disk and data disk) can be stored to an image storage space (called Glance) by utilizing the imaging functions. The volume images created can be imported/exported using API/GUI functions. As a note, these tasks must be performed via the Internet.
Using the above function, volume images are routinely created at the main site, exported, and then imported to the DR site. By shortening the interval of this task sequence, the RPO can also be shortened.
By utilizing the imported volume image during a disaster, Virtual Servers can be quickly created at the DR site and the RTO can be shortened.

Note

Volume images cannot be exported in cases where the software license service is associated with the volume image.

Volume image copy

VMs running on a Baremetal Server can utilize the DR solutions provided by the virtual platform. Through this, protection can be implemented for the VMs running on the Baremetal Server. For example, in a vSphere environment, Hyper-Visor-level replication between DCs can be implemented using vSphere Replication.

Note

Block storage configured for the Data Store does not have a LUN snapshot transfer function. As such, storage-array-based replication with VMware vCenter SRM (i.e., Site Recovery Manager) is not supported.


Traffic for replication between DCs can utilize Arcstar Universal One or the Internet, but in order to avoid sharing with the VPN or Internet traffic used for business purposes, usage of the 10Gbps networks between DCs is recommended. Although the 10Gbps networks between DCs are provided on a best-effort basis, they can be utilized free of charge.

Note

Customers are required to perform the design, build, and operations tasks for Site Recovery within the virtual environment configured on Baremetal Servers.

Recovery
However, when a LUN on block storage is mounted directly, as with the Web-AP/DB servers, the data on that LUN is not covered by the Baremetal Server's Site Recovery configuration. As such, in these cases a Virtual Server should be kept running at the DR site at all times, and a different sync method should be implemented for the LUN data, as described later.

Web-AP/DB server data is stored on a block storage LUN. DR measures can be implemented by continuously syncing this DB storage space (using Oracle Data Guard etc.) between the main site and DR site Web-AP/DB servers. The 10Gbps networks between DCs can be utilized free of charge for forwarding the sync traffic.

DB mirroring

By using these external services for archive data storage, archive data loss can be prevented even when a site failure occurs at the main site. Furthermore, archive data storage is accessed through UNO for Biz Simple Disk and through the Internet connection for Cloudn Object Storage.

Note

Cloudn Object Storage and Biz Simple Disk are available only within Japan.


When the data migration and restore processes are completed at the DR site, it will be necessary to switch the access paths for users. Especially for an enterprise system that users access through VPN, the VPN access path must be switched without difficulty. For such switch-overs, there are two (2) patterns Customers can utilize, as below:

  • In the first pattern, private IP addresses are assigned anew to the respective nodes at the DR site, which is connected to the VPN from the beginning. At the time of DR, the internally used DNS records are modified and user connections are switched over safely from the main site to the recovery site. To support this switch-over of private IP addresses, the DR zone files and change scripts should be prepared beforehand as a routine task, which lets Customers proceed much more smoothly.
  • In the other pattern, the Customer deploys a replica configuration in advance, in preparation for a DR event; the DR site nodes are assigned the same private IP addresses as the corresponding main site services, so Customers can switch over smoothly. At this point, because the route to the DR site is not advertised, no conflict arises even though identical IP addresses are assigned at both the DR and main sites. At the time of a disaster or serious failure at the main site, the prepared API is used to open the route, guiding new transmissions to the DR site.
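For the first pattern, the DNS change can be prepared as a routine script. The sketch below rewrites A records to assumed DR-site addresses so that the DR zone file is always ready; the hostnames and IP addresses are illustrative.

```python
# Sketch: pre-generating DR zone records by rewriting A records to DR-site
# private addresses. Hostnames and addresses are assumptions for the example.
DR_ADDRESSES = {
    "db.example.internal": "10.2.0.10",
    "app.example.internal": "10.2.0.20",
}

def to_dr_zone(zone_lines):
    """Return zone-file lines with A records repointed at DR-site addresses."""
    out = []
    for line in zone_lines:
        parts = line.split()
        # "name ttl IN A address" records whose name has a DR counterpart
        if len(parts) == 5 and parts[3] == "A":
            name = parts[0].rstrip(".")
            if name in DR_ADDRESSES:
                parts[4] = DR_ADDRESSES[name]
        out.append(" ".join(parts))
    return out
```

Running this job regularly, as the text recommends, means the switch-over at DR time is reduced to loading an already-verified zone file.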

Furthermore, Internet transmissions and the 10Mbps Menu (best effort) are either free of charge or priced below market rates, so they can be utilized without much financial burden on the Customers' end; however, the Internet connectivity must be prepared beforehand. Moreover, Customers can reserve global IP addresses in advance (these are commercially priced, not free) and include them in the switch-over parameters. By doing so, Customers can make the switch-over procedures much easier and simpler when disaster recovery actually takes place.