Horizontal and vertical scaling. The functionality of an information system for managing the logistics of a retail chain company. What the scalability of a management information system implies

Scalability is the system’s ability to adapt to expanding requirements and increasing volumes of tasks to be solved.

Operation of one application solution in different conditions

The 1C:Enterprise 8 system has good scaling capabilities. It allows you to work both in the file version and using the client-server technology.

  • Personal use, file version of work
    When working in the file version, the platform can work with a local infobase located on the same computer on which the user is working. This work option can be used at home or when working on a laptop.
  • Small working group, file version of work
    The file option also provides the ability for several users to work on a local network with one information base. This way of working can be used in small work groups and is easy to install and operate.
  • Large enterprise, client-server version of work
    For large workgroups and on an enterprise scale, a client-server version of work can be used, based on a three-tier architecture using the 1C:Enterprise 8 server and a separate database management system. It provides reliable data storage and efficient processing when a large number of users work simultaneously.
  • Holding, distributed information base
    Large holding companies can use a distributed information base, combined with both file and client-server work options. A distributed information base allows divisions of a holding company that are remote from each other to be united, and each of these divisions can in turn use the file or client-server option. The distributed infobase mechanism ensures that the configurations used in each division of the holding are identical, and exchanges data between the individual infobases included in the distributed system.

It is important to note that the same application solutions (configurations) can be used in both file and client-server modes of operation. When moving from the file version to client-server technology, no changes to the application solution are required. Therefore, the choice of work option depends entirely on the customer's needs and financial capabilities. At the initial stage, you can work in the file version and then, as the number of users and the volume of the database grow, easily migrate your infobase to the client-server version.

Multi-user work

One of the main indicators of system scalability is the ability to work effectively as the number of tasks to be solved, the volume of processed data, and the number of intensively working users grow.

The client-server version provides the ability for a large number of users to work in parallel. Tests show that as the number of users increases, the speed of document entry decreases very slowly. This means that as the number of intensive users increases, the response speed of the automated system remains at an acceptable level.

In the data model supported by the 1C:Enterprise 8 system, there are no database tables that inevitably become points of contention for multiple users. Contention arises only when logically related data is accessed, and it does not affect data that is unrelated from the point of view of the subject area.

Routine operations do not create situations where exclusive mode must be set just to start work in a certain reporting period; they can be performed at times convenient for users and the organization. Exclusive mode is set not at system startup, but at the moment an operation that requires it must be performed, and it can be released as soon as such operations are finished.

Optimization Mechanisms

The 1C:Enterprise 8 technology platform contains a number of mechanisms that optimize the speed of application solutions.

Execution on the server

In the client-server version, the 1C:Enterprise 8 server makes it possible to concentrate the most data-intensive processing on the server. For example, even for very complex queries, the program the user runs receives only the selection it needs, while all intermediate processing is performed on the server. Increasing server capacity is usually much easier than upgrading the entire fleet of client machines.
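
As a toy illustration of this principle (using Python's built-in sqlite3 module purely to keep the snippet self-contained; SQLite is an embedded engine, so here it only stands in for a remote server), compare pulling every row to the client with letting the database aggregate and return only the final selection:

```python
import sqlite3

# Invented table and data: keep heavy processing next to the data and
# return only the small result set to the caller.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A", 10.0), ("A", 20.0), ("B", 5.0)])

# Anti-pattern: fetch every row and aggregate on the client side.
rows = conn.execute("SELECT store, amount FROM sales").fetchall()
totals = {}
for store, amount in rows:
    totals[store] = totals.get(store, 0.0) + amount

# Preferred: let the database aggregate; only the summary crosses the wire.
totals_sql = dict(conn.execute(
    "SELECT store, SUM(amount) FROM sales GROUP BY store"))
assert totals == totals_sql
```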

Data Caching

When object technology is used, the 1C:Enterprise 8 system caches data read from the database. When an object attribute is accessed, all of the object's data is read into a cache located in RAM. Subsequent accesses to attributes of the same object are served from the cache rather than the database, which significantly reduces the time spent retrieving the necessary data.
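
The following is a minimal read-through cache sketch in Python; it is not the actual 1C:Enterprise implementation, only an illustration of the pattern described above: the first attribute access loads the whole object, and subsequent accesses never touch the database.

```python
# A minimal read-through object cache sketch (illustrative, not 1C internals).
class ObjectCache:
    def __init__(self, load_from_db):
        self._load = load_from_db   # callable: object_id -> dict of attributes
        self._cache = {}

    def get_attribute(self, object_id, name):
        if object_id not in self._cache:          # one database read...
            self._cache[object_id] = self._load(object_id)
        return self._cache[object_id][name]       # ...then RAM-only lookups

    def invalidate(self, object_id):
        self._cache.pop(object_id, None)          # drop stale data after writes

# Usage with a fake loader standing in for a real database call:
cache = ObjectCache(lambda oid: {"code": oid, "name": f"Item {oid}"})
cache.get_attribute(42, "code")   # reads the whole object from the "database"
cache.get_attribute(42, "name")   # served from the cache
```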

Execution of the built-in language on the server

When working in the client-server version, all the work of application objects is performed only on the server. The functionality of forms and of the command interface is also implemented on the server.

The server prepares form data, arranges the elements, and records form data after changes. The client displays the form already prepared on the server, accepts user input, and calls the server to record the entered data and perform other necessary actions.

Similarly, the command interface is formed on the server and displayed on the client. Also, reports are generated entirely on the server and displayed on the client.

Versions 8.1 and 8.0 - comparison of performance and scalability

To assess how the performance and scalability of the system has changed under various conditions, a number of tests were carried out in version 8.1:

  • Assessing system performance and scalability when a large number of users work simultaneously
  • Assessing system performance and scalability under peak loads
  • Performance assessment for certain types of operations

The obtained indicators for 1C:Enterprise 8.1 were compared with similar indicators for 1C:Enterprise 8.0.

Versions 7.7 and 8.0 - comparison of performance and scalability

To evaluate the performance and scalability of the client-server version of 1C:Enterprise 8, a number of tests were carried out, allowing:

  • compare and show the advantages of 1C:Enterprise 8 in standard operating modes;
  • evaluate the scalability of 1C:Enterprise 8 with increasing load intensity and growing volume of processed data;
  • evaluate the scalability of 1C:Enterprise 8 while increasing the computing resources of the equipment used;
  • evaluate the stability and performance of 1C:Enterprise 8 when operating under peak load conditions;
  • evaluate the effectiveness of using multiprocessor platforms for solving 1C:Enterprise 8 tasks.

Assessing the scalability of the Manufacturing Enterprise Management solution

Testing was carried out to assess the scalability of the Manufacturing Enterprise Management (PEM) application solution with the simultaneous operation of a large number of users.

When conducting the test, generally accepted approaches to assessing the performance of corporate information systems were used:

  • use of a typical application solution for testing;
  • testing of the operations most critical to the work of a typical organization;
  • testing of operations under fixed parameters typical for most organizations;
  • software simulation of typical user work scenarios, creating a load significantly exceeding that created by real users;
  • use of the volume of business transactions reflected in the system per unit of time and the average time to complete an operation as the main indicators.

Examples of technological parameters for implementing the "Manufacturing Enterprise Management" solution

This section publishes detailed information about the technological parameters of “1C:Enterprise 8. Manufacturing Enterprise Management” implementations at enterprises of various sizes and business profiles.

The purpose of this section is to familiarize IT specialists with data on actually used equipment and with examples of the load of real 1C:Enterprise 8 implementations.

This information may also be useful for users of all programs of the 1C:Enterprise 8 system.

Equipment selection

This document provides information on how equipment characteristics affect the efficiency of using the system in various modes and provides recommendations for selecting equipment depending on the tasks being solved.

1C:Performance Management Center - performance monitoring and analysis tool

1C:Performance Management Center (1C:PMC) is a tool for monitoring and analyzing the performance of information systems on the 1C:Enterprise 8 platform. 1C:PMC is designed to evaluate system performance, collect detailed technical information about existing performance problems, and analyze this information for the purpose of further optimization.

1C:TestCenter - load test automation tool

1C:TestCenter is a tool for automating multi-user load tests of information systems on the 1C:Enterprise 8 platform. With its help, you can simulate the operation of an enterprise without the participation of real users, which allows you to evaluate the applicability, performance and scalability of an information system in real conditions.

Implementation of corporate information systems on the 1C:Enterprise 8 platform

Experience in implementing application solutions on the 1C:Enterprise 8 platform shows that the system allows you to solve problems of varying degrees of complexity - from automating one workplace to creating enterprise-scale information systems.

At the same time, the implementation of a large information system imposes higher requirements than a small or medium one. An enterprise-scale information system must provide acceptable performance when a large number of users work simultaneously and intensively with the same information and hardware resources.

Oleg Spiryaev

Recently, there have been frequent claims that mid- and high-end servers are being actively replaced by groups of entry-level servers, united in racks or clusters. However, some experts disagree. Thus, according to Dataquest, the share of models priced at $500 thousand and above (this includes mid-range and high-end SMP servers) in total server sales from 2000 to 2002 increased from 38 to 52%.

Other data obtained by IDC indicates growth (at least in terms of the number of machines) in the sector of low-end server models - with two processors. IDC also predicts that in 2005 the most common operating system for servers costing between $50,000 and $3 million will be Unix. Comparing this data, it is clear that mid-range and high-end Unix servers will remain the predominant platform for data centers, but will be complemented by a growing number of smaller (usually dual-processor) servers.

This trend has emerged as a result of the separation of different layers of computing in data centers (Fig. 1). Tier 1, the front tier, is gradually shifting to a scale-out model of small servers, while Tier 3 (the database tier) is dominated by scale-up servers. Tier 2 (the application tier) is becoming the area where vertical and horizontal architectures coexist.

Vertical and horizontal architectures

Let's look at the main differences between vertical and horizontal architectures. Scale-up servers are large SMP (symmetric multiprocessing, or shared memory) systems with more than four central processing units. A single copy of the OS controls all processors, memory, and I/O components. Typically, all of these resources are housed in one rack or cabinet and interconnect over a high-speed centerplane or backplane with low latency and cache-coherent access. Resources can be added by installing additional system boards inside the cabinet. In systems with a vertical architecture (SMP systems), memory is shared: all processors and I/O components have access to all of it, and the user "sees" it as a single large object.

In the alternative, horizontal architecture, systems are connected via a network or clustered. The interconnects typically use standard network technologies such as Fast Ethernet, Gigabit Ethernet (GbE), and Scalable Coherent Interconnect (SCI), which offer lower throughput and higher latency than the interconnects of vertical systems. Resources in this case are distributed among nodes, usually containing from one to four processors; each node has its own processors and memory and can have its own I/O subsystem or share one with other nodes. Each node runs a separate copy of the OS. Resources are expanded by adding nodes, not by adding resources to a node. Memory in horizontal systems is distributed: each node has its own memory, directly accessed by its processors and I/O subsystem, and accessing these resources from another node is much slower than from the node where they are located. In addition, a horizontal architecture provides no cache-coherent access to memory between nodes; the applications used consume relatively few resources, so they "fit" on a single node and do not need coherent access. If an application requires multiple nodes, it must provide memory coherence itself.

If a horizontal system meets application requirements, then this architecture is preferable because its acquisition costs are lower. Typically, the acquisition cost per processor for horizontal systems is lower than for vertical systems. The difference in price is due to the fact that vertical systems use more powerful RAS (reliability, availability, serviceability) features, as well as high-performance interconnects. However, there are a number of restrictions on the use of systems with horizontal architecture. Below we will discuss under what conditions horizontal systems can be used and when vertical scaling is necessary.

In addition to one large SMP server, vertical architecture also includes clusters of large SMP servers used for a single large-scale application.

Modular or blade servers, recently introduced to the market and usually equipped with one or two processors, are an example of horizontal servers. Here the cluster consists of small nodes, each of which is an entry-level SMP server with one to four central processors.

Another way to scale out is through large massively parallel processing (MPP) systems, which consist of many small processors installed in a single cabinet, each with its own copy of the OS or of an OS microkernel. Currently, only a few MPP systems are produced, and they most often represent specialized solutions: for example, Teradata systems manufactured by NCR, the IBM RS/6000 SP (SP-2), and HP's Tandem NonStop.

Table 1. Features of vertical and horizontal architectures

Parameter | Vertical systems | Horizontal systems
Memory | Large, shared | Small, dedicated
Threads | Many interdependent threads | Many independent threads
Interconnects | Tightly coupled, internal | Loosely coupled, external
RAS | Powerful RAS of a single system | Powerful RAS through massive replication
Central processing units | Many, standard | Many, standard
OS | One copy of the OS for many central processors | Several copies of the OS (one copy per 1-4 processors)
Layout | In a single cabinet | A large number of servers placed in racks
Density | - | High processor density per unit of floor area
Equipment | Standard and specially designed | Standard
Scaling | Within a single server chassis | Across multiple servers
Expansion | By installing additional components in the server | By adding new nodes
Architecture | 64-bit | 32- and 64-bit

Table 1 allows for a comparative analysis of vertical and horizontal architectures.

  • Vertical systems share memory and provide consistent cache access.
  • Vertical systems are ideal for task flows that need to communicate with each other.
  • Vertical systems are characterized by powerful RAS functions, and in horizontal systems, availability is implemented using massive replication (several nodes are connected to a cluster, so the failure of one of them has little impact on the operation of the entire system).
  • In vertical systems, one copy of the OS covers all resources. Some vertical systems, such as Sun Microsystems' midframe and high-end servers (Sun Fire 4800 to Sun Fire 15K), can be divided into smaller vertical servers.
  • Vertical systems use as many standard components as possible, but some key components (such as interconnects) are specially designed.
  • Vertical systems can be expanded by installing additional components into the existing frame (more powerful processors, additional memory, additional and higher-performance I/O connections, etc.). Horizontal systems are expanded by adding a node or replacing old nodes with new ones.
  • Almost all vertical systems are 64-bit, while horizontal systems can be either 32-bit or 64-bit.

Vertical systems are better suited for some types of applications and horizontal systems for others; in many cases, however, the optimal choice of architecture depends on the size of the problem. Table 2 shows examples of applications for which vertical or horizontal architecture is optimal.

Table 2. Types of applications for vertical and horizontal architectures

Small and modular servers are well suited for applications that are stateless, small in scale, and easily replicated. And for applications that use state information and large volumes of data that require intensive data transfer within the system, vertical servers are the ideal solution.

In the high performance technical computing (HPTC) market, there are many applications in which threads depend on each other and exchange data with each other. There are also applications that require large amounts of shared memory. Large SMP servers are best suited for these two types of applications.

However, there are also HPTC applications in which the execution threads are independent and do not require large amounts of shared memory. Such applications can be partitioned, making clusters of small servers ideal for running them. Likewise, some commercial applications are partitioned and benefit from horizontal servers, while others cannot be partitioned, so vertical servers are the best platform for them.

Factors Affecting Performance

Processors are certainly an essential component, but they only partly determine the overall performance of a system. It is more important to ensure that processors are running at maximum capacity. A powerful processor that is only 50% loaded will perform worse than a slower processor that is 80% loaded.
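
A back-of-the-envelope check of this claim, with assumed clock rates (the figures below are illustrative, not from the article): effective capacity is raw speed multiplied by utilization.

```python
# Effective capacity = clock rate x utilization (illustrative numbers).
fast_cpu = 3.0e9 * 0.50   # 3 GHz processor, 50% loaded -> 1.5e9 useful cycles/s
slow_cpu = 2.0e9 * 0.80   # 2 GHz processor, 80% loaded -> 1.6e9 useful cycles/s
print(fast_cpu < slow_cpu)  # True: the slower but busier processor does more work
```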

In addition, as the number of processors in a parallel system increases, system interconnects rather than processor power come to the fore. They are responsible for moving data from disk, from memory and from the network to the processor. In a cluster, the interconnect is a network connection, such as Fast Ethernet or Gigabit Ethernet. Cluster interconnects move data between nodes, while system interconnects move data within a single system. If the interconnect is too slow, the processor will be idle waiting for data.

System interconnects are also used to move data addresses, which is necessary to support cache coherence. If the system interconnect is too slow in transmitting data addresses, the processor will again be idle waiting for data because it needs to know its address to access it. Fast interconnects provide high throughput and low latency (low time from the time a data request is made until the data begins to be transmitted).

The main technical difference between horizontal and vertical systems is the throughput and latency of their interconnects. For cluster interconnects, throughput ranges from about 12.5 MB/s for Fast Ethernet to 200 MB/s for SCI, and latency from about 100 microseconds for GbE down to about 10 microseconds for SCI. Using the InfiniBand interface, faster interconnects can be implemented, with peak speeds ranging from approximately 250 MB/s for the first version up to 3 GB/s for subsequent ones.
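
A simple cost model makes the trade-off concrete: transfer time is latency plus size divided by throughput. The sketch below uses the approximate figures quoted above (125 MB/s is the nominal Gigabit Ethernet rate); real interconnects vary.

```python
# time = latency + size / throughput (approximate figures from the text).
def transfer_time(size_bytes, latency_s, throughput_bytes_per_s):
    return latency_s + size_bytes / throughput_bytes_per_s

msg = 4 * 1024  # a 4-KB message
gbe = transfer_time(msg, 100e-6, 125e6)   # Gigabit Ethernet: ~100 us, ~125 MB/s
sci = transfer_time(msg, 10e-6, 200e6)    # SCI: ~10 us, ~200 MB/s
print(f"GbE: {gbe*1e6:.0f} us, SCI: {sci*1e6:.0f} us")
# For small messages latency dominates, which is why cluster interconnects
# hurt workloads that exchange many small messages between nodes.
```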

Input and output

Fast I/O is necessary so that the interconnect can quickly retrieve data from disk and the network and transfer it to processors. A bottleneck in the I/O subsystem can negatively impact the performance of even the fastest interconnects and processors.

Operating system

Even the best hardware is ineffective if the OS is not scalable enough. For horizontal systems OS scalability is less important, because a single node (and therefore a single copy of the OS) runs no more than four processors.

System Availability

Generally speaking, system availability largely depends on the type of architecture. In large SMP systems, RAS functionality is built into the system and supplemented with failover for two to four nodes. In horizontal systems, the RAS of individual nodes is worse, but improvements in these functions are achieved by replicating nodes multiple times.

Optimized Applications

Applications need to be optimized for the computing system architecture. It is easiest to write and optimize applications for SMP systems. Major commercial applications are optimized specifically for SMP systems and were even developed on them, which is why SMPs have dominated the mid-range and high-end systems market for the last ten years.

Application size

As noted, large SMP systems use high-speed interconnects to provide sufficient system performance. Horizontal systems may experience performance issues due to low throughput and high interconnect latency in cases where data needs to be transferred frequently between nodes. However, some applications do not require high interconnect speeds to achieve high performance—usually small applications and applications that can be easily replicated (for example, Web servers, proxies, firewalls, and small application servers). In such horizontal systems, each node performs a small task independently of the work of all the others.

For example, in a horizontal (or distributed memory) architecture, four processor nodes (each with separate RAM and dedicated or shared I/O subsystem) may use a network interconnect such as Gigabit Ethernet. This computing environment runs three types of workloads. The smallest load fits on one node, but as it increases, several nodes are required to complete it. According to experts, when performing one task on several nodes, performance deteriorates significantly due to slow inter-node interconnects. Small workloads that don't need to communicate with each other work well with a horizontal architecture, but running large-scale workloads on it presents challenges.

A large SMP system configuration may include, for example, up to 100 processors, 576 GB of shared memory, and high-speed interconnects. Such a system can handle all types of workloads, because there are no slow inter-node connections and communication between processes is efficient. All central processing units can simultaneously access all disks, all memory, and the network connections; this is a key feature of SMP (vertical) systems.

The question often arises about the advisability of placing small loads on large SMPs. Although this is technically possible, from an economic point of view this approach is not justified. For large SMPs, the acquisition cost per processor is higher than for small systems. Therefore, if an application can run on a small node (or several small nodes) without major management issues, scale-out is a better choice for deployment. But if the application is too large to run on a small node (or several such nodes), then a large SMP server will be the best option in terms of both performance and system administration.

Database-level performance

The main question here is to compare the performance of single medium and large SMP servers with a cluster of small servers (no more than four processors).

When discussing scalability, manufacturers use a number of technical terms. Speedup for SMP is defined as the ratio of an application's execution speed on several processors to its speed on one. Linear speedup means, for example, that on 40 processors an application runs 40 times (40x) faster than on one; with linear speedup, the per-processor gain does not depend on the number of processors, i.e. it is the same for a 24-processor configuration as for a 48-processor one. Cluster speedup differs only in that it is calculated over the number of nodes rather than the number of processors. Like SMP speedup, cluster speedup remains constant across different numbers of nodes.

Scaling efficiency characterizes the ability of applications, especially clustered ones, to scale across a large number of nodes, and it depends on the number of nodes participating in the measurement. SMP scaling efficiency is the speedup divided by the number of processors; cluster efficiency is the cluster speedup divided by the number of nodes. You need to understand what these parameters mean to avoid getting the wrong picture: 90% scaling efficiency on two nodes is not the same as 90% scaling efficiency on four nodes.
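
In code, the definitions look like this (a sketch; `n_units` is processors for SMP speedup and nodes for cluster speedup). It also shows why the same 90% figure means different absolute gains at different scales:

```python
# Speedup and scaling efficiency as defined in the text.
def speedup(t_one_unit, t_n_units):
    return t_one_unit / t_n_units

def efficiency(measured_speedup, n_units):   # n_units: processors for SMP,
    return measured_speedup / n_units        # nodes for a cluster

# 90% efficiency means different things at different scales:
print(efficiency(1.8, 2))   # 0.9: a 1.8x gain on two nodes is 90%
                            # (cf. the RAC figures quoted below)
print(efficiency(3.6, 4))   # 0.9: four nodes must reach 3.6x to claim 90%
```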

Fig. 2 shows three graphs: ideal linear scalability, scalability of a 24-processor SMP server at 95%, and scalability of a cluster of two 4-processor servers at 90%. It can be seen that databases in clusters (with horizontal scaling) face certain scalability limits. Chaining many small servers together does not provide the scalability needed for medium to large applications. The reasons are the bandwidth limits of intra-cluster interconnects, the additional burden on database software associated with cluster management, and the difficulty of writing applications for distributed-memory cluster environments.

Published benchmark results show, for example, that Oracle9i RAC (Real Application Cluster) achieves a speedup of 1.8 and a scaling efficiency of 90%. This efficiency may seem quite high, but in fact 90% scaling on four nodes turns out to be ineffective when compared with the results of large SMP servers.

Application-Level Performance

The application layer in a three-tier data center is very different from the database layer. Typically, applications at this level are stateless - in other words, no data is stored on the server itself, or only a small part of it is stored. This layer contains business rules for application services. Transactions come to the application level and are processed by it. When data needs to be written or read, transactions are passed to the database layer. Application servers tend to consolidate database connections because large numbers of connections have a negative impact on performance.

In most cases, the application server tier requires many more processors than the database tier per individual application service. For example, in the case of SAP R/3, this ratio is approximately 10 processors for each database processor, i.e., if SAP R/3 requires 20 processors for the database layer, then there should be approximately 200 processors for the application layer. The question is what is more profitable to deploy - 100 two-processor servers or ten 20-processor servers. Similarly, at Oracle the ratio of application processors to database processors is approximately 5 to 1.

It is believed that application servers do not need to be distributed across multiple nodes. Multiple copies of application software can be distributed across different physical servers of different capacities or across dynamic domains of large servers.

The number of processors required for the application layer will be approximately the same regardless of the computer architecture. The cost of purchasing hardware and software for a horizontal architecture will be lower, since the cost per processor is lower in this case. In most cases, horizontal systems can provide the performance required to meet the service level agreement. The costs associated with purchasing software licenses are approximately the same for both architectures.

At the same time, the costs of managing and maintaining infrastructure for a horizontal architecture may be higher. When deployed on horizontal systems, multiple copies of the OS and application server software are used. The costs of maintaining infrastructure usually grow in proportion to the number of copies of the OS and applications. Additionally, with a horizontal architecture, backup and disaster recovery becomes decentralized and the network infrastructure is more difficult to manage.

The cost of system administration is difficult to measure. Typically, models comparing horizontal and vertical application server deployments show that managing fewer, more powerful servers (vertical servers) is less expensive than managing many smaller servers. In general, when choosing the type of architecture to deploy an application layer, IT managers should carefully consider the cost of hardware acquisition.

Impact of Architecture on Availability

Availability is critical for modern data centers: application services must be available 24x7x365 (24 hours a day, 7 days a week, 365 days a year). Depending on the needs of a particular data center, different high-availability schemes are used; to select a specific solution, the acceptable downtime (planned and unplanned) must be determined. Fig. 3 shows how the availability percentage translates into the duration of downtime.
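
The arithmetic behind such charts is straightforward: downtime per year is the unavailable fraction of 8,760 hours. A short sketch for the availability levels discussed below (99.0% is added only for scale):

```python
# Converting an availability percentage into downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_percent):
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

for a in (99.0, 99.95, 99.975):
    print(f"{a}% availability -> {downtime_hours(a):.1f} h downtime/year")
# 99.0% -> 87.6 h;  99.95% -> 4.4 h;  99.975% -> 2.2 h
```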

As availability requirements increase, so does the cost of the solution. Data center managers must determine what combination of cost, complexity, and availability best meets service level requirements. Data centers that require approximately 99.95% availability can deploy a single SMP server with RAS features such as full hardware redundancy and online maintenance.

However, achieving availability greater than 99.95% requires a cluster. Sun Cluster software with HA (High Availability) failover provides 99.975% availability. HA failover uses a primary server and a hot standby; if the primary server fails, the standby takes over its load. The time it takes to restart a service varies by application and can be several minutes, especially for database applications that must roll back large volumes of data to restore transactions.

If downtime of a few minutes is unacceptable for a data center, an active-active system can be a solution, where the application is deployed on two or more nodes so that if one of them fails, the others will continue to run the application. As a result, the outage will be very short (some clients report that it lasts less than 1 minute), sometimes the user may not even notice the node failure.

Vertical servers provide high availability by embedding many RAS features into a single server to minimize planned and unplanned downtime. In horizontal servers, functions that provide a high level of RAS are implemented not at the level of an individual server, but through duplication and placement of several servers. Due to different implementations of RAS features and interconnects, horizontal servers are typically cheaper per processor.

For a three-tier architecture, a good example of horizontal high availability is the deployment of Web servers. You can deploy many small servers, each with a separate copy of the Web server software installed. If one Web server goes down, its transactions are redistributed among the remaining healthy servers. In the case of application servers, they can be hosted on both horizontal and vertical servers, and high availability is achieved through redundancy. Whether deploying a few large SMP servers or many smaller ones, redundancy remains the primary way to achieve high RAS at the application level.

However, at the database level the situation changes. Databases are stateful and by their nature require, in most cases, data to be partitioned and accessible from all processors/nodes. This means that for high availability with redundancy, you need to use clustering software such as Sun Cluster or Oracle9i RAC (for very high availability).

Conclusions

Both vertical and horizontal architectures have their niche in today's data center. While today's attention is focused on new technologies such as modular servers and parallel databases, market demand for mid-range and high-end servers remains high.

Vertical and horizontal systems can use the same software, OS, and even the same processors. The main difference that impacts price and performance is the interconnects used in each architecture. Horizontal servers use loosely coupled external interconnects, while vertical servers use tightly coupled interconnects that provide higher data transfer rates.

For the front end, horizontal servers typically provide the optimal solution in terms of performance, total acquisition cost, and availability. For the application layer, both vertical and horizontal architectures can be used effectively. For the database layer, the optimal solution is to use vertical servers, regardless of the required level of availability.

Among the numerous functions of an information system necessary for managing network logistics, we will first focus on two key “network” functions: assortment management and support for category management.

1. Assortment management in a network trading company.

Network retail trade enterprises, especially in the food sector, are characterized by the highest level of complexity of management tasks. A particularly challenging one is assortment management.

The better it is solved, the more efficiently the retail trade enterprise as a whole develops and the higher its competitiveness.

The assortment management task can be divided into two subtasks – “external” and “internal”.

The first is aimed at working with the buyer in terms of assortment, the second is aimed at facilitating the work of staff with assortment categories.

Successful solution of these problems should lead to improved product sales results.

To solve the "external" group of tasks effectively, it is necessary to:

  • 1) provide information about products to customers. Information and multimedia support systems are designed to help customers navigate the boundless sea of goods, make the right choice, and obtain valuable information in the shortest possible time. At the same time, they help retailers analyze consumer preferences, stimulate the sale of the necessary goods, optimize the layout of the sales floor, and place the assortment rationally, which ensures the successful solution of the external tasks of assortment management automation;
  • 2) solve personal marketing problems. Implementing the personal marketing function is one of the most important assortment management tasks for the "supermarket" and "hypermarket" formats. For a supermarket it is most important to conduct targeted personal marketing, tracking fluctuations in the preferences of specific regular customers of a given store, while for a hypermarket it is important to work with typical groups of customers belonging to a conventionally defined category of regular customers. For discounters, personal marketing is less relevant. To identify the preferences of regular customers, it is also extremely important that the information system be able to conduct a comprehensive analysis of sales and determine the structure of purchases;
  • 3) carry out high-quality visual merchandising. Effective display of goods on store shelves significantly increases sales volumes. To assess the quality of solutions to visual merchandising problems, the information system must be able to maintain and analyze planograms that describe the placement of goods on store shelves.

When solving the "internal" assortment management tasks, the following business processes must be automated:

1) active assortment management process (maintaining assortment matrices).

The fact is that information about a product, once entered into the database, remains in it for a long time. For example, with a current assortment of 7,000 items of goods, the system can store 20–30 thousand items of goods. Under these conditions, it is necessary to provide the system user with the opportunity to work only with current information about the active assortment (Fig. 3.4).

Fig. 3.4.

Solving this problem requires the following functions:

  • introduction of goods into the active assortment. This process is usually preceded by a series of trial marketing activities with the product, preparation of logistics, and pre-sale preparation of the product;
  • cessation of purchases of the product, as the first phase of removing it from the active assortment. Typical reasons for this include:
    • a) dissatisfaction with the product's sales results;
    • b) a change of assortment by the manufacturer;
    • c) problems in the relationship with the supplier; etc.;
  • cessation of replenishment of inventories from the company's distribution center;
  • cessation of work with the product, as the final phase of removing it from the assortment in the information system (this usually occurs when stocks reach zero);
  • deletion of information about the product from cash register systems (carried out, as a rule, after an inventory).
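
A minimal sketch of this lifecycle as a state machine (illustrative Python, not a real 1C:Enterprise configuration; the state names are invented):

```python
# The product lifecycle described above as a simple linear state machine.
LIFECYCLE = [
    "active",                  # in the active assortment
    "purchases_stopped",       # first removal phase
    "replenishment_stopped",   # no more stock from the distribution center
    "withdrawn",               # removed once stocks reach zero
    "deleted_from_pos",        # wiped from cash registers after inventory
]

class Product:
    def __init__(self, sku):
        self.sku, self.state = sku, "active"

    def advance(self):                        # move to the next removal phase
        i = LIFECYCLE.index(self.state)
        if i + 1 < len(LIFECYCLE):
            self.state = LIFECYCLE[i + 1]

    @property
    def in_active_assortment(self):
        return self.state == "active"

p = Product("0001")
p.advance()                                  # stop purchasing
print(p.state, p.in_active_assortment)       # purchases_stopped False
```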

Advantages of automating this business process:

  • convenience for users when working with the product range;
  • a significant reduction in the number of errors, since a product that does not belong to the active assortment cannot be included in documents;
  • the ability to receive analytical reports on the active assortment only;
  • increased productivity of the managers involved in assortment management; etc.

2) the process of managing the active assortments of retail enterprises of various formats included in a multi-format retail chain company (management of multiple assortment matrices).

Automation of this business process makes it possible to prevent the movement of goods to a management object whose assortment matrices do not include the product (Fig. 3.5).

Fig. 3.5.

It should also be noted that a high-quality solution to “internal” assortment management problems is of greatest importance for a multi-format chain retail trade enterprise.

2. Support for category management through the formation of product views and views of the management objects with which a specific category manager works.

A manager who manages specific product categories, combined into so-called strategic business units, needs to concentrate on a certain subset of products and management objects when working with the information system.

It is advisable for a category manager to see only what concerns “his product categories” so that the illusion is created that there is nothing in the information system except the goods included in his business unit and those management objects for which he is responsible.

The manager needs views of product flows that present logistics and analytical information through the prism of the strategic business unit with which he works within the framework of his functions.

To ensure work with the information system in this mode, it must implement the ability to assign product views and views of management objects.

At the same time, there are at least two basic types of product views - static and dynamic.

Each manager has his own product perspective, which defines his strategic business unit. In this case, managers responsible for the same business unit are assigned a single view.

In the case of defining a static product view, a set of products is actually recorded as a named list (Fig. 3.6). It is convenient for strictly fixing a set (for example, for conducting analysis).

Fig. 3.6.

In order to effectively administer product views for defining business units, it is better to define them on the nodes of the product classifier. Let's call such views dynamic (Fig. 3.7).

Fig. 3.7.

In this case, as soon as a new product is introduced into a specific subgroup, which is included in the dynamic perspective of the category manager, it automatically becomes an element of the strategic business unit, and the manager begins to work with it promptly.

When a product is moved to another subgroup (for example, due to a categorization change), it moves to another strategic business unit and is automatically transferred to another category manager for work.
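
The two view types can be sketched as follows (names and structures are illustrative, not taken from any specific system): a static view is a frozen, named list of products, while a dynamic view is defined on classifier nodes, so a newly added product joins it automatically.

```python
# Static vs. dynamic product views (illustrative sketch).
class StaticView:
    def __init__(self, skus):
        self.skus = set(skus)                 # frozen, named list of products

    def contains(self, product):
        return product["sku"] in self.skus

class DynamicView:
    def __init__(self, classifier_nodes):
        self.nodes = set(classifier_nodes)    # e.g. subgroups of the classifier

    def contains(self, product):
        return product["group"] in self.nodes

view = DynamicView({"dairy"})
new_product = {"sku": "0042", "group": "dairy"}
print(view.contains(new_product))  # True: no manual reassignment needed
# Moving the product to another subgroup transfers it to another
# manager's dynamic view automatically.
```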

The view of management objects is formed in a similar way - this is a static view that defines the list of stores and distribution centers within which a specific category manager operates (Fig. 3.8).

Fig. 3.8.

This approach allows system users, including manufacturers or suppliers of goods, to access information and the necessary functions of the information system within a certain subset of the active product range and corresponding trade objects.

This function is very important when implementing the VMI (Vendor Managed Inventory) logistics concept, in which a supplier or manufacturer participates in managing the supply chain of "its" goods.

In conclusion, let us formulate several conclusions from the above:

  • 1) managing the assortment of a trading enterprise is the most important task, the quality of the solution of which directly determines its success;
  • 2) the external group of assortment management tasks, especially for large-format retail enterprises, is meant to be solved by customer information systems (information kiosks, multimedia terminals, information carts, etc.);
  • 3) the ability to maintain assortment matrices, product views, and management object views in the information system facilitates solving the internal group of assortment management tasks, which directly determines the quality of the category management function at a trading enterprise.

Information system scalability

As a retail chain company develops, there sometimes comes a point when the information system can no longer support further business growth. The question of the information system's adequacy to the company's growth is therefore extremely important.

In this case, two aspects must be taken into account - adequacy to growth and scalability of the system.

If a company's growth is accompanied by a disproportionate increase in IT infrastructure costs, then the information system is not able to optimally support business expansion.

Information systems that are inadequate for the company's growth can lead to an accelerated increase in the costs of their operation.

First of all, the solution architecture must match the company's growth. When a company grows to hundreds of facilities, building the system on a distributed architecture, in our opinion, means facing ever-increasing IT support costs per store.

For a chain company managing a hundred or more retail outlets, it becomes increasingly difficult to synchronize data and then consolidate it in the center, and eventually this becomes impossible.

To ensure scalability of the information system (the ability to provide the required number of users, operate with the required amount of information with satisfactory performance), it is necessary to choose the right platform - appropriate software and server hardware.

In a growing retail company, the volume of sales information is measured not in gigabytes but in terabytes, and handling it is impossible without "industrial", scalable database management systems such as Oracle, Progress, etc.

Operating systems will also be needed, with the help of which it would be possible to “migrate” to another class of computing equipment.

It is obvious that when choosing an information system and operating it, retail chain companies whose strategy involves rapid growth need to seriously think about the scalability and cost of ownership of the information system.

We are convinced that as a company grows, distributed architecture becomes a colossal obstacle to reducing the costs of business management and operating IT infrastructure.

The centralized architecture of the information system implies lower cost of ownership and does not require a constant increase in the number of IT personnel as the retail network grows.

The scalability of corporate systems means the ability to increase their power by connecting new hardware and software without additional modification of the latter. This point is important when using modern computer and network technologies. An example can be given of distributed data processing in a central bank and its branches.

Scalability is achieved at various levels: a) Technical; b) System; c) Network; d) DBMS; e) Application. For an OS, scalability means that the OS is not tied to a single processor architecture. If the tasks facing the user become more complex and the requirements placed on the computer network expand, the OS must provide the ability to add more powerful and productive servers and workstations to the corporate network. One can consider the scalability of hardware, of software, and of the system as a whole. Scalability is based on technologies such as: a) International standards; b) Network and telecommunication technologies; c) Operating systems; d) Client/server technology and a number of other means.


Scalability is a property of a computing system that provides predictable growth in system characteristics, for example, the number of supported users, response speed, overall performance, etc., when computing resources are added to it. In the case of a DBMS server, two scaling methods can be considered - vertical and horizontal (Fig. 2).

With horizontal scaling, the number of DBMS servers is increased, possibly sharing the overall system load and communicating with each other transparently. This approach is likely to become increasingly popular as support for loosely coupled architectures and distributed databases grows, but it tends to be difficult to administer.

Vertical scaling means increasing the power of a single DBMS server, achieved by replacing hardware (processors, disks) with faster hardware or by adding nodes; a good example is increasing the number of processors in a symmetric multiprocessor (SMP) platform. The server software should not have to change in the process (in particular, purchasing additional modules should not be required), since that would increase administration complexity and make the system's behavior less predictable. Regardless of which scaling method is used, the gain is determined by how fully the server programs use the available computing resources. In the assessments that follow, we will consider vertical scaling, which, according to analysts, is experiencing the greatest growth in the modern computer market.
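
As a sketch of the horizontal option under these definitions (server names are invented, and a real deployment would also need to keep the replicas consistent), a trivial dispatcher can spread queries over several DBMS servers:

```python
from itertools import cycle

# A trivial round-robin dispatcher over several DBMS servers (sketch only).
class ScaledOutPool:
    def __init__(self, servers):
        self._next = cycle(servers)

    def execute(self, query):
        server = next(self._next)            # pick the next server in turn
        return f"{query!r} sent to {server}"

pool = ScaledOutPool(["db-node-1", "db-node-2", "db-node-3"])
print(pool.execute("SELECT ..."))   # db-node-1
print(pool.execute("SELECT ..."))   # db-node-2
# Vertical scaling keeps one server and grows it instead, so the
# application-facing interface does not change at all.
```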

The scalability property is relevant for two main reasons. First, modern business conditions change so quickly that long-term planning, which requires a comprehensive and lengthy analysis of already outdated data, becomes impossible even for organizations that can afford it; its place is taken by a strategy of increasing the power of information systems gradually, step by step. Second, changes in technology lead to ever newer solutions and lower hardware prices, which potentially makes the architecture of information systems more flexible. At the same time, the interoperability and openness of software and hardware products from different manufacturers are expanding, although so far their standardization efforts have been coordinated only in narrow market sectors. Without taking these factors into account, a consumer cannot take advantage of new technologies without freezing funds invested in technologies that are insufficiently open or have proven unpromising. In the area of data storage and processing, this requires that both the DBMS and the server be scalable. Today, the key scalability parameters are:

  • support for multiprocessing;
  • architectural flexibility.

Multiprocessor systems

For vertical scaling, symmetric multiprocessor (SMP) systems are increasingly being used, since in this case there is no need to change the platform, i.e. operating system, hardware, and administration skills. For this purpose, it is also possible to use systems with massive parallelism (MPP), but so far their use is limited to special tasks, for example, computational ones. When evaluating a DBMS server with a parallel architecture, it is advisable to pay attention to two main characteristics of the architecture's extensibility: adequacy and transparency.

The adequacy property requires that the server architecture equally support one or ten processors without reinstallation or significant changes in configuration, as well as additional software modules. Such an architecture will be equally useful and effective both in a single-processor system and, as the complexity of the tasks being solved increases, on several or even multiple (MPP) processors. In general, the consumer does not have to purchase or learn new software options.

Transparency of the server architecture, in turn, makes it possible to hide hardware configuration changes from applications, i.e. it guarantees the portability of application software systems. In particular, in tightly coupled multiprocessor architectures the application can communicate with the server through a shared memory segment, while in loosely coupled multiserver systems (clusters) a message mechanism can be used for this purpose. The application should not have to take the implementation details of the hardware architecture into account: the methods of data manipulation and the software interface for accessing the database must remain the same and equally effective.

High-quality support for multiprocessing requires the database server to be able to schedule on its own the execution of the many queries it serves, dividing the available computing resources between server tasks as fully as possible. Queries can be processed sequentially by several tasks or divided into subtasks that are executed in parallel (Fig. 3). The latter is more optimal, because a proper implementation of this mechanism provides benefits independent of request types and applications. Processing efficiency is greatly influenced by the granularity of the operations handled by the scheduler. With coarse granularity, for example at the level of individual SQL queries, the division of computer system resources (processors, memory, disks) will not be optimal: a task will sit idle waiting for the I/O operations needed to complete its SQL query even if other queries requiring significant computational work are waiting in the queue. With finer granularity, resources are shared even within a single SQL query, which shows even more clearly when several queries are processed in parallel. Using a scheduler ensures that system resources are devoted to the actual database maintenance tasks and minimizes downtime.
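
A sketch of the fine-grained approach (illustrative Python, with a thread pool standing in for the server's scheduler; the function names are invented): one logical query is split into subtasks that run in parallel and are then merged.

```python
from concurrent.futures import ThreadPoolExecutor

def scan_fragment(fragment):      # stands in for scanning one table fragment
    return sum(fragment)

def run_query(fragments, workers=4):
    # Fine granularity: the subtasks of a single query share the worker
    # pool, so an I/O-bound subtask does not leave the system idle.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(scan_fragment, fragments))  # merge partial results

fragments = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
print(run_query(fragments))   # 4498500, same answer as a serial scan
```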

Architectural flexibility

Regardless of the degree of portability, support for standards, parallelism, and other useful qualities, the performance of a DBMS with significant built-in architectural limitations cannot be increased freely. Documented or practical limits on the number and size of database objects and memory buffers, the number of simultaneous connections, the recursion depth of procedure calls and subqueries, or the firing of database triggers are just as much a limitation on the applicability of a DBMS as, for example, the inability to port it to several computing platforms. Parameters that limit the complexity of database queries, especially the sizes of dynamic buffers and the stack size for recursive calls, should be dynamically configurable and should not require stopping the system for reconfiguration. There is no point in purchasing a powerful new server if expectations cannot be met due to the internal limitations of the DBMS.

Typically, the bottleneck is the inability to dynamically adjust the characteristics of the database server programs. The ability to adjust on the fly parameters such as the amount of memory consumed, the number of processors in use, the number of parallel job threads (whether real threads, operating-system processes, or virtual processors), and the number of fragments of database tables and indexes, as well as their distribution across physical disks, WITHOUT stopping and restarting the system, is a requirement arising from the essence of modern applications. Ideally, each of these parameters should be changeable dynamically within user-specified limits.


