Once, when we were talking with a client about implementing a large B2B system, he asked me directly: “Can Magento handle it?”

A lot is said about Magento being big, slow and cumbersome – and the scale here was no joke: 50,000 products at the start of the project, with plans to expand to 150,000 after a year. On top of that, initial traffic was planned at around 70,000 unique users, growing to 250,000 within a year.

The client’s question, like every good, direct question, forced me to analyze the issue more deeply. I like digging into technical details, so I decided to take a harder look at the structure of the application and database, perform engineering tests, and check benchmarks – both from around the world and from Divante’s own experience.

I also had the opportunity to revisit some white papers about Magento’s capabilities. I recommend reading them, but they are very technical in nature; you can download them from Magento’s website.

Conclusions from the exercise undertaken

  1. The Professional version DOES NOT differ in its database structure from the Community version. This is the primary factor influencing performance. The Entity-Attribute-Value (EAV) database model used in Magento provides outstanding flexibility, but at the cost of reduced performance. Magento compensates for this drop with table flattening, EAV data indexing and internal compilation. These mechanisms work the same way in every version of Magento.
  2. This is confirmed by the white paper on the scalability of the Enterprise version that I referred to earlier. All of the methods listed in it, except for Full Page Caching, can also be applied in the Community version.
  3. Magento is one of the few platforms whose code I have had the opportunity to analyze that supports database replication (Master-Slave and Master-Master) at the configuration level. When we created Ganglib, our PHP library, the idea of routing queries based on writes vs. reads came from Magento (then version 0.9!).
  4. Benchmarks show that much more is possible.
    A bit of source information:
    – people report stores with 200,000–500,000 products; we observed no system slowdowns between 50,000 and 72,000 products;
    – these are answers from Magento staff and other people with extensive experience, including one interesting response: Dmitriy’s research also shows that Magento scales really well when given more processor cores (slide 25) or multiple web nodes, i.e. front-end servers (slide 27). As you can see on slide 28, the checkout page of a store with 10,000 products and 100 concurrent connections can process 80,000+ orders per hour (22 per second).
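As for point 3, in Magento 1 the read connection could be pointed at a replica directly from `app/etc/local.xml`. The fragment below is a hedged sketch – the hostnames and credentials are placeholders, and the exact node set may vary between versions:

```xml
<!-- app/etc/local.xml (excerpt) – placeholder hosts and credentials -->
<config>
  <global>
    <resources>
      <default_setup>
        <connection>
          <host>db-master.example.com</host>
          <username>magento</username>
          <password>***</password>
          <dbname>magento</dbname>
          <active>1</active>
        </connection>
      </default_setup>
      <!-- reads (SELECTs) go to the slave; writes keep using default_setup -->
      <default_read>
        <connection>
          <use/>
          <host>db-slave.example.com</host>
          <username>magento_ro</username>
          <password>***</password>
          <dbname>magento</dbname>
          <model>mysql4</model>
          <type>pdo_mysql</type>
          <active>1</active>
        </connection>
      </default_read>
    </resources>
  </global>
</config>
```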
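The table flattening mentioned in point 1 can be sketched in a few lines. The schema below is a deliberately simplified toy (the table and column names are hypothetical – Magento’s real catalog schema is far more involved), but it shows why reading EAV data requires one join per attribute and why a periodically rebuilt flat table is so much cheaper to query:

```python
# Simplified sketch of the EAV-to-flat-table idea (hypothetical table
# names; Magento's real schema is far more involved).
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# EAV layout: one generic value table instead of one column per attribute.
c.execute("CREATE TABLE entity (entity_id INTEGER PRIMARY KEY)")
c.execute("CREATE TABLE attribute (attribute_id INTEGER PRIMARY KEY, code TEXT)")
c.execute("CREATE TABLE value (entity_id INTEGER, attribute_id INTEGER, value TEXT)")

c.execute("INSERT INTO entity VALUES (1)")
c.executemany("INSERT INTO attribute VALUES (?, ?)",
              [(1, "name"), (2, "price")])
c.executemany("INSERT INTO value VALUES (?, ?, ?)",
              [(1, 1, "Widget"), (1, 2, "9.99")])

# Reading one product needs a join per attribute...
row = c.execute("""
    SELECT n.value, p.value
    FROM entity e
    JOIN value n ON n.entity_id = e.entity_id AND n.attribute_id = 1
    JOIN value p ON p.entity_id = e.entity_id AND p.attribute_id = 2
    WHERE e.entity_id = 1""").fetchone()

# ...so an indexer periodically "flattens" the EAV data into a plain
# table that the storefront can query with a single cheap lookup.
c.execute("CREATE TABLE flat_product (entity_id INTEGER PRIMARY KEY, name TEXT, price TEXT)")
c.execute("INSERT INTO flat_product VALUES (1, ?, ?)", row)
flat = c.execute("SELECT name, price FROM flat_product WHERE entity_id = 1").fetchone()
print(flat)  # ('Widget', '9.99')
```

The trade-off is exactly the one described above: the EAV side stays flexible (new attributes are just rows), while the flat side restores read performance at the cost of re-indexing.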
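The throughput figure quoted from the benchmark is easy to sanity-check:

```python
# 80,000 orders per hour expressed as orders per second
orders_per_hour = 80_000
orders_per_second = orders_per_hour / 3600
print(round(orders_per_second))  # 22
```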

Tests that we performed (on a dedicated server – 4 GB RAM, an SSD disk for the database, and initial MySQL tuning consisting of enlarging the query cache and the memory allocated to InnoDB data and indexes):

  • test on a base of 72,000 generated products: Magento responds smoothly, the purchase path works, the administrator’s panel works. Cache and APC were turned on in the application, while compilation was turned off.
  • test on a base of 250,000 generated products: Magento responds smoothly, the purchase path works, and the administrator’s panel needs some minor tweaks (in the part concerning products). Cache was turned on, compilation was left off, and APC was used.

The tests were not performed under load (a single active user plus click-through scenarios across the site).
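For reference, the MySQL tuning mentioned above might look roughly like the excerpt below; the parameter values are illustrative assumptions for a 4 GB machine, not a universal recommendation:

```ini
# my.cnf excerpt (illustrative values for a ~4 GB dedicated box)
[mysqld]
# keep hot InnoDB data and indexes in RAM
innodb_buffer_pool_size = 2G
innodb_log_file_size    = 256M
# MySQL query cache (still available in the MySQL versions of that era)
query_cache_type        = 1
query_cache_size        = 128M
```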

Largest Magento implementations in the world:

Magento can be very efficient, because it is well written and well designed – and that is the foundation of efficiency and scalability. Equal attention should be paid during the implementation phase to good eCommerce site architecture and to the technological concept.

If anyone’s interested in the details – how to test, how to maximize efficiency, or an analysis of requirements for an interesting project – just drop me a line.