There is an interesting growth trend in the size of datasets that customers want to use for trials of our semiconductor yield management software. While some still take the more traditional approach of a few carefully selected datalogs, it is becoming common for customers to want more than 30GiB of compressed data (in the region of 300GiB uncompressed) to be used in the trial.

For our system this is no problem, and it is a pleasure to show the system off properly. No more saying, "Well, this report would look better with more data…". Infrastructure-wise, though, this used to be an issue. In the past we ran dedicated servers with fixed total capacity, so a trial that could require up to 100GiB of additional storage meant physically changing disks, which is a big overhead. But since MFG Vision switched our hosting over to Amazon Web Services (AWS), adding a dedicated 100GiB storage volume is just a few mouse clicks (well, there's a little more to it than that, but I'm not the IT guy!).

From a customer success point of view this is revolutionary. Multiple potential customers can now run big-data trials at the same time without any problem, and the cost is manageable because the extra storage only exists for the life of the trial.
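To give a feel for why that cost stays manageable, here is a back-of-the-envelope sketch. The per-GiB-month price and the trial length are illustrative assumptions, not quoted AWS rates or real trial terms:

```python
# Assumed illustrative price, NOT a quoted AWS rate.
PRICE_PER_GIB_MONTH = 0.10  # USD per GiB-month for a general-purpose EBS-style volume

def trial_storage_cost(size_gib: int, months: float) -> float:
    """Estimate the cost of keeping a storage volume for the life of a trial."""
    return size_gib * PRICE_PER_GIB_MONTH * months

# A 100GiB volume kept only for a hypothetical three-month trial:
print(f"${trial_storage_cost(100, 3):.2f}")
```

The key point is the last argument: because the volume is deleted when the trial ends, the cost scales with trial duration rather than being a permanent hardware purchase.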

The growth that we're seeing is quite rapid: in 2015 a large dataset for a trial was around 3GiB, by 2016 it was more like 15GiB, and now in 2017 it is over 30GiB.
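To put a number on "quite rapid", a quick sketch of the year-over-year multipliers implied by those figures:

```python
# Trial dataset sizes (GiB) from the years mentioned above.
sizes_gib = {2015: 3, 2016: 15, 2017: 30}

years = sorted(sizes_gib)
factors = {(p, c): sizes_gib[c] / sizes_gib[p] for p, c in zip(years, years[1:])}
for (p, c), f in factors.items():
    print(f"{p} -> {c}: {f:.0f}x growth")
```

That is a 10x increase over two years, so roughly tripling every year on average.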

Exactly why the figure of around 30GiB keeps coming up with different customers is not clear; perhaps it is just beyond the limits of some older systems. But with the advantages of private cloud computing, we live in a golden age of data.