We host close to 50% of our customers on AWS, including several large customers. For start-ups, this means there is no need for an IT department to manage your data. As an extra service, we can even deal with your subcontractors (subcons) directly; it’s up to you. You can concentrate on solving yield issues and leave the data issues to us.
Customers that choose to host their yieldHUB system with us get the benefit of one of the largest cloud platforms in the world: Amazon Web Services (AWS), which holds a 42% share of the global cloud provider market. MFG Vision utilises multiple AWS instances in Europe, North America and Asia, meaning we can place your data close to your engineers, which gives great responsiveness. The huge network bandwidth Amazon has in place between its global regions means that if your data needs to move to another region, we can do so very quickly. We recently moved one large customer’s data from North America to London overnight, with zero downtime during their local working day. Having engineers on several continents also means that, whatever the time of day, there is a human being awake somewhere ready to respond to any issues.
Backing up your data on Amazon Web Services is very transparent. It has such a negligible impact on the performance of the live system that we do it several times a day, not simply daily or weekly. We can even schedule customers independently of each other to fit their individual data-loading patterns.
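To give a flavour of what per-customer backup scheduling can look like, here is a minimal sketch using the boto3-style EBS snapshot API. The customer names, backup windows and volume ID are purely illustrative, not our actual configuration; the real system would run this from a scheduler.

```python
from datetime import time

# Illustrative per-customer backup windows (hypothetical names and times),
# chosen to avoid each customer's heaviest data-loading hours.
BACKUP_WINDOWS = {
    "customer-eu": [time(2, 0), time(10, 0), time(18, 0)],
    "customer-us": [time(6, 0), time(14, 0), time(22, 0)],
}

def snapshot_params(customer: str, volume_id: str) -> dict:
    """Build the arguments for an EBS create_snapshot call for one customer."""
    return {
        "VolumeId": volume_id,
        "Description": f"scheduled backup for {customer}",
        "TagSpecifications": [{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "customer", "Value": customer}],
        }],
    }

# In production a scheduler would fire at each window and call, e.g.:
#   import boto3
#   boto3.client("ec2").create_snapshot(**snapshot_params(...))
params = snapshot_params("customer-eu", "vol-0123456789abcdef0")
```

Tagging each snapshot with the customer name is what makes independent retention and restore per customer straightforward later on.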
For customers who choose to host on their own hardware on site, cloud infrastructure still plays an important role. We maintain offline functional clones, minus restricted data, of customers’ installations, which we can boot in seconds to do specialist customisation work or to investigate customer-specific server issues. In many cases we can replicate an issue remotely, fix it and supply a patch incredibly quickly.
There is an interesting trend in the size of datasets that fabless customers want to use for trials of our semiconductor yield management software. While some still take the more traditional approach of a few carefully selected datalogs, it is becoming common for customers to want more than 30GB of compressed data, in the region of 300GB uncompressed, to be used in the trial.
For our system this is no problem, and it is a pleasure to show the system off properly. No more saying, “Well, this report would look better with more data….”. Infrastructure-wise, though, this was an issue in the past. We had dedicated servers running with fixed total capacity, so a trial that could require up to 100GB of additional storage on the server meant physically changing disks, which is a big overhead. Since we switched our hosting over to Amazon Web Services (AWS), adding a dedicated 100GB storage volume is just a few mouse clicks (well, a little more to it than that, but I’m not the IT guy!).
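The same “few mouse clicks” can also be done programmatically. Below is a hedged sketch of requesting a dedicated 100GB volume via the boto3-style EC2 API; the region, availability zone and tag values are placeholders for illustration, not real infrastructure details.

```python
def trial_volume_request(size_gb: int, az: str) -> dict:
    """Arguments for ec2.create_volume: a dedicated gp3 volume for one trial."""
    return {
        "Size": size_gb,          # GB of dedicated storage for the trial
        "AvailabilityZone": az,   # keep the volume next to the trial server
        "VolumeType": "gp3",
        "TagSpecifications": [{
            "ResourceType": "volume",
            "Tags": [{"Key": "purpose", "Value": "yield-trial"}],
        }],
    }

# In practice this would be passed to the EC2 client, e.g.:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="eu-west-1")
#   vol = ec2.create_volume(**trial_volume_request(100, "eu-west-1a"))
#   ...then wait for the volume to become "available" and attach it...
request = trial_volume_request(100, "eu-west-1a")
```

Tagging the volume with its purpose also makes it easy to find and delete when the trial ends, which is what keeps the cost limited to the life of the trial.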
From a customer success point of view this is revolutionary. With multiple potential customers requiring big-data trials at the same time, it is not a problem, and the cost is manageable as it only lasts for the life of the trial.
The growth that we’re seeing is quite rapid. In 2015 a large dataset for a trial was around 3GB, by 2016 more like 15GB and now >50GB.
Exactly why the value of around 30GB keeps coming up with different customers is not clear; perhaps it is just beyond the limits of some older systems. But with the advantages of cloud computing, we live in a golden age of data.
Can yieldHUB Help You?
Contact yieldHUB today! Our global sales and support team will be happy to help.