By Jason Bent, Principal R&D Engineer, MFG Vision

In the manufacturing of physical, non-organic items there are, roughly speaking, three approaches:

  1. Subtractive: start with a block of stuff and remove bits until you are left with the object you want
  2. Additive: build up increasing amounts of stuff until you have the thing you want
  3. Assembly: combine several items (made by one of the first two techniques) until you have the thing you want, usually a ‘complex’ machine.

We can think of data in similar terms. Subtractive: start with a big table and filter out bits until you have just the bits you’re interested in. Additive: stick together small sets of data of a common type until you have the set you’re interested in. Finally, assembly: put together data from different sources and types to create the big picture, a complex machine.
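A minimal sketch of the three styles in Python (the machine names and fields are hypothetical, just to make the analogy concrete):

```python
# Subtractive: start with one big table and filter down to the bits we want.
big_table = [
    {"machine": "M1", "temp": 71},
    {"machine": "M2", "temp": 85},
    {"machine": "M1", "temp": 90},
]
hot_m1 = [r for r in big_table if r["machine"] == "M1" and r["temp"] > 80]

# Additive: build the set we want by sticking together small sets of a common type.
m1_readings = [{"machine": "M1", "temp": 71}, {"machine": "M1", "temp": 90}]
m3_readings = [{"machine": "M3", "temp": 66}]
combined = m1_readings + m3_readings

# Assembly: join data of different types into a bigger picture.
machine_info = {"M1": {"line": "A"}, "M3": {"line": "B"}}
assembled = [{**r, **machine_info[r["machine"]]} for r in combined]
```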

In physical object manufacturing, additive techniques have enjoyed something of a resurgence in recent years. Additive 3D printing now offers rapid production of mechanically viable parts, not simply pre-production models that give stakeholders warm fuzzy feelings before signing off that six- or seven-figure tooling cheque.

Manufacturing Data Analysis

When we look at how manufacturing data analysis is typically done, we notice something a bit odd. Generally we do additive first, and then, when we want to investigate our data, we do subtractive. More specifically, we add together many sets of data into a few huge tables, and then, when we want to look at that data, we apply some query filter to the superset to produce a smaller subset that is of interest to us.

So what? Well, the thing is, this is both inefficient and inflexible. It’s inefficient because it has redundant steps in the process but also, significantly, because it requires a superset that is huge: big enough to contain all the data we could possibly be interested in.

This brings me on to inflexibility. Look at the title of this blog; in non-symbolic form it reads ‘the union of sets A and C is a subset of the union of sets A, B, C and D’. What if we were interested in A∪E? We can’t do it, because we never included E in our original set of everything we could possibly be interested in. With an additive approach it wouldn’t matter that we hadn’t included E to start with.
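The argument above can be written out in a few lines of Python set arithmetic (a toy sketch, not tied to any particular database):

```python
# Subtractive world: load everything we might possibly want up front...
A, B, C, D = {1, 2}, {3}, {4, 5}, {6}
superset = A | B | C | D

# ...then filter out the subset of interest. The blog title in code:
# A∪C is a subset of A∪B∪C∪D.
assert (A | C) <= superset

# But E was never loaded, so A∪E cannot be filtered out of the superset.
E = {7}
assert not (A | E) <= superset

# Additively it's trivial: just union the two small sets on demand.
AE = A | E
```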

Smaller Chunks

It looks to me that the reason we do this is historically driven by the tools we use. SQL-type databases, spreadsheets and the like are strongly biased toward subtractive techniques, or filters to use the more common term. Maybe we need to turn our thinking on its head. Instead of trying to work out everything that we may possibly be interested in and lumping it all together, to be carved out later, why not store our data in smaller chunks? The trick is to organise those chunks in a way that lets you get at them efficiently.

It turns out computers are actually very good at doing this. In fact, when you want to combine your small chunks into a bigger chunk to work with, you don’t even need to move those small chunks around. They can stay where they are.
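One way to see this in miniature, using Python’s itertools.chain as a stand-in for whatever the storage layer actually does (the chunk names here are invented for illustration):

```python
from itertools import chain

# Each chunk stays where it is -- three separate lists standing in
# for three separately stored data sets.
chunk_2023_q1 = [10, 11]
chunk_2023_q2 = [12, 13]
chunk_2024_q1 = [14]

# "Combining" is just a lazy view over the chunks we care about;
# nothing is copied or moved until we iterate.
view = chain(chunk_2023_q1, chunk_2024_q1)
total = sum(view)  # consumes the view: 10 + 11 + 14
```

The point is the direction of the operation: we picked exactly the chunks we wanted and added them together on demand, rather than building one huge table and filtering it.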

Secret Sauce

There is one last trick. In the opening of this post I mentioned assembly as our third technique, and here is where the secret sauce is. Combining like data is easy enough, but combining disparate sources in an on-demand, additive manner is harder. But that’s all I’m saying on that; if you want to know more, you’ll have to speak to us.