Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a new approach to streamlining the knitting process: a system and design tool for automating knitted garments. The system, called ‘InverseKnit’, translates photos of knitted patterns into instructions that machines then use to make clothing.

A paper on InverseKnit was presented by Alexandre Kaspar, a CSAIL PhD student, alongside MIT postdocs Tae-Hyun Oh and Petr Kellnhofer, PhD student Liane Makatura, MIT undergraduate Jacqueline Aslarus, and MIT professor Wojciech Matusik, at the International Conference on Machine Learning this past June in Long Beach, CA.

Programming knitting machines for new designs can be a tedious and complicated ordeal: every single stitch has to be specified, and even one mistake can throw off the entire garment. An approach like ‘InverseKnit’ could let casual users create designs without a memory bank of coding knowledge, and could even reduce waste and improve efficiency in manufacturing, according to a press release from MIT.

“As far as machines and knitting go, this type of system could change accessibility for people looking to be the designers of their own items,” said Kaspar. “We want to let casual users get access to machines without needing programming expertise, so they can reap the benefits of customisation by making use of machine learning for design and manufacturing.”

In another paper, researchers came up with a computer-aided design tool for customising knitted items. The tool lets non-experts use templates for adjusting patterns and shapes, like adding a triangular pattern to a beanie, or vertical stripes to a sock. Users can make items customised to their own bodies, while also personalising for preferred aesthetics.

To get InverseKnit up and running, the CSAIL team first created a dataset of knitting instructions and the matching images of those patterns. They then trained a deep neural network on that data to interpret 2-D knitting instructions from images. When testing InverseKnit, the team found that it produced accurate instructions 94 per cent of the time.
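For a rough sense of how such a pipeline can be set up, the sketch below trains a small convolutional network to map a swatch photo to a grid of per-cell instruction labels. It is a minimal illustration only: the class names, the 17-instruction alphabet, the 20×20 grid size and the architecture are assumptions made for the example, not the actual InverseKnit model.

```python
# Illustrative sketch of an image-to-instruction model in the spirit of InverseKnit.
# The instruction alphabet size, grid resolution, and architecture are assumptions.
import torch
import torch.nn as nn

NUM_INSTRUCTIONS = 17          # assumed size of the machine-knitting instruction alphabet
GRID_H, GRID_W = 20, 20        # assumed resolution of the 2-D instruction grid

class ImageToInstructions(nn.Module):
    """Map a photo of a knitted swatch to a grid of per-cell instruction logits."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((GRID_H, GRID_W)),
        )
        self.head = nn.Conv2d(128, NUM_INSTRUCTIONS, kernel_size=1)

    def forward(self, image):                   # image: (B, 3, H, W)
        return self.head(self.encoder(image))   # logits: (B, NUM_INSTRUCTIONS, GRID_H, GRID_W)

model = ImageToInstructions()
loss_fn = nn.CrossEntropyLoss()                 # per-cell classification against known instructions
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(images, instruction_grids):
    """images: swatch photos; instruction_grids: (B, GRID_H, GRID_W) integer labels."""
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, instruction_grids)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a setup like this, one natural way to score the model is the fraction of grid cells whose predicted instruction matches the ground truth, which is the spirit, though not necessarily the exact definition, of the 94 per cent figure quoted above.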

“Current state-of-the-art computer vision techniques are data-hungry, and they need many examples to model the world effectively,” says Jim McCann, assistant professor in the Carnegie Mellon Robotics Institute. “With InverseKnit, the team collected an immense dataset of knit samples that, for the first time, enables modern computer vision techniques to be used to recognise and parse knitting patterns.”

The system currently works with a small sample size and a single type of acrylic yarn; the team hopes to expand the sample pool and test different materials so that InverseKnit can be employed on a larger scale and made more flexible.

While there have been plenty of developments in the field, such as Carnegie Mellon’s automated knitting processes for 3-D meshes, these methods can often be complex and ambiguous. The distortions inherent in 3-D shapes make it harder to understand the positions of the items, and this can be a burden on designers.

To address this design issue, Kaspar and his colleagues developed a tool called ‘CADKnit’, which uses 2-D images, CAD software, and photo-editing techniques to let casual users customise templates for knitted designs. The tool lets users design both patterns and shapes in the same interface; with other software systems, customising one can mean losing work on the other.
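As a loose illustration of that single-interface idea, and not the actual CADKnit implementation, the sketch below keeps a garment’s shape parameters and its surface pattern in one template object, so that resizing and re-patterning both feed the same stitch grid. All class names, fields, and numbers are hypothetical.

```python
# Hypothetical template object combining shape and pattern in one editable structure.
from dataclasses import dataclass, field

@dataclass
class BeanieTemplate:
    circumference_cm: float = 56.0                # shape parameters the user can adjust
    height_cm: float = 20.0
    gauge_stitches_per_cm: float = 1.8
    pattern: list = field(default_factory=list)   # per-row stitch pattern, e.g. "knit", "purl"

    def stitch_grid(self):
        """Expand shape + pattern into the per-stitch grid a knitting machine would consume."""
        width = round(self.circumference_cm * self.gauge_stitches_per_cm)
        height = round(self.height_cm * self.gauge_stitches_per_cm)
        grid = []
        for row in range(height):
            base = self.pattern[row % len(self.pattern)] if self.pattern else "knit"
            grid.append([base] * width)
        return grid

beanie = BeanieTemplate()
beanie.height_cm = 22.0                           # resize for a specific head
beanie.pattern = ["knit", "knit", "purl"]         # add a simple ribbed texture
grid = beanie.stitch_grid()                       # both edits are reflected in one output
```

Because shape and pattern live in the same object, a change to either one regenerates the full stitch grid, which is the kind of behaviour the article describes other tools lacking.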

“Whether it’s for the everyday user who wants to mimic a friend’s beanie hat, or a subset of the public who might benefit from using this tool in a manufacturing setting, we’re aiming to make the process more accessible for personal customisation,” said Kaspar.

The team tested the usability of CADKnit by having non-expert users create patterns for their garments and adjust their size and shape. In post-test surveys, the users said they found it easy to manipulate and customise their socks or beanies, and they successfully fabricated multiple knitted samples. They noted that lace patterns were tricky to design correctly and would benefit from fast, realistic simulation.

However, the system is only a first step towards full garment customisation. The authors found that garments with complicated interfaces between different parts, such as sweaters, didn’t work well with the design tool. A sweater’s trunk and sleeves can be connected in various ways, and the software didn’t yet have a way of describing that whole design space.

Furthermore, the current system can only use one yarn for a shape, but the team hopes to improve this by introducing a stack of yarns at each stitch. To enable work with more complex patterns and larger shapes, the researchers also plan to use hierarchical data structures that incorporate only the necessary stitches, rather than every stitch individually.
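To make that plan a little more concrete, here is one hypothetical way such a hierarchical representation could look, with uniform regions stored once and per-stitch yarn stacks recorded only where they differ; none of these names or structures come from the papers themselves.

```python
# Illustrative hierarchical stitch representation: uniform regions are stored once,
# and only exceptional regions (extra yarns, different operations) are stored explicitly.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StitchRegion:
    """A rectangular block of identical stitches, recorded once instead of per stitch."""
    rows: int
    cols: int
    operation: str = "knit"
    yarns: List[str] = field(default_factory=lambda: ["yarn_0"])  # a stack of yarns per stitch

@dataclass
class Panel:
    """A garment piece as a tree: uniform regions plus explicitly stored exceptions."""
    regions: List[StitchRegion] = field(default_factory=list)
    children: List["Panel"] = field(default_factory=list)

    def stitch_count(self) -> int:
        own = sum(r.rows * r.cols for r in self.regions)
        return own + sum(c.stitch_count() for c in self.children)

# A sleeve stored as one large plain region plus a small two-yarn ribbed cuff.
sleeve = Panel(
    regions=[StitchRegion(rows=120, cols=60)],
    children=[Panel(regions=[StitchRegion(rows=6, cols=60, operation="rib",
                                          yarns=["yarn_0", "yarn_1"])])],
)
print(sleeve.stitch_count())   # 7560 stitches represented by just two region records
```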

“The impact of 3-D knitting has the potential to be even bigger than that of 3-D printing. Right now, design tools are holding the technology back, which is why this research is so important to the future,” said McCann.

A paper on the design tool was led by Kaspar alongside Makatura and Matusik.