
An automated building-brick (aka Lego) sorter that uses AI to classify pieces. The AI is trained on automatically generated 3D-rendered images.

The aim of this project is to develop and build an automatic building-brick sorter. The machine is based on a 3D printer frame, utilising a CoreXY mechanism to move bricks of similar shape and size into storage bags.
Unsorted pieces are placed in a hopper and the machine is started. An elevator and a vibration tray feed single pieces onto a conveyor, with reflective IR sensors ensuring a steady flow of individual pieces. Once a piece is on the conveyor, another IR sensor ensures it is moved into the field of view of a Maxim MAX78000 Feather board. The MAX78000 runs a pre-trained Convolutional Neural Network (CNN), which identifies the piece id from the image captured by the on-board VGA camera. The piece id is then sent over a serial link to the Arduino controlling the mechanical parts of the sorter. If the piece id has already been seen in this session, the XY mechanism guides the piece to the outlet used previously. If an unused outlet is available, it is allocated to that piece id and the piece is placed there. If all outlets are already allocated, the piece is sent to the recycle outlet; likewise, if the CNN fails to identify the piece or reports low confidence, the piece goes to the recycle outlet.
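These allocation rules can be captured in a few lines. Below is a minimal sketch in Python; the actual firmware runs in C/C++ on the Arduino, and the class name, outlet numbering and confidence threshold are illustrative assumptions rather than the project's code.

# Minimal sketch of the outlet-allocation logic described above.
# The real implementation runs as C/C++ on the Arduino; this Python
# version only illustrates the decision rules.  All names and the
# confidence threshold are assumptions, not the actual firmware.

RECYCLE = "recycle"          # dedicated recycle outlet
CONFIDENCE_THRESHOLD = 0.80  # assumed cut-off for a trustworthy CNN result

class OutletAllocator:
    def __init__(self, num_outlets):
        self.free_outlets = list(range(num_outlets))  # physical outlets 0..n-1
        self.assigned = {}                            # piece id -> outlet

    def outlet_for(self, piece_id, confidence):
        # Unidentified or low-confidence pieces always go to recycle.
        if piece_id is None or confidence < CONFIDENCE_THRESHOLD:
            return RECYCLE
        # A piece id seen earlier in this session keeps its outlet.
        if piece_id in self.assigned:
            return self.assigned[piece_id]
        # Otherwise allocate the next unused outlet, if any remain.
        if self.free_outlets:
            outlet = self.free_outlets.pop(0)
            self.assigned[piece_id] = outlet
            return outlet
        # All outlets are taken: send the piece to recycle.
        return RECYCLE

if __name__ == "__main__":
    alloc = OutletAllocator(num_outlets=4)
    print(alloc.outlet_for("3001", 0.95))  # new id -> outlet 0
    print(alloc.outlet_for("3001", 0.97))  # same id -> outlet 0 again
    print(alloc.outlet_for("3020", 0.40))  # low confidence -> recycle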
The Maxim board and conveyor are built as a self-contained unit and could be daisy-chained, i.e. the recycle outlet of one unit could feed the input of the next. This would allow capacity to be increased easily, or each unit to be specialised.
The most difficult part of this design is the training of the CNNs, which suggests a virtual learning approach utilising 3D CAD data. Fortunately there is a community-maintained database of building-brick CAD data, freely downloadable from the www.ldraw.org website. As it is actively maintained, the officially released version changes every quarter. The specific version used here contains 6773 items in the parts folder, plus sub-parts in other folders. The aim was to translate this CAD data into rendered images which could then be used to train the CNN. This turns out to be a six-stage process, as follows.
  1. Create software to convert the LDraw data format into a standard 3D model format suitable for rendering (a minimal parsing sketch follows this list).
  2. Create a second program to render the pieces under adjustable lighting conditions and with varying orientations, producing many still images suitable for CNN training (see the rendering sketch below).
  3. Utilise standard AI software (PyTorch) to create and train the CNN (see the training sketch below).
  4. Transfer the trained CNN to the Maxim MAX78000 board and test its accuracy and performance.
  5. Build the mechanics.
  6. Evaluate and improve.
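For step 1, the basic idea is sketched below in Python: read the triangle (line type 3) and quadrilateral (line type 4) records of a single LDraw .dat file and write them out as a Wavefront OBJ. A full converter also has to resolve the type 1 sub-file references recursively and apply their transforms; that is omitted here, and this is an illustrative sketch rather than the project's actual converter.

# Sketch of step 1: convert the triangle (type 3) and quad (type 4)
# records of one LDraw .dat file into a Wavefront OBJ file.
# A real converter must also resolve type 1 sub-file references
# recursively and apply their 3x3 transform plus offset.
import sys

def ldraw_to_obj(dat_path, obj_path):
    vertices, faces = [], []

    def add_face(coords):
        # coords is a flat list of floats: x1 y1 z1 x2 y2 z2 ...
        idx = []
        for i in range(0, len(coords), 3):
            vertices.append(coords[i:i + 3])
            idx.append(len(vertices))      # OBJ indices are 1-based
        faces.append(idx)

    with open(dat_path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "3":            # triangle: colour + 9 coordinates
                add_face([float(v) for v in parts[2:11]])
            elif parts[0] == "4":          # quad: colour + 12 coordinates
                add_face([float(v) for v in parts[2:14]])

    with open(obj_path, "w") as out:
        for x, y, z in vertices:
            out.write(f"v {x} {y} {z}\n")
        for idx in faces:
            out.write("f " + " ".join(str(i) for i in idx) + "\n")

if __name__ == "__main__":
    ldraw_to_obj(sys.argv[1], sys.argv[2])   # e.g. 3001.dat 3001.obj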
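For step 2, one possible approach is sketched below using trimesh and pyrender to render a converted part from random orientations with randomised lighting. These libraries are assumptions rather than the project's actual rendering toolchain, and headless rendering needs an offscreen OpenGL backend (EGL or OSMesa).

# Sketch of step 2: render one converted part from random orientations
# under randomised lighting, writing one PNG per view.
import os
import numpy as np
import trimesh
import pyrender
from PIL import Image

def render_views(obj_path, out_dir, n_views=50, size=64):
    tm = trimesh.load(obj_path, force='mesh')
    tm.apply_translation(-tm.centroid)             # centre the part
    tm.apply_scale(1.0 / max(tm.extents))          # normalise its size

    mesh = pyrender.Mesh.from_trimesh(tm)
    renderer = pyrender.OffscreenRenderer(size, size)
    os.makedirs(out_dir, exist_ok=True)

    for i in range(n_views):
        scene = pyrender.Scene(bg_color=[1.0, 1.0, 1.0],
                               ambient_light=[0.3, 0.3, 0.3])
        # Random orientation of the part.
        pose = trimesh.transformations.random_rotation_matrix()
        scene.add(mesh, pose=pose)
        # Camera looking at the part from a fixed distance along +z.
        cam_pose = np.eye(4)
        cam_pose[2, 3] = 2.5
        scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 4), pose=cam_pose)
        # Directional light from the camera direction, randomised intensity.
        light = pyrender.DirectionalLight(color=np.ones(3),
                                          intensity=np.random.uniform(2, 6))
        scene.add(light, pose=cam_pose)

        color, _ = renderer.render(scene)
        Image.fromarray(color).save(f"{out_dir}/{i:04d}.png")

    renderer.delete()

if __name__ == "__main__":
    render_views("3001.obj", "renders/3001")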
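For step 3, the sketch below trains a small CNN on the rendered images with standard PyTorch, assuming one folder of images per piece id (renders/<part_id>/*.png). The network shape and hyperparameters are illustrative only; a model destined for the MAX78000 must additionally fit the constraints of Maxim's CNN accelerator and its training/synthesis toolchain, which is not shown here.

# Sketch of step 3: train a small CNN on the rendered images.
# Folder layout assumed: renders/<part_id>/*.png (one folder per piece id).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class BrickNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # for 64x64 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(data_dir="renders", epochs=10, device="cpu"):
    tfm = transforms.Compose([transforms.Resize((64, 64)),
                              transforms.ToTensor()])
    dataset = datasets.ImageFolder(data_dir, transform=tfm)
    loader = DataLoader(dataset, batch_size=64, shuffle=True)

    model = BrickNet(num_classes=len(dataset.classes)).to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        total = 0.0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimiser.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimiser.step()
            total += loss.item()
        print(f"epoch {epoch}: loss {total / len(loader):.4f}")
    torch.save(model.state_dict(), "bricknet.pth")

if __name__ == "__main__":
    train()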

Completing all these steps within the constraints of the Elektor competition timeframe was not possible. The project is currently suspended at step 4, where some technical issues have delayed progress. I hope to pick up the project again later in the summer.