Use from the Transformers library
```python
# Use a pipeline as a high-level helper
# Warning: the "image-to-text" pipeline type is no longer supported in Transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="Yodazon/3DPrintFailureType")
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Yodazon/3DPrintFailureType", dtype="auto")
```

This model helps determine the type of problem a 3D print has. It uses the AlexNet CNN architecture and is built with PyTorch.

The model was trained on images of 3D prints captured both during printing and after completion. The training set is roughly 5 GB of images.

Current version has 4 outputs:

  1. Good
  2. Spaghetti
  3. Stringing
  4. Underextrusion

In its current iteration, the model cannot determine during inference whether the input is actually a 3D print.

Future updates will include:

  • Determining whether the image shows a 3D print at all
  • Determining whether the image was taken during printing or after completion

To make an inference

Classes:

```python
class_names = {0: 'good', 1: 'spaghetti', 2: 'stringing', 3: 'underextrusion'}
```

Pre-process the image using the following Python function:

```python
from io import BytesIO

from PIL import Image
from torchvision import transforms

def preProcess(image):
    # Open the image from raw bytes
    image = Image.open(BytesIO(image)).convert('RGB')

    # AlexNet-style 227x227 input, normalized to [-1, 1]
    transform = transforms.Compose([
        transforms.Resize(227),
        transforms.CenterCrop(227),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])

    input_image = transform(image).unsqueeze(0)
    return input_image
```