Software for determining focus

So I bought one of these cheapo USB “microscopes” to play with, and I’m having a lot of fun with it

https://www.amazon.com/Ninyoon-Microscope-Endoscope-Compatible-Cellphones/dp/B09SNSPHXX/ref=sr_1_2_sspa?keywords=4k+usb+microscope&qid=1667940795&sprefix=4k+usb+micros%2Caps%2C75&sr=8-2-spons&psc=1

I’m considering mounting it to my MPCNC for a little project with the kids, but one feature it doesn’t have is autofocus. I’m whipping up a little servo-motor-and-thick-rubber-band rig to turn the dial and mount it, but I know very little about software-based autofocus.

Looking for a software solution that can take a USB webcam (UVC) feed, determine the direction the stepper needs to move to improve focus, and send a signal to the driver board. I’m not even sure what compute power I need to process this data. Would a Pi Zero handle it, or does it need a full PC?

Looking at some of the OpenCV code but it’s a mighty big hammer for a small project.

Any ideas?

I don’t know anything off the shelf, and I haven’t read anything about how to do it. But that won’t stop me from guessing :slight_smile:. I could probably do some googling, but it’s more fun to just guess.

I think what you need is something that takes in an image, or a snippet of an image, and determines the amount of sharpness in it. The code would take a measurement at focus X, then at X+1, move in the direction of increasing sharpness, and keep measuring at regular intervals until it found a maximum.

My guess is the sharpness measurement could be something as simple as a Sobel edge detector, sent through an absolute value function to make all the edges positive (or possibly squared pixelwise to make very sharp edges worth more), and then summed up over the entire image. Images with stronger edges would have a larger sum. In the end, you want this function to come down to a single number. The actual number will depend on what is in the scene, but the relative number should be determined by sharpness.

Some things you could play with:

  • Different sizes of image clips. Sampling just a 64x64 clip could make it much faster and more reliable within that smaller area.
  • There are many different edge detection algorithms. Some might work better than others. Applying the edge detection twice might even help.
  • Removing small numbers from the edge image could make the effect more pronounced. You don’t want 100 pixels with an edge of 1 to be equal to 10 pixels with an edge of 10. The 10 pixels of strength 10 are worth more.
  • Take the max instead of the sum. A very sharp edge is probably a good sign that you are in focus.
  • Playing with exit criteria. If you can’t find a max, or the max isn’t strong enough, maybe do an entire sweep from one end of the focus range to the other. Or maybe slow down to get a better max once you’ve found a good range.

Personally, I would start with a big computer connected to it and do the coding in something easy to change. OpenCV has a lot of neat functions and utilities built in, so you can do things like draw the intermediate images and make sure they are doing what you want. The matrix/image objects and image functions handle all the little “off by one” errors without you having to worry too much.

Once you have something that works, you can drill down to the actual pixel-level operations, and I bet you aren’t doing much. A Python script with the Python imaging library (is Pillow the latest?) would probably be good at capturing the image, doing pixel-wise image manipulation, and then calling whatever service you need to adjust the servo. I would make it bulky and clunky, but clear, until I knew it was working. Then I would optimize it and remove the heavyweight parts like OpenCV.

4 Likes

I would agree with Jeff that something fairly simple, like a high-pass filter followed by an RMS or a sum of absolute values, should be enough.

When you are far from the ideal focus, the high-pass might pass almost nothing, in which case you could downsample to a much lower resolution and apply the metric to that (basically a band-pass) to get close.

Someone could make an OctoPrint plugin that takes the image and moves the machine in Z to optimize focus. That could be useful for other things, like homing Z. Those USB microscopes are pretty sensitive to Z. The downside is that they have to be close to the subject, so you risk crashing into the workpiece unless it is removable.

3 Likes

@jamiek, between you and @jeffeb3, my productivity at work takes a dive just researching new acronyms. I’ll take a stab at it, but I’ve never done any image manipulation programming, so it’s going to be a steep learning curve. May become my winter project.

Thanks!

1 Like

This is a big deal for amateur astrophotographers who use webcams for imaging.

If you use your search terms plus astrophotography, focus, camera, and software, you might get a good hit.

I used to use a software solution for focusing a $12 webcam on my 5” Newtonian, but I’ve been out of that hobby for 10 years (and 2 laptops).

2 Likes

Good idea! I wonder if the focus algorithms would work the same or if the star on black background simplifies the process. More to research!

Most of our phones have an auto-focus capability. I’m sure the feature was developed in a university student project somewhere. I’d start by googling thesis and doctoral papers dealing with image processing. The algorithm may be simple to port to your preferred programming environment.

Mike

It also works for microscopes. I should have specified that I also use it in my lab for automated microscope work.

You open up a live preview and select a small spot (a star in the original use case, a small speck of something in the microscope case), then it “racks focus” and finds the focal point where the image is smallest. Out-of-focus images “smudge” and look bigger than they really are.

So I found this gem:

Which taught me about Laplacian filters
https://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm

which led me to write this:


# pip install opencv-python

import cv2

focussize = 100          # side length of the focus box, in pixels
aoistart = 100, 100      # top-left corner of the area of interest
aoiend = 150, 150        # bottom-right corner


def onMouse(event, x, y, flags, param):
    # Re-center the focus box wherever the user left-clicks
    if event == cv2.EVENT_LBUTTONDOWN:
        showfocus(x, y)


def variance_of_laplacian(image):
    # Compute the Laplacian and return its variance as a sharpness score
    return cv2.Laplacian(image, cv2.CV_64F).var()


def showfocus(x, y):
    # Move the focus square so it is centered on (x, y),
    # clamped so the slice never goes out of bounds on the left/top
    global aoistart, aoiend
    half = focussize // 2
    aoistart = max(x - half, 0), max(y - half, 0)
    aoiend = x + half, y + half
    print('x = %d, y = %d' % (x, y))


# Open the default video capture device
vid = cv2.VideoCapture(0)

cv2.namedWindow('Main', cv2.WINDOW_AUTOSIZE)
cv2.setMouseCallback('Main', onMouse)

while True:
    # Capture the video frame by frame
    ret, frame = vid.read()
    if not ret:
        break

    # Measure sharpness inside the area of interest
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[aoistart[1]:aoiend[1], aoistart[0]:aoiend[0]]
    fm = round(variance_of_laplacian(roi), 2)

    # Draw the sharpness value and the focus box on the frame
    image = cv2.putText(frame, str(fm), (50, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 1,
                        (255, 0, 0), 2, cv2.LINE_AA)
    image = cv2.rectangle(image, aoistart, aoiend, (100, 100, 100), 1)

    # Display the resulting frame; press 'q' to quit
    cv2.imshow('Main', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# After the loop, release the capture and close the windows
vid.release()
cv2.destroyAllWindows()

# Research
# https://github.com/PyImageSearch/imutils
# https://pyimagesearch.com/2015/09/07/blur-detection-with-opencv/
Basically, it takes your cam image and returns a “sharpness” value for the selected box (higher is sharper). It still needs a lot of work to get where I want it, but someone might find it helpful.

3 Likes