How To Deploy Bin Picking Studio With ABB Robotics Add-In In 6 Steps

By Pavel Soral || February 9, 2026

On December 11, 2025, we joined forces with ABB Robotics for an exclusive webinar, “Mastering Complex Automation with 3D Vision-Guided Robotics.” 

The goal was to demonstrate a major leap forward in industrial automation: the crumbling of traditional walls between advanced 3D vision systems and robotic cells.

For years, manufacturers have requested a unified approach to vision-guided robotics – one that eliminates complexity and accelerates deployment. We answered that call by integrating our Locator Studio and Bin Picking Studio directly into the ABB Robotics One ecosystem and OmniCore controller. 

In the guide below, we will walk you through the technical specifics of this integration, from the hardware connections to the final block-programming wizard.

Here is exactly how to launch your first AI bin picking application using the new Photoneo Add-in for ABB.

Walkthrough of the Photoneo and ABB Robotics Add-in integration, running Bin Picking Studio

1. Hardware Connection Essentials

The foundation of a reliable application starts with the physical setup. First, select the appropriate scanner for your scene: use the PhoXi 3D Scanner for static environments or the MotionCam-3D if your application involves dynamic scenes or requires meshing the area directly in front of the robot.

Next, connect the industrial PC housing the Locator or Bin Picking Studio license. This PC features six specific Ethernet ports. You must connect Port 1 for robot communication and Port 2 for the internet connection. Ports 3 through 6 are reserved for your scanning devices.
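The port layout above can be captured in a small lookup helper. This is purely illustrative (the `port_role` function is not part of any Photoneo or ABB API); it just restates the assignment of the six Ethernet ports:

```python
def port_role(port: int) -> str:
    """Return the role of a given Ethernet port on the vision PC.

    Mapping per the hardware guide: Port 1 = robot, Port 2 = internet,
    Ports 3-6 = scanning devices.
    """
    roles = {1: "robot communication", 2: "internet connection"}
    if port in roles:
        return roles[port]
    if 3 <= port <= 6:
        return "scanning device"
    raise ValueError(f"unknown port: {port}")
```

Double-checking the cabling against this table before powering up saves a surprising amount of troubleshooting later.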

Hardware connection specification

2. Installing the Add-In

Installation is a quick, seven-minute process handled directly on the robot controller. Download the installation package from the Photoneo website to a USB stick and insert it into the ABB FlexPendant. 

Navigate to Controller Software in the main menu, select Install new add-in, and choose the Photoneo package.

The most critical step here is selecting the correct template to match your physical configuration. 

Hand-eye configuration (left), Extrinsic configuration (right)

If your camera is mounted on a stand, choose the Extrinsic Setup (selecting “Basic” for single devices or “Multiple Vision” for multiple). If the camera is mounted on the robot flange, choose the Hand-Eye Setup. For hand-eye, you can specify if the robot should stop for scans (“Multi-view Static”) or scan while moving (“Multi-view Dynamic”).

Here’s a quick step-by-step overview:

  1. Download & Save: Download the installation package from the Photoneo website and save it to a USB stick.
  2. Insert USB: Plug the stick into the USB port on the ABB FlexPendant.
  3. Select Add-in: Go to the main ABB menu -> Controller Software -> Install new add-in.
  4. Install: Select the Photoneo add-in and click “Next.”
  5. Choose Template: This is critical. Select the template that matches your physical setup:
Difference between extrinsic and hand-eye 3D camera configuration
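The template choice can be sketched as a simple decision function. The template names follow the configurations described above; the function itself (`choose_template`) is a hypothetical helper, not part of the Add-in:

```python
def choose_template(mounting: str, multi_view: bool = False,
                    dynamic: bool = False) -> str:
    """Pick the Add-in template matching the physical camera setup.

    mounting: "stand" (camera on a stand) or "flange" (camera on the robot).
    multi_view: more than one vision device (extrinsic setups only).
    dynamic: scan while the robot moves (hand-eye setups only).
    """
    if mounting == "stand":
        # Extrinsic: "Basic" for a single device, "Multiple Vision" otherwise.
        return "Extrinsic Multiple Vision" if multi_view else "Extrinsic Basic"
    if mounting == "flange":
        # Hand-eye: stop for scans (Static) or scan on the move (Dynamic).
        return "Hand-Eye Multi-view Dynamic" if dynamic else "Hand-Eye Multi-view Static"
    raise ValueError(f"unknown mounting: {mounting}")
```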

3. Network Configuration

After installation, a new Photoneo icon will appear in the main ABB menu. Opening this reveals the network settings, where you can toggle between Public or Private networks.

The system comes with predefined addresses for the visual controller and robot system (typically via the management port or DSQC 1100 I/O card). 

If you need to input custom IP addresses, the system will validate them immediately; a flashing green “Save” button indicates success.

For private networks, the “Redirect to Photoneo Server” feature allows you to control both the robot and Photoneo software from a single interface.

Once installed, open the new Photoneo icon in the main ABB menu. You will see three sections:

Photoneo Add-in setup screen in the ABB Robotics pendant
  • Manual (Left): A guide to help you get running quickly.
  • Network Settings (Middle):
    • You can choose Public or Private networks.
    • Addresses are predefined for the visual controller and robot system (expecting a connection to the management port or DSQC 1100 I/O card).
    • If you change an IP, the system validates it; the “Save” button flashes green upon success.

  • Redirect to Photoneo Server (Right): A powerful feature for Private networks that gives you full control over both robot functions and Photoneo software in one place.
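The immediate IP validation can be mimicked in a few lines using Python’s standard `ipaddress` module. This is only a sketch of the idea; the Add-in performs its own checks on the pendant:

```python
import ipaddress

def validate_ip(address: str) -> bool:
    """Return True if the string is a well-formed IPv4 address.

    Mirrors the Add-in's behavior of validating a custom IP before
    the "Save" button flashes green.
    """
    try:
        ipaddress.IPv4Address(address)
        return True
    except ipaddress.AddressValueError:
        return False
```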

4. Setting Up Bin Picking Studio (BPS)

Close up of robot with hand-eye 3D camera picking a small part from the bin

With the network active, you move to the Bin Picking Studio to define the application logic. 

Start by creating a New Solution with a unique ID, then define the hardware: select your robot model, upload a CAD model of your gripper (STL under 1MB), and set the Tool Center Point (TCP). You will also define the grasping method here, including approach vectors and linear paths.

Selection of robots in Bin Picking Studio - GUI
Selection of robots in Bin Picking Studio

Next, configure the vision and environment. Add your vision system and select the neural network for object detection. 

In the Environment tab, you can import STL models of your work cell to define collision objects and trigger test scans to verify that the point cloud aligns with your digital robot model. 

Finally, under Settings, you can fine-tune picking priorities and enable “Automatic Snapshots” to aid in troubleshooting failed picks.

Here’s a quick step-by-step overview:

Step A: Project & Hardware

  • New Solution: Create a project with a Unique ID (used by the robot program to call this specific solution).
  • Robot: Select your robot’s name, reach, and payload.
  • Gripper: Upload a CAD model (STL file under 1MB), define the Tool Center Point (TCP), and set the invariance.
  • Grasping Method: Define path stages (e.g., Approach, Linear Path, Position Tolerance). You can trigger predefined robot routines here.
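One easy thing to check before uploading is the gripper model’s size, since the Studio expects an STL under 1 MB. A minimal pre-upload check (the `stl_fits_limit` helper is illustrative, not a Photoneo API):

```python
from pathlib import Path

MAX_STL_BYTES = 1_000_000  # the Studio expects gripper STLs under 1 MB

def stl_fits_limit(path: str) -> bool:
    """Check a gripper CAD file against the 1 MB upload limit."""
    p = Path(path)
    if p.suffix.lower() != ".stl":
        raise ValueError("gripper model must be an STL file")
    return p.stat().st_size < MAX_STL_BYTES
```

If your CAD export is over the limit, decimating the mesh in your CAD tool before export usually brings it well under 1 MB without affecting collision checking.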

Step B: Vision & Environment

  • Vision System: Add up to four systems; configure the calibration type, sensor ID, and scanning profile.
  • Localization: Choose the neural network for object detection and initial gripping point placement.
  • Environment:
    • Scene: Set collision objects (simple shapes or imported STLs of your work cell).
    • Robot: Jog the robot, see current positions, and set axis limits.
    • Vision: Trigger a scan to verify the point cloud lines up with your robot model.
  • Settings: Fine-tune picking priority, angle limits, and collision parameters. Tip: Turn on “Automatic Snapshots” for failed actions to help with troubleshooting.
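To make the Settings step concrete, here is a sketch of how a tilt-angle limit and a simple picking priority (prefer the highest object in the bin) might interact. The heuristic and the `select_pick` function are assumptions for illustration; the Studio’s actual priority logic is configured in its GUI:

```python
def select_pick(candidates, max_tilt_deg: float = 30.0):
    """Choose a pick from (tilt_deg, height_mm) candidates.

    Drops candidates whose grasp tilt exceeds the angle limit, then
    prefers the highest remaining object (a common priority heuristic).
    Returns None when nothing is feasible.
    """
    feasible = [c for c in candidates if c[0] <= max_tilt_deg]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1])
```

Tightening the angle limit reduces awkward, near-horizontal grasps at the cost of more "no pick found" cycles, which is exactly the trade-off you tune in Settings.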

Step C: Deploy

  • Production Mode: Fully ready for operation; accepts requests directly from the robot.
Bin picking studio deployment screen
  • Simulation Mode: Simulates robot movements using either real or simulated data. Great for testing joint limits and gripper design.
Bin Picking Studio environment view

5. Robot Program & Calibration

Now, modify the robot’s program to match your application.

Before running the system, you must adjust the robot’s internal program. First, enable the firewall in RobotStudio, specifically for “Rapid Sockets” on the private network, and restart the controller.

You then need to teach the robot specific physical positions. Using the correct Work Object and Tool, jog the robot to define the Home, Start Bin Picking, and End Picking positions. 

You must also map your gripper’s open/close signals to the AttachGripper and DetachGripper routines.

For calibration, place the appropriate tool in the workspace – a ball for extrinsic setups or a marker pattern for hand-eye. 

Close up on marker pattern
Marker Pattern

The CalibPositions routine contains nine target positions. Adjust these targets so the vision system has a clear view of the marker in every pose without colliding. Run the calibration routine and ensure the final result is under 2 mm.

A close up on a bin with diverse parts and red & blue patterns of structured light
  1. Firewall (Crucial):
    • Enable the firewall in RobotStudio.
    • Enable the firewall for Rapid Sockets on the private network.
    • Restart the robot controller.
  2. IP Configuration: In the Photoneo web app, enter the correct IP addresses for the robot and visual controller.
  3. Tool & Load Data: Enter this manually or run the LoadIdentify service routine.
  4. Teach Targets: Jog the robot (using the correct Work Object and Tool) to teach:
    • Home Position.
    • Start Bin Picking Position.
    • End Picking Position.
  5. Signal Logic: Map your gripper signals (open/close) to the AttachGripper and DetachGripper routines.

Calibration Routine

  1. Prepare: Place your calibration tool (Marker Pattern for Hand-Eye, Ball for Extrinsic) in the working area.
  2. Teach Positions: The CalibPositions routine has nine targets. Adjust them so the vision system sees the marker in every pose without collision.
  3. Run Calibration: Run the Photoneo calibration routine on the robot.

  4. Check Result: Aim for a result under 2 mm.
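For intuition, the calibration check can be thought of as an aggregate error over the nine target poses compared against the 2 mm limit. How the Add-in actually aggregates its result is not specified here, so the RMS formulation below is an assumption for illustration only:

```python
import math

CALIB_LIMIT_MM = 2.0  # target from the guide: final result under 2 mm

def calibration_rms(residuals_mm):
    """Root-mean-square of per-pose residuals (e.g., the nine calibration targets)."""
    return math.sqrt(sum(r * r for r in residuals_mm) / len(residuals_mm))

def calibration_ok(residuals_mm) -> bool:
    """True when the aggregate calibration error is under the 2 mm limit."""
    return calibration_rms(residuals_mm) < CALIB_LIMIT_MM
```

If the result is over the limit, re-teach the worst poses so the marker fills more of the camera’s view, then rerun the routine.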

6. The Wizard: Block Programming

The view of Wizard Block Programming GUI in ABB Robotics pendant

You don’t need deep programming skills. 

The final step is building the application logic using the Block Programming Wizard, which eliminates the need for complex coding.

A standard workflow begins with an Initialization Block (containing IPs and home positions), followed by a While Loop set to TRUE for continuous running. Inside the loop, you simply stack the necessary actions: a Scan Block to perform localization, a Pick Block to grab the part, and a Place Block to release it. 

The Place block even includes an “Approach” parameter that automatically calculates a safe position 100mm above the drop-off point. Once the blocks are arranged, simply click “Apply” to deploy the application.

The Wizard allows you to build the application logic using simple blocks labeled HE (Hand-Eye) or X (Extrinsic).

Sample Logic Flow:

  1. Initialization Block: Input the Vision Controller IP, Vision System ID, Home, Start, and End positions.
  2. While Loop: Set argument to TRUE for continuous operation.
  3. Scan Block:
    • Scan Pose: Use an existing position or create a new one.
    • Vision ID: Selects which system performs the localization.
    • Wait Time: Optional parameter to ensure the robot is stable before scanning.
  4. Pick Block: Ensures the object is successfully picked.
  5. Place Block: Defines the drop-off location.
    • Approach Parameter: Automatically creates a position 100mm above the object for approach and release.

Final Step: Click Apply.
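The wizard’s logic flow can be mirrored in a short Python sketch. The real program is generated as RAPID on the OmniCore controller; the `scan`, `pick`, and `place` callables below are hypothetical stubs standing in for the Scan, Pick, and Place blocks:

```python
APPROACH_OFFSET_MM = 100.0  # the Place block's "Approach" sits 100 mm above the drop-off

def approach_pose(place_pose):
    """Offset the drop-off pose 100 mm along +Z for a safe approach."""
    x, y, z = place_pose
    return (x, y, z + APPROACH_OFFSET_MM)

def run_cycle(scan, pick, place, place_pose, cycles=1):
    """Initialization -> loop(scan, pick, approach, place), as in the wizard.

    `cycles` bounds the loop here for demonstration; the wizard's While Loop
    runs with argument TRUE for continuous operation.
    """
    for _ in range(cycles):
        target = scan()                       # Scan Block: localization
        if target is None:                    # nothing found -> stop the cycle
            break
        pick(target)                          # Pick Block: grab the part
        place(approach_pose(place_pose))      # move to the approach position first
        place(place_pose)                     # Place Block: release at the drop-off
```

The approach/release split is the key detail: the robot always descends the final 100 mm vertically, which keeps the drop-off motion predictable regardless of where the part was picked.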

You are now ready to run your very first AI-based bin picking application with Photoneo and the OmniCore platform. To learn all about its powers and benefits, along with real-world success stories, rewatch the webinar today!
