Samples: Automatic updates to public repository
Remember to do the following:
    1. Ensure that modified/deleted/new files are correct
    2. Make this commit message relevant for the changes
    3. Force push
    4. Delete branch after PR is merged

If this commit is an update from one SDK version to another,
make sure to create a release tag for the previous version.
csu-bot-zivid committed Jul 12, 2024
1 parent 19cae6b commit 8d28b4d
Showing 35 changed files with 369 additions and 519 deletions.
7 changes: 2 additions & 5 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
# Python samples

This repository contains python code samples for Zivid SDK v2.12.0. For
This repository contains python code samples for Zivid SDK v2.11.1. For
tested compatibility with earlier SDK versions, please check out
[accompanying
releases](https://github.com/zivid/zivid-python-samples/tree/master/../../releases).
@@ -122,7 +122,7 @@ from the camera can be used.
- [robodk\_hand\_eye\_calibration](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/hand_eye_calibration/robodk_hand_eye_calibration/robodk_hand_eye_calibration.py) - Generate a dataset and perform hand-eye calibration
using the Robodk interface.
- [utilize\_hand\_eye\_calibration](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/hand_eye_calibration/utilize_hand_eye_calibration.py) - Transform single data point or entire point cloud from
camera to robot base reference frame using Hand-Eye
camera frame to robot base frame using Hand-Eye
calibration
- [verify\_hand\_eye\_with\_visualization](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/hand_eye_calibration/verify_hand_eye_with_visualization.py) - Verify hand-eye calibration by transforming all
dataset point clouds and
Expand All @@ -138,9 +138,6 @@ from the camera can be used.
- [white\_balance\_calibration](https://github.com/zivid/zivid-python-samples/tree/master/source/sample_utils/white_balance_calibration.py) - Balance color for 2D capture using white surface as reference.
- **applications**
- **advanced**
- **robot\_guidance**
- [robodk\_robot\_guidance](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/robot_guidance/robodk_robot_guidance.py) - Guide the robot to follow a path on the Zivid
Calibration Board.
- **verify\_hand\_eye\_calibration**
- [robodk\_verify\_hand\_eye\_calibration](https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/verify_hand_eye_calibration/robodk_verify_hand_eye_calibration.py) - Perform a touch test with a robot to verify Hand-Eye
Calibration using the RoboDK interface.
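The hand-eye samples listed above all revolve around a 4x4 homogeneous transform between the camera frame and the robot base frame. As a minimal NumPy sketch of applying such a transform to a single data point (the matrix values here are illustrative, not from any real calibration):

```python
import numpy as np

# Illustrative camera-to-robot-base transform (made-up values):
# rotation in the upper-left 3x3 block, translation [mm] in the last column.
base_T_camera = np.array([
    [0.0, -1.0, 0.0, 400.0],
    [1.0,  0.0, 0.0, -50.0],
    [0.0,  0.0, 1.0, 300.0],
    [0.0,  0.0, 0.0,   1.0],
])

# A single point measured in the camera frame, in homogeneous coordinates [mm].
point_camera = np.array([10.0, 20.0, 500.0, 1.0])

# The same point expressed in the robot base frame.
point_base = base_T_camera @ point_camera
```

Transforming an entire point cloud works the same way, applied to an (N, 4) array of homogeneous points.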
3 changes: 2 additions & 1 deletion continuous-integration/setup.sh
@@ -28,7 +28,8 @@ function install_www_deb {
rm -r $TMP_DIR || exit
}

install_www_deb "https://downloads.zivid.com/sdk/releases/2.12.0+6afd4961-1/u${VERSION_ID:0:2}/zivid_2.12.0+6afd4961-1_amd64.deb" || exit
install_www_deb "https://downloads.zivid.com/sdk/releases/2.11.1+de9b5dae-1/u${VERSION_ID:0:2}/zivid-telicam-driver_3.0.1.1-3_amd64.deb" || exit
install_www_deb "https://downloads.zivid.com/sdk/releases/2.11.1+de9b5dae-1/u${VERSION_ID:0:2}/zivid_2.11.1+de9b5dae-1_amd64.deb" || exit

python3 -m pip install --upgrade pip || exit
python3 -m pip install --requirement "$ROOT_DIR/requirements.txt" || exit
17 changes: 17 additions & 0 deletions pyproject.constraints
@@ -0,0 +1,17 @@
# Generated file, do not edit @ 2023-11-20 07:12:00
#
# Command:
# python infrastructure/tools/launchers/dependency_updater.py --max-processes=30
#
# To update all Python requirements and constraints manually, you should run:
#
# python infrastructure/tools/launchers/dependency_updater.py
#
# This file is autogenerated by pip-compile with Python 3.11
# by the following command:
#
# pip-compile --all-extras --extra-index-url=http://se-ci-elastic-server-1.localdomain:8081/artifactory/api/pypi/zivid-pypi/simple --output-file=sdk/samples/public/python/pyproject.constraints --strip-extras --trusted-host=se-ci-elastic-server-1.localdomain sdk/samples/public/python/pyproject.toml
#
--extra-index-url http://se-ci-elastic-server-1.localdomain:8081/artifactory/api/pypi/zivid-pypi/simple
--trusted-host se-ci-elastic-server-1.localdomain

57 changes: 48 additions & 9 deletions source/applications/advanced/auto_2d_settings.py
@@ -16,8 +16,6 @@
first. If you want to use your own white reference (white wall, piece of paper, etc.) instead of using the calibration
board, you can provide your own mask in _main(). Then you will have to specify the lower limit for f-number yourself.
Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

import argparse
Expand All @@ -29,7 +27,6 @@
import matplotlib.pyplot as plt
import numpy as np
import zivid
import zivid.experimental.calibration
from sample_utils.calibration_board_utils import find_white_mask_from_checkerboard
from sample_utils.white_balance_calibration import compute_mean_rgb_from_mask, white_balance_calibration

@@ -150,12 +147,12 @@ def _find_white_mask_and_distance_to_checkerboard(camera: zivid.Camera) -> Tuple
"""
try:
settings = _capture_assistant_settings(camera)
frame = camera.capture(settings)
point_cloud = camera.capture(settings).point_cloud()

checkerboard_pose = zivid.experimental.calibration.detect_feature_points(frame).pose().to_matrix()
checkerboard_pose = zivid.calibration.detect_feature_points(point_cloud).pose().to_matrix()
distance_to_checkerboard = checkerboard_pose[2, 3]

rgb = frame.point_cloud().copy_data("rgba")[:, :, :3]
rgb = point_cloud.copy_data("rgba")[:, :, :3]
white_squares_mask = find_white_mask_from_checkerboard(rgb)
except RuntimeError as exc:
raise RuntimeError("Unable to find checkerboard, make sure it is in view of the camera.") from exc
Expand All @@ -178,7 +175,37 @@ def _find_lowest_acceptable_fnum(camera: zivid.Camera, image_distance_near: floa
Lowest acceptable f-number that gives a focused image
"""
if camera.info.model == zivid.CameraInfo.Model.zividTwo:
if camera.info.model == zivid.CameraInfo.Model.zividOnePlusSmall:
focus_distance = 500
focal_length = 16
circle_of_confusion = 0.015
fnum_min = 1.4
if image_distance_near < 300 or image_distance_far > 1000:
print(
f"WARNING: Closest imaging distance ({image_distance_near:.2f}) or farthest imaging distance"
f"({image_distance_far:.2f}) is outside recommended working distance for camera [300, 1000]"
)
elif camera.info.model == zivid.CameraInfo.Model.zividOnePlusMedium:
focus_distance = 1000
focal_length = 16
circle_of_confusion = 0.015
fnum_min = 1.4
if image_distance_near < 500 or image_distance_far > 2000:
print(
f"WARNING: Closest imaging distance ({image_distance_near:.2f}) or farthest imaging distance"
f"({image_distance_far:.2f}) is outside recommended working distance for camera [500, 2000]"
)
elif camera.info.model == zivid.CameraInfo.Model.zividOnePlusLarge:
focus_distance = 1800
focal_length = 16
circle_of_confusion = 0.015
fnum_min = 1.4
if image_distance_near < 1200 or image_distance_far > 3000:
print(
f"WARNING: Closest imaging distance ({image_distance_near:.2f}) or farthest imaging distance"
f"({image_distance_far:.2f}) is outside recommended working distance for camera [1200, 3000]"
)
elif camera.info.model == zivid.CameraInfo.Model.zividTwo:
focus_distance = 700
focal_length = 8
circle_of_confusion = 0.015
@@ -261,7 +288,13 @@ def _find_lowest_exposure_time(camera: zivid.Camera) -> float:
Lowest exposure time [us] for given camera
"""
if camera.info.model == zivid.CameraInfo.Model.zividTwo:
if camera.info.model == zivid.CameraInfo.Model.zividOnePlusSmall:
exposure_time = 6500
elif camera.info.model == zivid.CameraInfo.Model.zividOnePlusMedium:
exposure_time = 6500
elif camera.info.model == zivid.CameraInfo.Model.zividOnePlusLarge:
exposure_time = 6500
elif camera.info.model == zivid.CameraInfo.Model.zividTwo:
exposure_time = 1677
elif camera.info.model == zivid.CameraInfo.Model.zividTwoL100:
exposure_time = 1677
@@ -290,7 +323,13 @@ def _find_max_brightness(camera: zivid.Camera) -> float:
Highest projector brightness value for given camera
"""
if camera.info.model == zivid.CameraInfo.Model.zividTwo:
if camera.info.model == zivid.CameraInfo.Model.zividOnePlusSmall:
brightness = 1.8
elif camera.info.model == zivid.CameraInfo.Model.zividOnePlusMedium:
brightness = 1.8
elif camera.info.model == zivid.CameraInfo.Model.zividOnePlusLarge:
brightness = 1.8
elif camera.info.model == zivid.CameraInfo.Model.zividTwo:
brightness = 1.8
elif camera.info.model == zivid.CameraInfo.Model.zividTwoL100:
brightness = 1.8
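The per-camera branches added above (focus distance, focal length, circle of confusion, minimum f-number) feed a depth-of-field calculation. As a rough sketch of how a lowest acceptable f-number can be derived from those constants, using the generic thin-lens depth-of-field approximation (this is an assumption for illustration, not necessarily the formula the sample uses):

```python
# Hypothetical sketch: find the lowest f-number whose depth of field covers
# [near, far]. Constants in the test mirror the Zivid Two branch in the
# diff (all distances in millimeters).

def dof_limits(fnum: float, focus_distance: float, focal_length: float, coc: float):
    """Near and far limits of acceptable sharpness for a given f-number."""
    hyperfocal = focal_length**2 / (fnum * coc) + focal_length
    near = (focus_distance * (hyperfocal - focal_length)
            / (hyperfocal + focus_distance - 2 * focal_length))
    if focus_distance >= hyperfocal:
        return near, float("inf")  # everything beyond `near` is in focus
    far = (focus_distance * (hyperfocal - focal_length)
           / (hyperfocal - focus_distance))
    return near, far

def lowest_fnum_covering(near_req: float, far_req: float, focus_distance: float,
                         focal_length: float, coc: float,
                         fnum_min: float = 1.4, fnum_max: float = 32.0) -> float:
    """Scan f-numbers upward until the depth of field spans the requested range."""
    fnum = fnum_min
    while fnum <= fnum_max:
        near, far = dof_limits(fnum, focus_distance, focal_length, coc)
        if near <= near_req and far >= far_req:
            return round(fnum, 1)
        fnum += 0.1
    return fnum_max
```

Stopping down (raising the f-number) widens the depth of field, which is why the search starts from the lens's minimum f-number and increases.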
@@ -5,16 +5,13 @@
The checkerboard point cloud is also visualized with a coordinate system.
The ZDF file for this sample can be found under the main instructions for Zivid samples.
Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

from pathlib import Path

import numpy as np
import open3d as o3d
import zivid
import zivid.experimental.calibration
from sample_utils.paths import get_sample_data_path
from sample_utils.save_load_matrix import assert_affine_matrix_and_save

@@ -75,9 +72,7 @@ def _main() -> None:
point_cloud = frame.point_cloud()

print("Detecting checkerboard and estimating its pose in camera frame")
transform_camera_to_checkerboard = (
zivid.experimental.calibration.detect_feature_points(frame).pose().to_matrix()
)
transform_camera_to_checkerboard = zivid.calibration.detect_feature_points(point_cloud).pose().to_matrix()
print(f"Camera pose in checkerboard frame:\n{transform_camera_to_checkerboard}")

transform_file_name = "CameraToCheckerboardTransform.yaml"
@@ -1,8 +1,6 @@
"""
Perform Hand-Eye calibration.
Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

import datetime
Expand All @@ -11,7 +9,6 @@

import numpy as np
import zivid
import zivid.experimental.calibration
from sample_utils.save_load_matrix import assert_affine_matrix_and_save


@@ -49,12 +46,10 @@ def _perform_calibration(hand_eye_input: List[zivid.calibration.HandEyeInput]) -
calibration_type = input("Enter type of calibration, eth (for eye-to-hand) or eih (for eye-in-hand):").strip()
if calibration_type.lower() == "eth":
print("Performing eye-to-hand calibration")
print("The resulting transform is the camera pose in robot base frame")
hand_eye_output = zivid.calibration.calibrate_eye_to_hand(hand_eye_input)
return hand_eye_output
if calibration_type.lower() == "eih":
print("Performing eye-in-hand calibration")
print("The resulting transform is the camera pose in flange (end-effector) frame")
hand_eye_output = zivid.calibration.calibrate_eye_in_hand(hand_eye_input)
return hand_eye_output
print(f"Unknown calibration type: '{calibration_type}'")
@@ -88,12 +83,6 @@ def _main() -> None:
hand_eye_input = []
calibrate = False

print(
"Zivid primarily operates with a (4x4) transformation matrix. To convert\n"
"from axis-angle, rotation vector, roll-pitch-yaw, or quaternion, check out\n"
"our pose_conversions sample."
)

while not calibrate:
command = input("Enter command, p (to add robot pose) or c (to perform calibration):").strip()
if command == "p":
Expand All @@ -103,7 +92,7 @@ def _main() -> None:
frame = _assisted_capture(camera)

print("Detecting checkerboard in point cloud")
detection_result = zivid.experimental.calibration.detect_feature_points(frame)
detection_result = zivid.calibration.detect_feature_points(frame.point_cloud())

if detection_result.valid():
print("Calibration board detected")
Expand All @@ -125,12 +114,6 @@ def _main() -> None:
transform_file_path = Path(Path(__file__).parent / "transform.yaml")
assert_affine_matrix_and_save(transform, transform_file_path)

print(
"Zivid primarily operates with a (4x4) transformation matrix. To convert\n"
"to axis-angle, rotation vector, roll-pitch-yaw, or quaternion, check out\n"
"our pose_conversions sample."
)

if calibration_result.valid():
print("Hand-Eye calibration OK")
print(f"Result:\n{calibration_result}")
@@ -14,8 +14,6 @@
https://support.zivid.com/latest/academy/applications/hand-eye/hand-eye-calibration-process.html
Make sure to launch your RDK file and connect to robot through Robodk before running this script.
Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

import argparse
Expand All @@ -27,7 +25,6 @@
import cv2
import numpy as np
import zivid
import zivid.experimental.calibration
from robodk.robolink import Item
from sample_utils.robodk_tools import connect_to_robot, get_robot_targets, set_robot_speed_and_acceleration
from sample_utils.save_load_matrix import assert_affine_matrix_and_save, load_and_assert_affine_matrix
@@ -98,7 +95,7 @@ def _verify_good_capture(frame: zivid.Frame) -> None:
RuntimeError: If no feature points are detected in frame
"""
detected_features = zivid.experimental.calibration.detect_feature_points(frame)
detected_features = zivid.calibration.detect_feature_points(frame.point_cloud())
if not detected_features.valid():
raise RuntimeError("Failed to detect feature points from captured frame.")

@@ -237,8 +234,8 @@ def perform_hand_eye_calibration(

if frame_file_path.is_file() and pose_file_path.is_file():
print(f"Detect feature points from img{pose_and_image_iterator:02d}.zdf")
frame = zivid.Frame(frame_file_path)
detected_features = zivid.experimental.calibration.detect_feature_points(frame)
point_cloud = zivid.Frame(frame_file_path).point_cloud()
detected_features = zivid.calibration.detect_feature_points(point_cloud)
print(f"Read robot pose from pos{pose_and_image_iterator:02d}.yaml")
transform = load_and_assert_affine_matrix(pose_file_path)

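The calibration loop above pairs each saved `img{NN}.zdf` frame with its `pos{NN}.yaml` robot pose before detecting feature points. A small sketch of that pairing logic (the helper name and directory layout are assumed for illustration; the real sample loads each pair with `zivid.Frame` and a YAML pose loader):

```python
from pathlib import Path

def paired_dataset(directory: Path, max_index: int = 99):
    """Yield (frame_path, pose_path) pairs named imgNN.zdf / posNN.yaml.

    Indices with a missing frame or missing pose file are skipped, matching
    the is_file() checks in the diff above.
    """
    for index in range(1, max_index + 1):
        frame_path = directory / f"img{index:02d}.zdf"
        pose_path = directory / f"pos{index:02d}.yaml"
        if frame_path.is_file() and pose_path.is_file():
            yield frame_path, pose_path
```

Keeping the pairing separate from the detection step makes it easy to validate a dataset on disk before spending time on feature detection.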
@@ -15,8 +15,6 @@
Further explanation of this sample is found in our knowledge base:
https://support.zivid.com/latest/academy/applications/hand-eye/ur5-robot-%2B-python-generate-dataset-and-perform-hand-eye-calibration.html
Note: This example uses experimental SDK features, which may be modified, moved, or deleted in the future without notice.
"""

import argparse
Expand All @@ -28,7 +26,6 @@
import cv2
import numpy as np
import zivid
import zivid.experimental.calibration
from rtde import rtde, rtde_config
from sample_utils.save_load_matrix import assert_affine_matrix_and_save, load_and_assert_affine_matrix
from scipy.spatial.transform import Rotation
@@ -269,7 +266,8 @@ def _verify_good_capture(frame: zivid.Frame) -> None:
RuntimeError: If no feature points are detected in frame
"""
detection_result = zivid.experimental.calibration.detect_feature_points(frame)
point_cloud = frame.point_cloud()
detection_result = zivid.calibration.detect_feature_points(point_cloud)

if not detection_result.valid():
raise RuntimeError("Failed to detect feature points from captured frame.")
@@ -392,8 +390,8 @@ def perform_hand_eye_calibration(

if frame_file_path.is_file() and pose_file_path.is_file():
print(f"Detect feature points from img{idata:02d}.zdf")
frame = zivid.Frame(frame_file_path)
detection_result = zivid.experimental.calibration.detect_feature_points(frame)
point_cloud = zivid.Frame(frame_file_path).point_cloud()
detection_result = zivid.calibration.detect_feature_points(point_cloud)

if not detection_result.valid():
raise RuntimeError(f"Failed to detect feature points from frame {frame_file_path}")
