Create a job
POST https://api.playment.io/v1/projects/:project_id/jobs

Payload

{
  "reference_id": "001",
  "data": {
    "sensor_data": {
      "sensors": [
        {
          "sensor_id": "lidar",
          "data_url": "https://xyz.s3.com/123.pcd",
          "sensor_pose": {
            "position": { "x": 0, "y": 0, "z": 0 },
            "heading": { "w": 1, "x": 0, "y": 0, "z": 0 }
          }
        },
        {
          "sensor_id": "18158562",
          "data_url": "https://xyz.s3.com/123.jpg",
          "sensor_pose": {
            "position": { "x": -0.81, "y": 1.64, "z": -1.52 },
            "heading": { "w": 0.68, "x": 0.66, "y": -0.21, "z": 0.19 }
          }
        }
      ],
      "ego_pose": {
        "position": { "x": 0, "y": 0, "z": 0 },
        "heading": { "w": 1, "x": 0, "y": 0, "z": 0 }
      },
      "sensor_meta": [
        {
          "id": "lidar",
          "name": "lidar",
          "state": "editable",
          "modality": "lidar",
          "primary_view": true
        },
        {
          "id": "18158562",
          "name": "18158562",
          "state": "editable",
          "modality": "camera",
          "camera_model": "brown_conrady",
          "primary_view": false,
          "intrinsics": {
            "cx": 600,
            "cy": 400,
            "fx": 1200,
            "fy": 800,
            "k1": 0,
            "k2": 0,
            "k3": 0,
            "k4": 0,
            "p1": 0,
            "p2": 0,
            "skew": 0,
            "scale_factor": 1
          }
        }
      ]
    }
  },
  "work_flow_id": "2aae1234-acac-1234-eeff-12a22a237bbc"
}
Key: data.sensor_data (Object)
Contains a sensors list, an ego_pose object, and a sensor_meta list.

Key: data.sensor_data.sensor_meta (List)
A list of all the sensors, each carrying metadata:
- id: id of the sensor
- name: name of the sensor
- modality: lidar / camera

If the sensor is a camera, you can also add the camera intrinsic values and the camera_model. These values are used along with the sensor_pose to create projections between sensors.

camera_model: one of brown_conrady or fisheye. If this key is absent or null, the tool assumes brown_conrady.

The intrinsics object contains the following keys:
- cx: principal point x value
- cy: principal point y value
- fx: focal length along the x-axis
- fy: focal length along the y-axis
- k1, k2, k3, k4, k5, k6: radial distortion coefficients
- p1, p2: tangential distortion coefficients
- skew: camera skew coefficient
- scale_factor: the factor by which the image has been downscaled (for example, scale_factor is 2 if the original image is twice as large as the downscaled image)

If the camera_model is brown_conrady, the distortion coefficients should form one of the following combinations:
- k1, k2, p1, p2
- k1, k2, p1, p2, k3
- k1, k2, p1, p2, k3, k4, k5, k6

If the camera_model is fisheye, the distortion coefficients should form the combination:
- k1, k2, k3, k4

The remaining coefficients can be omitted or set to 0.
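The coefficient combinations above can be checked programmatically before a job is submitted. A minimal sketch, assuming the combination rules as stated here; the function name and structure are illustrative, not part of the API:

```python
# Allowed distortion-coefficient combinations per camera model,
# mirroring the documentation above.
ALLOWED_COMBOS = {
    "brown_conrady": [
        {"k1", "k2", "p1", "p2"},
        {"k1", "k2", "p1", "p2", "k3"},
        {"k1", "k2", "p1", "p2", "k3", "k4", "k5", "k6"},
    ],
    "fisheye": [
        {"k1", "k2", "k3", "k4"},
    ],
}

DISTORTION_KEYS = {"k1", "k2", "k3", "k4", "k5", "k6", "p1", "p2"}


def valid_distortion(intrinsics, camera_model=None):
    """Return True if the non-zero distortion coefficients in `intrinsics`
    fit one of the allowed combinations for `camera_model`."""
    model = camera_model or "brown_conrady"  # tool default when the key is absent/null
    used = {k for k in DISTORTION_KEYS if intrinsics.get(k, 0) != 0}
    # An all-zero set is fine (no distortion); otherwise the non-zero
    # coefficients must fit within some allowed combination.
    return not used or any(used <= combo for combo in ALLOWED_COMBOS[model])
```

For example, a fisheye camera supplying only p1 would be rejected, while k1..k4 would pass.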
Key: data.sensor_data.ego_pose (Object)
The pose of a fixed point on the ego vehicle in the world frame of reference, given as a position (x, y, z) and an orientation quaternion (w, x, y, z).
When the pose of the ego vehicle is available in the world frame of reference, the tool can allow annotators to mark objects as stationary and to toggle APC (aggregated point cloud) mode.
Usually, if a vehicle is equipped with an IMU or odometry sensor, the pose of the ego vehicle in the world frame of reference can be obtained.
Key: data.sensor_data.sensors (List)
A list of all the sensors associated with this particular frame, each with:
- sensor_id: id of the sensor. This is a foreign key to the sensor id mentioned in the sensor_meta of the sequence data.
- data_url: a URL to the file containing the data captured by this sensor for this frame. To annotate lidar data, please share point clouds in ASCII-encoded PCD format.
- sensor_pose: the pose of the sensor in a common frame of reference. If the ego_pose is available in the world frame of reference, specify the sensor_pose of each sensor in the same world frame; in that case the pose may change every frame as the vehicle moves. If the ego_pose is not available, specify each sensor_pose with respect to a fixed point on the vehicle; in that case the pose will not change between frames.
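When the ego_pose is known in the world frame, a per-frame sensor_pose can be derived by composing the ego pose with the sensor's fixed mounting pose on the vehicle. A sketch of that composition, with quaternions in (w, x, y, z) order; the helper names are illustrative, not part of the API:

```python
def quat_multiply(a, b):
    """Hamilton product a*b of quaternions given as dicts with w, x, y, z."""
    return {
        "w": a["w"] * b["w"] - a["x"] * b["x"] - a["y"] * b["y"] - a["z"] * b["z"],
        "x": a["w"] * b["x"] + a["x"] * b["w"] + a["y"] * b["z"] - a["z"] * b["y"],
        "y": a["w"] * b["y"] - a["x"] * b["z"] + a["y"] * b["w"] + a["z"] * b["x"],
        "z": a["w"] * b["z"] + a["x"] * b["y"] - a["y"] * b["x"] + a["z"] * b["w"],
    }


def quat_rotate(q, p):
    """Rotate point p (dict with x, y, z) by quaternion q."""
    w, ux, uy, uz = q["w"], q["x"], q["y"], q["z"]
    x, y, z = p["x"], p["y"], p["z"]
    tx = 2 * (uy * z - uz * y)
    ty = 2 * (uz * x - ux * z)
    tz = 2 * (ux * y - uy * x)
    return {
        "x": x + w * tx + (uy * tz - uz * ty),
        "y": y + w * ty + (uz * tx - ux * tz),
        "z": z + w * tz + (ux * ty - uy * tx),
    }


def world_sensor_pose(ego_pose, mount_pose):
    """Compose the ego pose (world frame) with a fixed mounting pose
    (ego frame) to get the sensor_pose in the world frame."""
    rotated = quat_rotate(ego_pose["heading"], mount_pose["position"])
    position = {axis: ego_pose["position"][axis] + rotated[axis] for axis in "xyz"}
    heading = quat_multiply(ego_pose["heading"], mount_pose["heading"])
    return {"position": position, "heading": heading}
```

If the ego pose is the identity, the world-frame sensor_pose is just the mounting pose, which matches the "fixed point on the vehicle" case above.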
Please share point clouds in ASCII-encoded PCD format.

# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 47286
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 47286
DATA ascii
5075.773 3756.887 107.923
5076.011 3756.876 107.865
5076.116 3756.826 107.844
5076.860 3756.975 107.648
5077.045 3756.954 107.605
5077.237 3756.937 107.559
5077.441 3756.924 107.511
5077.599 3756.902 107.474
5077.780 3756.885 107.432
5077.955 3756.862 107.391
...

Visualizing intensity/reflectivity information

You can send additional data, such as intensity or reflectivity values, for each point in the PCD file. This can help annotators segment reflective surfaces such as lane markings.
A sample PCD structure is shown below.
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 33345
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 33345
DATA ascii
6.381713394829211 20.46125301261318 -2.413081058852035 0.02745098
5.28544895349773 16.913142630625543 -2.283128795562235 0.023529412
4.50899949078274 14.40011307442226 -2.1906859549297164 0.023529412
3.90763970412707 12.453374568477361 -2.108260110209615 0.023529412
2.9475040373348147 11.063306463789342 -2.0292308745898255 0.03137255
2.648762017271138 9.92654497521169 -1.991709336214386 0.03529412
3.4192604688595 9.368384486158 -1.99749275125155 0.03137255
...
Once you share data in this format, annotators can view point-cloud colors based on the intensity values.
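The sample above stores intensity in [0, 1]. If your lidar reports raw 8-bit intensities, one way to scale them while formatting the data rows; a sketch, with an illustrative function name:

```python
def pcd_point_lines(points, intensities, max_value=255.0):
    """Format "x y z intensity" data rows, scaling raw intensities
    (e.g. 8-bit values in 0..255) into the [0, 1] range."""
    return [
        f"{x} {y} {z} {i / max_value}"
        for (x, y, z), i in zip(points, intensities)
    ]
```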

Helper Python script to create jobs

import json
import requests
from copy import deepcopy


## Functions
def create_batch(batch_name):
    base_url = f"https://api.playment.io/v1/projects/{PROJECT_ID}/batch"
    data = {"name": batch_name}
    response = requests.post(base_url, headers={'x-api-key': CLIENT_KEY}, json=data)
    response_data = response.json()
    if response.status_code >= 500:
        raise Exception(f"Something went wrong at Playment's end {response.status_code}")
    if 400 <= response.status_code < 500:
        raise Exception(f"{response_data['error']['message']} {response.status_code}")
    print(response_data)
    return response_data


def upload_jobs(data):
    base_url = f"https://api.playment.io/v1/projects/{PROJECT_ID}/jobs"
    response = requests.post(base_url, headers={'x-api-key': CLIENT_KEY}, json=data)
    response_data = response.json()
    if response.status_code >= 500:
        raise Exception(f"Something went wrong at Playment's end {response.status_code}")
    if 400 <= response.status_code < 500:
        raise Exception(f"{response_data['error']['message']} {response.status_code}")
    print(response_data)
    return response_data


## Set project details

# Details for creating jobs:
# PROJECT_ID  -> ID of the project in which jobs are to be created
# CLIENT_KEY  -> secret client key used to create jobs
# WORK_FLOW_ID -> ask the Playment team for this value
# BATCH_ID    -> ID of the batch in which jobs are to be created

PROJECT_ID = ''
CLIENT_KEY = ''
WORK_FLOW_ID = ''
BATCH_ID = ''


## Job creation payload structure

payload_data_structure = {
    "reference_id": "",  # unique reference id for the job
    "data": {
        "sensor_data": {
            "sensors": [],
            "ego_pose": {  # the ego vehicle's pose
                "position": {"x": 0, "y": 0, "z": 0},  # ego vehicle's position
                "heading": {"w": 1, "x": 0, "y": 0, "z": 0}  # ego vehicle's heading as a quaternion
            },
            "sensor_meta": []  # metadata for the lidar and camera sensors
        }
    },
    "work_flow_id": "",  # id of the workflow in which the jobs are to be created
    "batch_id": ""  # id of the batch in which the jobs are to be created
}

sensor_data_structure = {
    "sensor_id": "",  # the sensor's id
    "data_url": "",  # URL of the sensor's data
    "sensor_pose": {  # the sensor's extrinsic values
        "position": {"x": 0, "y": 0, "z": 0},  # sensor's position
        "heading": {"w": 1, "x": 0, "y": 0, "z": 0}  # sensor's heading as a quaternion
    }
}

lidar_sensor_meta_structure = {  # lidar sensor's metadata; there can be only one lidar sensor
    "id": "",  # lidar sensor's id
    "name": "",  # keep the same as the lidar sensor's id
    "state": "editable",  # keep as "editable"
    "modality": "lidar",  # modality specifies the type of sensor (lidar/camera)
    "primary_view": True  # keep True for the lidar sensor
}

camera_sensor_meta_structure = {  # camera sensor's metadata; there can be multiple camera sensors
    "id": "",  # camera sensor's id
    "name": "",  # keep the same as the camera sensor's id
    "state": "editable",  # keep as "editable"
    "modality": "camera",  # modality specifies the type of sensor (lidar/camera)
    "camera_model": "brown_conrady",  # model of the camera
    "primary_view": False,  # keep False for all camera sensors
    "intrinsics": {  # camera sensor's intrinsic values
        "cx": 0,
        "cy": 0,
        "fx": 0,
        "fy": 0,
        "k1": 0,
        "k2": 0,
        "k3": 0,
        "k4": 0,
        "p1": 0,
        "p2": 0,
        "skew": 0,
        "scale_factor": 1
    }
}

if __name__ == "__main__":

    ## Set the lidar data urls (only one lidar can be present per job)
    # One PCD url per job
    lidar_data = [
        "https://example.com/pcd_url_1",
        "https://example.com/pcd_url_2"
    ]

    ## Set camera sensor ids and data urls (one or more cameras can be present per job)
    # The camera's sensor_id is the key; each list holds one image url per job for that camera
    camera_data = {
        "camera_1": [
            "https://example.com/image_url_1",
            "https://example.com/image_url_2"
        ],
        "camera_2": [
            "https://example.com/image_url_3",
            "https://example.com/image_url_4"
        ]
    }

    lidar_sensor_id = 'lidar'  # set your lidar sensor id

    # validate that the number of urls is the same for every sensor
    for camera_sensor_id, camera_sensor_urls in camera_data.items():
        assert len(camera_sensor_urls) == len(lidar_data), f"Number of URLs is not equal for {camera_sensor_id}"

    # number of jobs
    number_of_jobs = len(lidar_data)

    # create the jobs
    for i in range(number_of_jobs):

        # define the payload for job creation
        payload_data = deepcopy(payload_data_structure)
        payload_data['reference_id'] = lidar_data[i].replace('.pcd', '')  # set any unique reference id
        payload_data['work_flow_id'] = WORK_FLOW_ID
        payload_data['batch_id'] = BATCH_ID

        # define lidar sensor meta and add it to the payload
        lidar_sensor_meta = deepcopy(lidar_sensor_meta_structure)
        lidar_sensor_meta['id'] = lidar_sensor_id
        lidar_sensor_meta['name'] = lidar_sensor_id
        payload_data['data']['sensor_data']['sensor_meta'].append(lidar_sensor_meta)

        # define lidar sensor data and add it to the payload
        lidar_sensor_data = deepcopy(sensor_data_structure)
        lidar_sensor_data['sensor_id'] = lidar_sensor_id
        lidar_sensor_data['data_url'] = lidar_data[i]
        # set the appropriate lidar position and heading; currently defaulted to (0, 0, 0), (1, 0, 0, 0)
        lidar_sensor_data['sensor_pose']['position'] = {"x": 0, "y": 0, "z": 0}
        lidar_sensor_data['sensor_pose']['heading'] = {"w": 1, "x": 0, "y": 0, "z": 0}
        payload_data['data']['sensor_data']['sensors'].append(lidar_sensor_data)

        for camera_sensor_id, camera_sensor_urls in camera_data.items():
            # define camera sensor meta and add it to the payload
            camera_sensor_meta = deepcopy(camera_sensor_meta_structure)
            camera_sensor_meta['id'] = camera_sensor_id
            camera_sensor_meta['name'] = camera_sensor_id
            payload_data['data']['sensor_data']['sensor_meta'].append(camera_sensor_meta)

            # define camera sensor data and add it to the payload
            camera_sensor_data = deepcopy(sensor_data_structure)
            camera_sensor_data['sensor_id'] = camera_sensor_id
            camera_sensor_data['data_url'] = camera_sensor_urls[i]
            # set the appropriate camera position and heading; currently defaulted to (0, 0, 0), (1, 0, 0, 0)
            camera_sensor_data['sensor_pose']['position'] = {"x": 0, "y": 0, "z": 0}
            camera_sensor_data['sensor_pose']['heading'] = {"w": 1, "x": 0, "y": 0, "z": 0}
            payload_data['data']['sensor_data']['sensors'].append(camera_sensor_data)

        # print(json.dumps(payload_data))
        upload_jobs(payload_data)