
*.npy dataset preprocess procedure #1

Open
2 of 3 tasks
Durant35 opened this issue Apr 12, 2018 · 7 comments
Labels
Module: src/nodes (ros nodes) · Type: doc (good for newcomers)

Comments


Durant35 commented Apr 12, 2018


Durant35 commented Apr 13, 2018

Point cloud information


Class labels in [0, mc.NUM_CLASS-1]:


  • Car: 1
  • Pedestrian: 2
  • Cyclist: 3

Visualization: AutoLidarPerception/kitti_ros#15
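To make the label encoding concrete, here is a minimal sketch of a class-id lookup plus a palette colorization for visualizing a label map. The `CLS_2_ID` dict, the toy label map, and the palette colors are my own illustration, not the repo's actual config:

```python
import numpy as np

# Hypothetical mapping mirroring the list above: background is 0 and the
# object classes occupy [1, NUM_CLASS - 1].
CLS_2_ID = {'Background': 0, 'Car': 1, 'Pedestrian': 2, 'Cyclist': 3}
NUM_CLASS = len(CLS_2_ID)

# Toy 64x512 label map, one class id per projected pixel.
label_map = np.zeros((64, 512), dtype=np.int64)
label_map[30:34, 100:140] = CLS_2_ID['Car']
label_map[20:28, 300:310] = CLS_2_ID['Pedestrian']

# Every label must fall in [0, NUM_CLASS - 1].
assert 0 <= label_map.min() and label_map.max() <= NUM_CLASS - 1

# Simple RGB palette (background, car, pedestrian, cyclist) for visualization.
palette = np.array([[0, 0, 0], [0, 255, 0], [255, 0, 0], [0, 0, 255]])
vis = palette[label_map]  # (64, 512, 3) RGB image
print(vis.shape)
```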

Durant35 commented

LiDAR hardware characteristics

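As a rough sanity check on the hardware numbers, here is a back-of-the-envelope sketch assuming a 64-beam sensor with a 26.9° vertical field of view and 2048 firings per revolution (assumed values, which match the resolutions used in the code later in this thread):

```python
# Derive the projection resolutions from 64-beam LiDAR specs.
beams = 64
vertical_fov_deg = 26.9            # total vertical field of view (assumed)
v_res = vertical_fov_deg / beams   # degrees per image row

cols = 2048                        # assumed firings per 360-degree sweep
h_res = 360.0 / cols               # degrees per image column
print(v_res, h_res)  # 0.4203125 0.17578125
```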

@Durant35 Durant35 added Type: doc Good for newcomers Module: src/nodes ros nodes labels Apr 19, 2018
@Durant35 Durant35 changed the title *.npy dataset preprocess *.npy dataset preprocess procedure Apr 23, 2018
gyubeomim commented

Awesome work!

It's working well in my environment.

Why don't you open a pull request to make it public? :-)


Durant35 commented Apr 23, 2018

@tigerk0430 Thanks for your reply, but this is still in the bug-fixing stage, so stay tuned ;-).

Lapayo commented May 21, 2019

Hey, I tried using your preprocessing; however, my data seems to have a lot more artifacts than with the original preprocessing. Am I doing anything wrong, or are the two preprocessing pipelines different?

Edit: It looks like BichenWuUCB#37
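One plausible source of such artifacts (my own hypothesis, in line with the linked issue) is projection collisions: several points landing in the same (row, column) cell, so later points silently overwrite earlier ones. A self-contained sketch that counts such collisions for synthetic indices:

```python
import numpy as np

# Count projected cells that receive more than one point. The row/column
# indices here are synthetic stand-ins for y_img/x_img from the projection.
rng = np.random.default_rng(1)
rows = rng.integers(0, 64, size=5000)
cols = rng.integers(0, 512, size=5000)

flat = rows * 512 + cols                 # one id per (row, col) cell
_, counts = np.unique(flat, return_counts=True)
collisions = int((counts > 1).sum())     # cells hit by two or more points
print(collisions, 'cells receive multiple points')
```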

Durant35 commented

Code from BichenWuUCB#37 (comment)

```python
import os

import numpy as np


def lidar_to_2d_front_view_3(points, v_res=26.9 / 64,
                             h_res=0.17578125
                             # h_res=0.08
                             ):
    x_lidar = points[:, 0]  # -71~73
    y_lidar = points[:, 1]  # -21~53
    z_lidar = points[:, 2]  # -5~2.6
    r_lidar = points[:, 3]  # reflectance, 0~0.99

    # Distance relative to the origin
    d = np.sqrt(x_lidar ** 2 + y_lidar ** 2 + z_lidar ** 2)

    # Convert resolutions to radians
    v_res_rad = np.radians(v_res)
    h_res_rad = np.radians(h_res)

    # PROJECT INTO IMAGE COORDINATES
    # Without the minus sign the image is mirrored left/right (but why was it
    # flipped in the first place?)
    # -1024~1024, -3.14~3.14
    x_img_2 = np.arctan2(-y_lidar, x_lidar)  # horizontal angle
    # The arcsin variant only covers half of that range: since r is always
    # positive, points behind the sensor project onto the same columns as
    # points in front:
    # x_img_2 = -np.arcsin(y_lidar / r)  # horizontal angle, -1.57~1.57

    angle_diff = np.abs(np.diff(x_img_2))
    threshold_angle = np.radians(250)
    angle_diff = np.hstack((angle_diff, 0.001))  # pad the element lost by diff
    angle_diff_mask = angle_diff > threshold_angle  # computed but unused below
    # print('angle_diff_mask', np.sum(angle_diff_mask), threshold_angle)

    x_img = np.floor(x_img_2 / h_res_rad).astype(int)  # angle -> pixel column
    x_img -= np.min(x_img)  # shift so no coordinate is negative
    # x_img[x_lidar < 0] = 0  # keep only x > 0: points behind the LiDAR are
    # not needed, and the arcsin variant would duplicate them anyway

    # -52~10, -0.4137~0.078
    # y_img_2 = -np.arctan2(z_lidar, r)
    # Same value range, but the minus sign is needed; otherwise the image is
    # flipped upside down.
    y_img_2 = -np.arcsin(z_lidar / d)  # vertical angle
    y_img = np.round(y_img_2 / v_res_rad).astype(int)  # angle -> pixel row
    y_img -= np.min(y_img)  # shift so no coordinate is negative
    y_img[y_img >= 64] = 63  # may spill past the 64 beams, so clamp

    x_max = int(360.0 / h_res) + 1  # width of the projected image
    # x_max = int(180.0 / h_res) + 1

    # Fill the 5-channel (x, y, z, reflectance, range) feature map from the paper
    depth_map = np.zeros((64, x_max, 5))  # +255
    depth_map[y_img, x_img, 0] = x_lidar
    depth_map[y_img, x_img, 1] = y_lidar
    depth_map[y_img, x_img, 2] = z_lidar
    depth_map[y_img, x_img, 3] = r_lidar
    depth_map[y_img, x_img, 4] = d

    # Extract the central 90-degree field of view: 512 pixels wide, 64 high
    start_index = int(x_max / 2 - 256)
    result = depth_map[:, start_index:(start_index + 512), :]

    np.save(os.path.join('../data/samples/0001-3' + '.npy'), result)

    print('write 0001-2')
```
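As a quick sanity check (my own, not from the thread), the same projection math can be run on a synthetic point cloud to confirm that the central crop comes out as a 64×512×5 grid:

```python
import numpy as np

# Synthetic cloud: n points with x, y, z, reflectance in plausible KITTI ranges.
rng = np.random.default_rng(0)
n = 100000
points = np.empty((n, 4))
points[:, 0] = rng.uniform(-70, 70, n)   # x
points[:, 1] = rng.uniform(-20, 50, n)   # y
points[:, 2] = rng.uniform(-5, 2.6, n)   # z
points[:, 3] = rng.uniform(0, 1, n)      # reflectance

h_res = 0.17578125
v_res_rad = np.radians(26.9 / 64)
h_res_rad = np.radians(h_res)
d = np.sqrt((points[:, :3] ** 2).sum(axis=1))

# Horizontal angle -> column index, shifted and clipped into range.
x_max = int(360.0 / h_res) + 1
x_img = np.floor(np.arctan2(-points[:, 1], points[:, 0]) / h_res_rad).astype(int)
x_img = np.clip(x_img - x_img.min(), 0, x_max - 1)

# Vertical angle -> row index, clamped to the 64 beam rows.
y_img = np.round(-np.arcsin(points[:, 2] / d) / v_res_rad).astype(int)
y_img = np.clip(y_img - y_img.min(), 0, 63)

depth_map = np.zeros((64, x_max, 5))
depth_map[y_img, x_img, 0] = points[:, 0]
depth_map[y_img, x_img, 3] = points[:, 3]
depth_map[y_img, x_img, 4] = d

# Central 90-degree crop, as in the function above.
start = x_max // 2 - 256
crop = depth_map[:, start:start + 512, :]
print(crop.shape)  # (64, 512, 5)
```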
