voxel_metrics produces too many voxels? #439

Closed
zoeschindler opened this issue Jun 8, 2021 · 3 comments
Labels: Bug (A bug in the package)


zoeschindler commented Jun 8, 2021

Hello,

I am having trouble again with voxel_metrics(). For simplicity, I am using the same dummy data again. The function seems to create too many voxels, and those voxels have odd coordinate values. Or maybe I used it wrong, I am not too sure.

I first read in the data and cut it to 2m height:

las <- readTLSLAS("area_4_norm_1.las")
las <- filter_poi(las, Z <= 2)
extent(las)
#>  class      : Extent 
#>  xmin       : 446835 
#>  xmax       : 446850 
#>  ymin       : 5379675 
#>  ymax       : 5379690 

Then, I calculated 10 cm voxels:

voxels <- voxel_metrics(las, length(X), res=0.1, all_voxels=TRUE)
nrow(voxels)
#> [1] 477740

However, if I understand the function correctly, there should be 150 * 150 * 21 = 472500 voxels?
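As a sanity check, these numbers follow from the extent and the resolution (assuming the grid covers the full 15 m x 15 m extent and the Z levels run from 0 m to 2 m inclusive):

(446850 - 446835) / 0.1    # 150 unique X
(5379690 - 5379675) / 0.1  # 150 unique Y
2 / 0.1 + 1                # 21 unique Z
150 * 150 * 21
#> [1] 472500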
When I look at the unique values of the coordinates, there are too many:

length(unique(voxels$X)) # should be 150 -> no (210)
length(unique(voxels$Y)) # should be 150 -> no (210)
length(unique(voxels$Z)) # should be 21 -> no (28)

I then rounded the coordinates to centimetres, multiplied by 100, and cast them to integers, so that tiny decimal places have no effect. With that, the number of unique values is right:

new_x <- as.integer(round(voxels$X, 2)*100)
new_y <- as.integer(round(voxels$Y, 2)*100)
new_z <- as.integer(round(voxels$Z, 2)*100)
length(unique(new_x)) # should be 150 -> yes
length(unique(new_y)) # should be 150 -> yes
length(unique(new_z)) # should be 21 -> yes

However, looking at how often each value occurs, they are unevenly spread (each unique X should occur 150 * 21 = 3150 times):

summary(as.factor(new_x))
#> 44683735 44684285 44683715 44684065 44683725 44683705 44683985 44684415 44683785 44684035 44683815
#>     3329     3257     3256     3256     3244     3240     3235     3230     3227     3227     3226
#> ... (I think it's enough to show the first few)

Are there actually too many voxels returned, or am I misunderstanding something?
And is there a way around the odd voxel coordinates? I tried to circumvent this by converting them into integers storing cm values, but that alone did not work out; a sketch of what I was aiming for is below.
Sorry for the long-ish post.
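For reference, the cleanup I was aiming for looks roughly like this (a rough sketch only; it assumes the result is a data.table and that the count of the unnamed metric is stored in a column named V1):

library(data.table)
v <- as.data.table(voxels)
# snap each coordinate to exact centimetres so near-duplicates collapse
v[, c("X", "Y", "Z") := lapply(.SD, function(u) round(u * 100) / 100),
  .SDcols = c("X", "Y", "Z")]
# keep one row per voxel, preferring rows that carry a real point count
setorder(v, X, Y, Z, V1, na.last = TRUE)
v <- unique(v, by = c("X", "Y", "Z"))
nrow(v)  # 472500 expected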

Jean-Romain (Collaborator) commented

diff(sort(unique(voxels$X)))

This is interesting because it returns mostly 0.1 but also some 5.8e-11, which explains why I did not notice the problem. I think you caught the error; I confirm there is something wrong. I'm sorry, my previous fix may have introduced another problem. I'm currently on vacation, so I'm fixing stuff a bit too quickly without advanced checks. I'll check that one asap.
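For the record, a compact way to see the two gap populations (same voxels object as above):

gaps <- diff(sort(unique(voxels$X)))
table(signif(gaps, 2))  # expect mostly 0.1 plus a handful of ~5.8e-11 entries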

Jean-Romain self-assigned this Jun 9, 2021
Jean-Romain added the Bug (A bug in the package) label Jun 9, 2021
Jean-Romain (Collaborator) commented

Fixed. It was a floating point accuracy issue. For example, a voxel at X = 446835.15 was actually at 446835.14999999996508, because 446835.15 is not representable as a 64-bit double. However, the same X = 446835.15 was computed as 446835.15000000002328 for the supplementary voxels with no points. So the algorithm did not recognize them as the same X; indeed, they were different numbers. The two are now rounded the same way, so they are expected to always be identical.
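The effect is easy to reproduce in base R. The spacing between adjacent doubles at this magnitude is 2^-34, about 5.8e-11, which matches the spurious gaps seen in diff() above (illustrative; the exact printed digits can vary by platform):

sprintf("%.20f", 446835.15)  # not exactly 446835.15
2^-34                        # gap between adjacent doubles near 446835
#> [1] 5.820766e-11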

zoeschindler (Author) commented

Sorry for disturbing your vacation! And thank you very much for fixing this :)
