I'd like to address the plausibility of Euclideon's Unlimited Detail in a rational manner before dismissing it outright as something that already exists or has too many disadvantages to work well. Below I'll state my case and explain as best I can.
First and foremost, it is a variation of a Sparse Voxel Octree engine. That said, it is not a conventional Sparse Voxel Octree engine through and through. While I personally have not had hands-on experience with this technology, I have seen enough of Mr. Dell's explanations to surmise what he is actually doing differently, and it is indeed extremely clever.
In regard to the standard model-format limitations of Sparse Voxel Octree methodologies, I am fairly certain Mr. Dell is telling the truth when he says those limits do not apply to what he is doing. What this comes down to is how the data is accessed, coupled with how that data is represented in screen space.
Technically, a sparse voxel octree system offers unlimited detail by default. The limitation is available memory in conjunction with the resolution of the screen space. This is a known advantage/disadvantage of that system, so Euclideon making the claim of 'Unlimited Detail' does not immediately flag it as a scam. He's simply stating the blatantly obvious for those who already know Sparse Voxel Octree systems.
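The screen-space bound above can be made concrete with a little arithmetic. This is my own illustration, not anything from Euclideon: since at most one point cloud sample is needed per pixel, the per-frame workload is fixed by the display, no matter how detailed the scene data is.

```python
# Illustrative arithmetic (my own sketch, not Euclideon's code): the
# screen-space limit means the renderer never needs to resolve more
# points per frame than the display has pixels.
def max_points_per_frame(width: int, height: int) -> int:
    """Upper bound on points that must be resolved for one frame:
    one point cloud sample per screen pixel."""
    return width * height

# At 1080p, "unlimited" detail collapses to a fixed per-frame workload:
print(max_points_per_frame(1920, 1080))  # 2073600 points, regardless of scene size
```

This is why "unlimited" is not an absurd word here: the scene can grow without bound while the per-frame cost stays pinned to the resolution.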
Where we find a somewhat unconventional claim is in the idea that Euclideon has solved the fundamental computational cost of large resolutions, together with the file-size constraints that would normally make refined point cloud representations gargantuan in memory. But again, the clue to how he has gone about solving this is right on the surface: he likens it to a search-engine algorithm for narrowing down what needs to be dealt with.
Normally, a Sparse Voxel Octree depends on branching and algorithmic subdivision to reach the LOD we're looking for. For high-definition detail, that can be expensive as the algorithm keeps trying to resolve further detail. However, I'd like to point out that this problem exists only if you are forced to traverse the voxel hierarchy in a linear manner, much like procedural textures resolving detail.
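To make the linear-traversal cost explicit, here is a minimal sketch of a conventional top-down octree descent (my own illustration, not Euclideon's code). The point is that reaching a node at depth N costs N steps through its ancestors; there is no shortcut in the naive scheme.

```python
# A minimal sketch of the linear top-down descent a conventional Sparse
# Voxel Octree performs to reach a given level of detail. Each extra LOD
# level costs another round of subdivision before deeper detail resolves.

class OctreeNode:
    def __init__(self, value=None, children=None):
        self.value = value              # voxel payload at this node, if any
        self.children = children or {}  # octant index (0-7) -> OctreeNode

def resolve(node, octant_path):
    """Descend one octant per step; the path length is the LOD level.
    Cost is proportional to depth -- you cannot reach a deep node
    without walking through every one of its ancestors first."""
    steps = 0
    for octant in octant_path:
        if octant not in node.children:
            break                       # sparse: empty space ends the walk early
        node = node.children[octant]
        steps += 1
    return node.value, steps

# Example: a tiny tree three levels deep.
leaf = OctreeNode(value="fine detail")
mid = OctreeNode(children={4: leaf})
root = OctreeNode(children={2: mid})
print(resolve(root, [2, 4]))  # ('fine detail', 2): two steps for LOD level 2
```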
In light of this, we must ask what, then, is going on to circumvent these limitations?
The clue is his mention that his system works more like a search engine. This implies he has stored the point cloud data in a format in which individual points are indexed within the file itself, and the file's contents are searchable without loading the entire file.
The best explanation I can give about this is the following analogy:
Let's say you have all of Wikipedia. Clearly, the amount of data required to show all of it at once is astronomical; you could not display the entirety of Wikipedia on your computer screen at the same time. This is why Wikipedia has a search box, where you type in a query and retrieve an individual page from within the mountains of data contained within.
Because Wikipedia is indexed, it does not have to start on page 1 and scan its millions of pages in a linear manner to reach what you were looking for. Instead, the indexing algorithm knows to skip most of Wikipedia and go straight to the matches for that search. This is also why it doesn't take your entire lunch hour for Google to return a result.
Now, imagine all of Wikipedia was named Wikipedia.3DS
It remains internally searchable, and the majority of that file (regardless of size) is immediately irrelevant to the search criteria, so the majority of the data in Wikipedia.3DS is ignored except for the point data you just searched for. Likewise, you never had to load (or resolve) all of Wikipedia.3DS before searching inside it, any more than Google has to load the entire Internet to find what you're looking for.
I refer to this technique as Fractional File Indexing.
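The analogy can be sketched in code. Everything here is my own assumption of how such a format could work (the record layout and names are hypothetical, not Euclideon's): a small index maps a key to a byte offset, so a lookup seeks directly to one record instead of reading the whole file.

```python
# A sketch of "search the file without loading it": an index of byte
# offsets lets a lookup seek straight to one record. The on-disk format
# here is invented for illustration only.

import io
import json
import struct

def write_indexed_file(records):
    """Write records as length-prefixed JSON blobs; return (bytes, index)."""
    buf = io.BytesIO()
    index = {}                               # key -> byte offset of its record
    for key, payload in records.items():
        index[key] = buf.tell()
        blob = json.dumps(payload).encode()
        buf.write(struct.pack("<I", len(blob)))  # 4-byte length prefix
        buf.write(blob)
    return buf.getvalue(), index

def lookup(data, index, key):
    """Seek straight to one record; the rest of the file stays untouched."""
    f = io.BytesIO(data)
    f.seek(index[key])                       # jump: no linear scan of the file
    (length,) = struct.unpack("<I", f.read(4))
    return json.loads(f.read(length))

data, index = write_indexed_file({"cell_2_4": {"points": 12345},
                                  "cell_7_1": {"points": 678}})
print(lookup(data, index, "cell_7_1"))  # {'points': 678}
```

The same idea works against a file on disk with `open(..., "rb")` and `seek`; only the index and the one record you asked for ever need to be read.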
As for the file size of those searchable point cloud models, the point cloud data itself would be incredibly compact to begin with, because the representation is algorithmic. That is another stated benefit of Sparse Voxel Octree systems: they are ridiculously efficient. What Euclideon seems to have innovated is the ability to restructure the steps of the algorithmic LOD so as to skip the majority of the linear computations and go straight to the LOD branch needed at that moment. Since screen space and memory are the limits on detail, this method plausibly frees up the bulk of the computation time normally spent on mundane recursion.
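One way the "go straight to the LOD branch" step could work in principle (this is my own guess at the technique, not a confirmed implementation) is to compute the target octree depth from projected screen size up front, then use a per-level index to fetch nodes at exactly that depth, skipping the recursion through every coarser level.

```python
# A hedged sketch: choose the LOD depth directly from screen-space size,
# rather than discovering it by recursive subdivision. The constants and
# function are illustrative assumptions, not Euclideon's actual math.

import math

def lod_depth(node_world_size, distance, pixels_per_unit=1000.0, max_depth=20):
    """Depth at which one voxel projects to roughly one pixel."""
    projected = node_world_size * pixels_per_unit / max(distance, 1e-6)
    if projected <= 1.0:
        return 0                     # whole node fits in a pixel: coarsest LOD
    return min(max_depth, math.ceil(math.log2(projected)))

# With a per-level index, the renderer fetches nodes at exactly this depth,
# never touching the intermediate levels of the hierarchy.
print(lod_depth(node_world_size=10.0, distance=50.0))
```

This would turn the per-pixel cost from "walk the tree top to bottom" into "compute a depth, then do one indexed fetch," which is exactly the kind of saving the search-engine comparison suggests.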
In regard to animating Sparse Voxel Octrees, it is actually possible, as shown here:
www.youtube.com...