This was a side project that I started in December 2018; however, the start of the year was a very busy time for me, so it went on the backlog for a later date.
So what is Pixel Perfect?
Pixel Perfect is an app for iOS and Android that lets the user take a picture of their surroundings, which is then converted into a little miniature pixel world based on that picture.
What did I learn from previous development?
- Start with a detailed plan; I had a plan for how I wanted to convert the world from a picture, but I didn't think enough about the specifics of that.
- Pixels; Previously I was converting into a game world using the GetPixel function, which was fine until I realised the quality of pictures most cameras are producing. A modern phone camera easily produces a picture of over ten million pixels, and that proved to be a massive overhead for personal computers, let alone mobile.
- Terrain Generation; Previously I was spawning prefabs for the terrain and then editing that with little direction apart from a small set of rules. This time I'm overhauling the terrain generation to be a separate system which uses height maps and the marching cubes algorithm.
- Optimisation; Optimisation of models and of systems so that they can run effectively on mobile devices. My previous approach was to make it for PC first, and that was a mistake.
What do I need to make Pixel Perfect?
Systems:
- Camera functionality on phone
- Re-sampling of picture
- Node based grid system
- Height map
- Marching cubes
- Generation Rules
- Assets
- UI
- Polish
Where are we at currently?
Camera functionality on phone:
Status: Completed
Unity has a class called WebCamTexture (the available cameras are listed through the WebCamDevice struct) which allows us to stream the data from the camera to a texture. So that's exactly what I've done.
The great part about this is that we can pause the stream at any time and then capture the pixels from the last frame, thus taking the “picture”. We just need to add in a few effects to polish it up (i.e. shutter sound, camera click).
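As a rough sketch of that flow (the class and method names here are mine, assuming Unity's WebCamTexture API rather than the project's exact code):

```csharp
using UnityEngine;

// Minimal sketch of streaming the device camera and grabbing a still frame.
// Class and method names are illustrative, not the project's actual code.
public class CameraCapture : MonoBehaviour
{
    private WebCamTexture webcam;

    void Start()
    {
        // Start streaming the default device camera into a texture.
        webcam = new WebCamTexture();
        webcam.Play();
    }

    public Texture2D TakePicture()
    {
        // Freeze the stream, then copy the last frame's pixels into a Texture2D.
        webcam.Pause();
        Texture2D photo = new Texture2D(webcam.width, webcam.height);
        photo.SetPixels32(webcam.GetPixels32());
        photo.Apply();
        return photo;
    }
}
```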
Re-sampling of picture:
Status: Completed
This one was an adventure: I spent so long running around in circles trying to find exactly what I wanted. At first I thought it was as simple as using the Resize function, but that actually just crops the picture.
Finding direction online was also difficult, as the right question had to be asked to get what I was looking for.
This is what i ended up with:
We create a new Rect, scale it with the function, then create a new texture, resize that texture to the new width and height, and then read the pixels from the screen into the saved texture data.
And this is the function used to scale the texture.
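Pieced together, it looks roughly like this (a sketch of the approach described above using Unity's GL and Graphics APIs; the names and details are mine, not the project's exact code):

```csharp
using UnityEngine;

// Rough sketch of the re-sampling approach described above: draw the source
// picture into a render target at the new size on the GPU, then read the
// scaled pixels back into the original Texture2D.
public static class TextureResampler
{
    public static void Resample(Texture2D source, int targetWidth, int targetHeight)
    {
        Rect targetRect = new Rect(0, 0, targetWidth, targetHeight);

        // Draw the source into a temporary RenderTexture at the target size.
        source.filterMode = FilterMode.Bilinear;
        RenderTexture rt = RenderTexture.GetTemporary(targetWidth, targetHeight);
        Graphics.SetRenderTarget(rt);
        GL.LoadPixelMatrix(0, 1, 1, 0);
        GL.Clear(true, true, Color.clear);
        Graphics.DrawTexture(new Rect(0, 0, 1, 1), source);

        // Resize the texture, then read the scaled result back from the render target.
        source.Resize(targetWidth, targetHeight); // Reinitialize() in newer Unity versions
        source.ReadPixels(targetRect, 0, 0);
        source.Apply();

        // Clean up.
        Graphics.SetRenderTarget(null);
        RenderTexture.ReleaseTemporary(rt);
    }
}
```

The target width and height are whatever resolution the pixel world should be generated at, so a multi-megapixel photo can be shrunk down to something the grid system can actually handle.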
Now we have the basis from which our node grid can be built and generation can start.
Node based grid system:
Status: Completed
Before we start creating a grid system we need to establish what information the grid will hold in each tile.
For this we create a serializable class (a sketch follows the list) containing its:
- Position
- Type of Tile
- a List of its neighbours
- Constructor
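A minimal sketch of what that class could look like (the field names and the TileType values are placeholders of mine, not the project's actual definitions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the per-tile data described above; names are illustrative.
[System.Serializable]
public class Tile
{
    public Vector3 position;      // where the tile sits in the grid/world
    public TileType type;         // what kind of tile this pixel became
    public List<Tile> neighbours; // adjacent tiles in the grid

    public Tile(Vector3 position, TileType type)
    {
        this.position = position;
        this.type = type;
        this.neighbours = new List<Tile>();
    }
}

// Hypothetical tile categories for the sake of the example.
public enum TileType { Water, Grass, Rock, Sand }
```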
Then, for each pixel in the picture that the user has taken, we generate a tile: this takes in the position in the array and instantiates a cube at the same position as the pixel, with the same colour.
The data for that tile is then passed into the constructor and stored in the tile data grid.
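Roughly, that generation pass might look like the following sketch (it assumes the Tile class from the previous sketch and a hypothetical GridGenerator component; the real project's rules for choosing tile types will differ):

```csharp
using UnityEngine;

// Sketch of the generation pass: one tile (and one coloured cube) per pixel.
public class GridGenerator : MonoBehaviour
{
    public Texture2D resampledPicture; // output of the re-sampling step
    private Tile[,] tileGrid;

    public void Generate()
    {
        int width = resampledPicture.width;
        int height = resampledPicture.height;
        tileGrid = new Tile[width, height];

        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < height; y++)
            {
                // The picture is already re-sampled, so per-pixel reads are cheap here.
                Color pixel = resampledPicture.GetPixel(x, y);
                Vector3 position = new Vector3(x, 0, y); // pixel grid mapped onto the XZ plane

                // Spawn a cube at the pixel's position and tint it with the pixel colour.
                GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                cube.transform.position = position;
                cube.GetComponent<Renderer>().material.color = pixel;

                // Store the tile data in the grid (TileType here is a placeholder;
                // the generation rules will decide the real type).
                tileGrid[x, y] = new Tile(position, TileType.Grass);
            }
        }
    }
}
```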
Sure, there may be some more information that needs to be added to each tile, but for the moment I think this will suffice.
Height map system:
Status: In Progress