our/place

Timelapse

Downloads

This download contains snapshots taken every 10 seconds, a total of 8,479 files. You could use these to create a heatmap or other cool analyses :)

The timelapse was generated by running:

ffmpeg -framerate 60 -pattern_type glob -i 'mosaic_images/*.png' -vf "pad=width=1024:height=576:x=0:y=0:color=white" -s 2048x1152 -sws_flags neighbor -c:v libx264 -pix_fmt yuv420p out.mp4
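
If you want to try the heatmap idea, here's a minimal sketch in Rust. It assumes the snapshots are same-sized PNGs in a mosaic_images/ directory (the one the ffmpeg command above reads from) and uses the image crate; the grayscale output is just one possible rendering.

// Sketch: count how often each pixel changes between consecutive
// snapshots, then render the counts as a grayscale heatmap.
// Assumes same-sized PNGs whose filenames sort chronologically.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut paths: Vec<_> = std::fs::read_dir("mosaic_images")?
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .collect();
    paths.sort();

    let first = image::open(&paths[0])?.to_rgba8();
    let (w, h) = first.dimensions();
    let mut heat = vec![0u32; (w * h) as usize];

    let mut prev = first;
    for path in &paths[1..] {
        let cur = image::open(path)?.to_rgba8();
        for (i, (a, b)) in prev.pixels().zip(cur.pixels()).enumerate() {
            if a != b {
                heat[i] += 1; // this pixel changed between snapshots
            }
        }
        prev = cur;
    }

    // Normalize the counts to 0..=255 and save as a grayscale PNG.
    let max = heat.iter().copied().max().unwrap_or(1).max(1);
    let img = image::GrayImage::from_fn(w, h, |x, y| {
        image::Luma([((heat[(y * w + x) as usize] * 255) / max) as u8])
    });
    img.save("heatmap.png")?;
    Ok(())
}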

Statistics

  • 121,187 users signed in
  • 99,125 Truffle users
  • 379,053 site visits
  • ~45,000 peak concurrent users
  • ~8,000 peak pixels placed per second
  • I forgot to collect other stats

Technical Details

The frontend was built with Svelte and deployed on Vercel using Edge Functions. I created the canvas functionality from scratch, which was probably a bad decision. Users were authenticated with Firebase for ease of use, but Firebase wasn't used for any other part of the project.

Admin Control Panel

Live updates were achieved with a websocket connection, using a custom binary serialization format built with Bebop. Every 500ms, the server sends a delta of every pixel that's been updated since the last delta, and every minute it sends a full re-sync of the entire canvas. The re-sync is what allowed me to change the canvas size, color palette, etc. at any point.
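
The actual Bebop schema isn't published here, but as a rough illustration, here's a hand-rolled sketch of what a pixel-delta message could look like on the wire. The field layout (u16 coordinates, u8 palette index, little-endian length prefix) is an assumption, not the real format.

// Illustrative only: a hand-rolled stand-in for the Bebop-generated
// delta message. Field widths are assumptions, not the real wire format.
struct PixelUpdate {
    x: u16,
    y: u16,
    color: u8, // index into the current palette
}

// Encode a batch of updates as: [count: u32 LE][x, y, color] * count
fn encode_delta(updates: &[PixelUpdate]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(4 + updates.len() * 5);
    buf.extend_from_slice(&(updates.len() as u32).to_le_bytes());
    for u in updates {
        buf.extend_from_slice(&u.x.to_le_bytes());
        buf.extend_from_slice(&u.y.to_le_bytes());
        buf.push(u.color);
    }
    buf
}

fn decode_delta(buf: &[u8]) -> Option<Vec<PixelUpdate>> {
    let count = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    let mut updates = Vec::with_capacity(count);
    for chunk in buf.get(4..4 + count * 5)?.chunks_exact(5) {
        updates.push(PixelUpdate {
            x: u16::from_le_bytes([chunk[0], chunk[1]]),
            y: u16::from_le_bytes([chunk[2], chunk[3]]),
            color: chunk[4],
        });
    }
    Some(updates)
}

fn main() {
    let updates = vec![PixelUpdate { x: 10, y: 20, color: 3 }];
    let bytes = encode_delta(&updates);
    let decoded = decode_delta(&bytes).expect("valid frame");
    assert_eq!(decoded[0].x, 10);
}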

The backend was written in Rust with tokio-tungstenite and deployed on AWS ECS using CDK. I didn't use any external databases like Redis, instead choosing to create my own central backend service that stores the entire canvas in memory (backing it up to S3 on an interval and automatically restoring from those backups).
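
As a rough sketch of that backup loop, assuming a tokio runtime: upload_to_s3 below stands in for a real S3 PutObject call (e.g. via the aws-sdk-s3 crate), and the 60-second period is a placeholder, not the service's actual interval.

use std::sync::Arc;
use tokio::sync::RwLock;
use tokio::time::{interval, Duration};

// The canvas is just a flat buffer of palette indices held in memory.
type Canvas = Arc<RwLock<Vec<u8>>>;

// Stand-in for a real S3 upload; the name and signature are illustrative.
async fn upload_to_s3(snapshot: Vec<u8>) {
    println!("would upload {} bytes to S3", snapshot.len());
}

// Spawn a background task that snapshots the canvas on an interval.
fn spawn_backup_task(canvas: Canvas) {
    tokio::spawn(async move {
        let mut ticker = interval(Duration::from_secs(60)); // assumed period
        loop {
            ticker.tick().await;
            // Clone under the read lock so pixel writes are only blocked
            // for the duration of the copy, not the upload itself.
            let snapshot = canvas.read().await.clone();
            upload_to_s3(snapshot).await;
        }
    });
}

#[tokio::main]
async fn main() {
    let canvas: Canvas = Arc::new(RwLock::new(vec![0u8; 1024 * 1024]));
    spawn_backup_task(canvas.clone());
    // ... serve websocket traffic here ...
    tokio::time::sleep(Duration::from_secs(1)).await;
}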

Technical Layout

The backend communicates with a number of websocket gateways using tarpc over HTTP. The gateway count could be auto-scaled, or manually scaled up and down to 100 or more. The event started with 24 gateways, which maxed out at 36,000 concurrent users before crashing and requiring an upscale of both instance specs and gateway count. Luckily, the upscale went smoothly and service was restored in about 2 minutes without any data loss.
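
The page doesn't show the actual RPC surface, but a tarpc service definition for the backend-to-gateway fan-out might look roughly like this; the method names and payloads are hypothetical.

// Sketch of a tarpc service definition for backend -> gateway RPC.
// Method names and payloads are hypothetical; only the overall shape
// (#[tarpc::service] on a trait) is how tarpc actually works.
#[tarpc::service]
trait Gateway {
    /// Fan a binary pixel delta out to every connected websocket client.
    async fn broadcast_delta(delta: Vec<u8>);

    /// Push a full canvas re-sync (the once-a-minute message).
    async fn broadcast_full_sync(canvas: Vec<u8>);
}

The macro generates a GatewayClient stub for the backend to call and a trait for each gateway process to implement.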

Each websocket gateway is multi-threaded and can handle at least 2,000 connections. This means the architecture could scale to several hundred thousand concurrent users (100+ gateways × 2,000 connections each), and probably more with optimization. Since it would only be used for short-term events, the pricing isn't too bad.
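
A gateway's connection handling might look roughly like the sketch below, using tokio-tungstenite as named above; the bind address and per-connection logic are illustrative. The multi-threading comes from tokio's default multi-threaded runtime spreading the per-connection tasks across worker threads.

use futures_util::StreamExt;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("0.0.0.0:8080").await?;
    loop {
        let (stream, _addr) = listener.accept().await?;
        // Each connection gets its own task; tokio's multi-threaded
        // runtime distributes these across worker threads.
        tokio::spawn(async move {
            let Ok(mut ws) = tokio_tungstenite::accept_async(stream).await else {
                return; // handshake failed; drop the connection
            };
            while let Some(Ok(msg)) = ws.next().await {
                // Handle pixel placements / forward deltas here.
                let _ = msg;
            }
        });
    }
}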

There's a lot of potential for future additions. For example, Truffle integration only took me an hour or two, even though it required pretty deep changes within the frontend and backend to make it work.

FAQ

Can I use this for my own content?

  • If you're already friends with Ludwig, maybe. Reach out on Twitter.
Will this be open-sourced?

  • No, never. I learned from CrewLink. If you want to see the source code, watch my stream.

How much did this cost to run?

  • A couple hundred dollars on AWS. It would have been less if I had realized that CloudWatch charges an arm and a leg.