Parchment DevBlog #1
Hello all! I’m considering this the first official project update for Parchment - and there’s a lot to update since my last technical blog in October. We’ve got some new art to show, team growth to highlight, and some goals to share. I hope to make a monthly post to share the progress made, even if I’m sending this out to the void with very little readership. <3
Shiny new icons
First up, I’ve hired an artist to do some freelance work for the game, starting with our in-game resource icons:
![The six new resource icons]()
Credit: Zev
Breaking down these icons, we see from left to right…
- Wood: Abundant early-game resource for construction
- Food: Abundant early-game resource for units and research
- Stone: Early-to-mid-game resource for construction
- Gold: Mid-game resource with wide usage
- Obsidian: Rare late-game resource for special upgrades
- Aether: Rare late-game resource for magical upgrades
I wonder where in the generated world one can find all of these resources…
Solo -> Duo
Great news! Another developer (and friend) has decided to jump in and help here and there! Meet Cheeselog, the second official team member! Although technical in other ways, he is an aspirant when it comes to game development, so I’ve taken him on as a junior developer in a learning capacity. He’s already fixed one bug! Feel free to stop by our Discord Server and say hi.
The team growing from one to two means something significant. We’re essentially a non-funded startup, and we’ll have to come up with a studio name. Exciting times!
December’s Goal
Our goal for December is to launch a test realm in the cloud. Each realm will have its own name, similar to MMO game servers. This one we’re calling Hightower. The naming scheme for realms is taken from a Dungeons & Dragons table I am the DM for. In fact, we have officially gotten our hands on a couple of Unity build machines that we’ve named after characters from that table.

This test realm won’t be very feature-rich; instead it places weight on ensuring all the network pieces are operating, which is critical. That also means we have to spin up our build pipelines (and the aforementioned build machines), get things deployed to the cloud, and iron out all the kinks to get us from code-written -> game-playable. I’ve added a technical section below for those who might want to read up on what we’ve done so far to accomplish this.

The Hightower realm won’t be accessible to the general public yet (no Steam release or anything of that nature); instead it will be available internally to trusted friends. Eventually, however, we hope for Hightower to become the “PTR” - Public Test Realm. Perhaps in early spring of 2026.
The Technical Bits, Deploying, CI, and more
In my experience, when you’re building a server/client system from scratch, the longer you wait to actually decouple it from the local machine and deploy the service for a stand-alone client to connect to, the more technical debt you acquire. Some of that debt is invisible to you; either way, you shove it aside as if it will be simple to handle later.
*I need an additional datastore for this? That’s fine, the local database emulator can handle it. I need to save files in the cloud? I’ll save them locally for now.*
Recently, I decided it was time to pause my feature development and dive into the operations side of the house. Continuous Integration pipelines so that every code check-in automatically produces a new build. Deployments to a cloud test environment to begin fleshing out what a real server environment looks like. All that sort of thing. This immediately meant two things:
1. Handling all of my “local-only” implementations
Saving Generated Map Slices Locally
To get things jump-started, every time we generated a new slice of the map, that slice was saved locally wherever the server ran. This immediately becomes an issue when:
- Many hosting platforms don’t let you fill up the VM’s hard drive (nor should you, unless you can guarantee you’ll get the same drive with a new VM)
- Many VMs don’t have large disks and many hosting solutions lean toward “serverless”, which is fancy for “you can’t manage the server underneath”
- Other servers don’t have the same data and will have to get it themselves somehow.
None of this is a huge deal when running our game locally in development mode, but in a cloud environment it’s a big deal. So, we converted it all to use Azure Blob Storage. It was an easy swap at the end of the day, since the bits that get and save slices are behind interfaces.
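To give a feel for the shape of that swap, here’s a minimal sketch. The interface and type names (`ISliceStore`, `FileSliceStore`, `BlobSliceStore`) are illustrative stand-ins rather than Parchment’s actual code; the point is that the local and cloud implementations sit behind the same interface, so the rest of the server doesn’t care which one is plugged in:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Hypothetical abstraction for saving/loading generated map slices.
public interface ISliceStore
{
    Task SaveAsync(string sliceId, byte[] data);
    Task<byte[]?> LoadAsync(string sliceId);
}

// Local-only implementation: fine for dev mode, not for a cloud VM.
public sealed class FileSliceStore : ISliceStore
{
    private readonly string _root;
    public FileSliceStore(string root) => _root = root;

    public Task SaveAsync(string sliceId, byte[] data) =>
        File.WriteAllBytesAsync(Path.Combine(_root, sliceId + ".slice"), data);

    public async Task<byte[]?> LoadAsync(string sliceId)
    {
        var path = Path.Combine(_root, sliceId + ".slice");
        return File.Exists(path) ? await File.ReadAllBytesAsync(path) : null;
    }
}

// Cloud implementation: same interface, backed by Azure Blob Storage.
public sealed class BlobSliceStore : ISliceStore
{
    private readonly BlobContainerClient _container;
    public BlobSliceStore(BlobContainerClient container) => _container = container;

    public Task SaveAsync(string sliceId, byte[] data) =>
        _container.GetBlobClient(sliceId).UploadAsync(new BinaryData(data), overwrite: true);

    public async Task<byte[]?> LoadAsync(string sliceId)
    {
        var blob = _container.GetBlobClient(sliceId);
        if (!(await blob.ExistsAsync()).Value) return null;

        var result = await blob.DownloadContentAsync();
        return result.Value.Content.ToArray();
    }
}
```

With a split like this, dev mode registers the file-backed store and the cloud deployment registers the blob-backed one, and nothing else in the slice-generation path has to change.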
Short-running instances
It’s easy to wipe your local storage and start fresh every single time you run the game in development mode locally. A deployed server cluster is far more complicated. Recently I’ve introduced shutdown and reset mechanisms that warn players of a realm shutdown, then gracefully shut down pieces of the “beehive” and drain connections from the server. Essentially, I’ve introduced a Maintenance Mode so that we can pause data streams for Realms while we deploy a new pod of servers, if we want to, for example.
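As a rough sketch of what a shutdown warning plus drain period can look like (the `RealmHub` name, the client method name, and the 60-second window are assumptions for illustration, not our exact implementation), a hosted service can hook the host’s shutdown signal, warn connected players over SignalR, and then hold the process open long enough for connections to drain:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

// Stand-in hub type; the real game's hub would have actual methods on it.
public sealed class RealmHub : Hub { }

// Hypothetical maintenance-mode service: when the host starts shutting down
// (e.g. before deploying a new pod of servers), warn players, then give
// connections a grace period to drain before the process exits.
public sealed class MaintenanceModeService : IHostedService
{
    private readonly IHubContext<RealmHub> _hub;
    private readonly IHostApplicationLifetime _lifetime;

    public MaintenanceModeService(IHubContext<RealmHub> hub, IHostApplicationLifetime lifetime)
    {
        _hub = hub;
        _lifetime = lifetime;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _lifetime.ApplicationStopping.Register(() =>
        {
            // Fire-and-forget warning to every client connected to this server.
            _ = _hub.Clients.All.SendAsync("RealmShutdownWarning",
                "Hightower is entering maintenance mode shortly. Please wrap up.");
        });
        return Task.CompletedTask;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        // Grace period so clients can disconnect cleanly and streams can pause.
        await Task.Delay(TimeSpan.FromSeconds(60), cancellationToken);
    }
}
```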
2. Handling all of the “scale-out” landmines
Orleans Landmine
The beehive/grain structure detailed in other posts needs some special TLC to work in a real server environment. Very few of the cheap Azure/AWS offerings support all the special silo-to-silo communication needed here. There are of course larger offerings, like Kubernetes, which do come with support for all that communication, but depending on the region they require at least two nodes at around $60 USD/month each. I don’t think I want my test environment this early in development to cost $120/month.
Luckily I was able to settle on some middle ground in Azure, using Azure App Services with special commands that unlock private ports in a Virtual Network. It’s not an easy setup, but it’s an affordable one that works. Having not read every single letter of the docs, I didn’t know this was going to be a problem until I looked at the telemetry coming out of the server cluster:

Things were working, but I had effectively chopped my “beehive” into two hives that could not communicate with each other. After researching a bunch and sleeping on it, the telemetry the next morning showed that a new and improved deployment fixed the issue.
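For anyone curious what the silo side of that setup looks like, here’s a minimal sketch in the spirit of the Orleans-on-App-Service guidance. The clustering connection string name and the port ordering are assumptions for illustration, not necessarily our exact configuration. The gist: App Service hands each instance a private VNet IP and a set of private ports through environment variables, and each silo has to advertise those endpoints (and share a clustering table) so that every instance joins the same cluster instead of forming isolated hives:

```csharp
using System;
using System.Net;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Orleans.Hosting;

// Sketch of an Orleans silo host configured for Azure App Service + VNet integration.
using var host = Host.CreateDefaultBuilder(args)
    .UseOrleans((context, silo) =>
    {
        // Every silo pointing at the same storage table discovers the others,
        // so all instances form one cluster instead of two separate "hives".
        silo.UseAzureStorageClustering(options =>
            options.ConfigureTableServiceClient(
                context.Configuration.GetConnectionString("ClusteringStorage"))); // assumed name

        // App Service exposes a private VNet IP and private ports via environment
        // variables; the silo must advertise them or the instances can't reach
        // each other (the "two hives" problem above).
        var privateIp = IPAddress.Parse(
            Environment.GetEnvironmentVariable("WEBSITE_PRIVATE_IP")!);
        var privatePorts = Environment.GetEnvironmentVariable("WEBSITE_PRIVATE_PORTS")!
            .Split(',');

        silo.ConfigureEndpoints(
            privateIp,
            int.Parse(privatePorts[0]),   // silo-to-silo port
            int.Parse(privatePorts[1]),   // client gateway port
            listenOnAnyHostAddress: true);
    })
    .Build();

await host.RunAsync();
```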
SignalR Landmine
SignalR is a realtime technology built on WebSockets that we’re using for server<->client realtime communication. Players essentially bind their connection to a particular server - that server becomes their broker of information.
“However, when you [add servers…], clients can get routed to different ones. A client that is connected to one server will not receive messages sent from another server.”
This can be a bad thing, especially if you’re going for a more stateless approach (any server should be able to help you). If one server knows all your details and the others do not, you could get bad data by talking to the wrong one. However, with the Beehive (Orleans), this actually is not a problem. Regardless of which server you bind to, your comms will be forwarded to the server that has your worker bee. This allows us to be stateful without needing a backplane to keep data in sync.
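As a rough sketch of that pattern (the hub, grain interface, and method names here are illustrative stand-ins, not Parchment’s actual API), whichever server a client happens to connect to simply forwards the call to the player’s grain, and Orleans routes it to the silo that currently hosts that “worker bee”:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Orleans;

// Illustrative grain interface: one "worker bee" per player, keyed by player id.
public interface IPlayerGrain : IGrainWithStringKey
{
    Task<string> HandleCommandAsync(string command);
}

// Illustrative SignalR hub. Whichever server the client binds to, the grain call
// below is routed by Orleans to the silo that actually holds the player's state,
// so no SignalR backplane is needed just to keep state consistent.
public sealed class GameHub : Hub
{
    private readonly IGrainFactory _grains;

    public GameHub(IGrainFactory grains) => _grains = grains;

    public async Task SendCommand(string playerId, string command)
    {
        var player = _grains.GetGrain<IPlayerGrain>(playerId);
        var result = await player.HandleCommandAsync(command);

        // Reply only to the caller's own connection on this server.
        await Clients.Caller.SendAsync("CommandResult", result);
    }
}
```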
So for now, this landmine remains one we haven’t stepped on. However, as the game scales out further, we may have to look at Azure SignalR Service, which can handle thousands of connections at once without bogging down the primary servers.