Cloud and Software Architecture, Soft Skills, IoT and Embedded
Advent of Code 2023 -- A bootstrap article
Advent of Code is 25 days of coding problems, a kind of coding dojo with a daily exercise. You can use any language for any of the problems. The code isn't analyzed or shared. Only the results count. I know people working in Python, Dart, Haskell, Rust, and C#.
There are two problems per day, solved in order. The second problem only becomes visible once the first problem is solved. Each problem is driven by an input data set customized to your login session. The solution is often a number or some other small piece of data. You won't be able to copy someone else's answer.
Any scoring or tracking is up to you. You can create your own team leader board or just do the problems without tracking against anyone else.
First, sign up at https://adventofcode.com. The login session stays active for the life of the 25-day event, and your session cookie can be used programmatically for that entire time to fetch input data and post solutions. You can sign up even if the event has already started; there is no penalty for joining late other than your ranking on the leader board, which you may not care about anyway.
The daily problem workflow is
Go to the website to read today's problem.
Request today's input data set using your session cookie.
Solve the problem and submit your answer.
Finding your session cookie
Your session cookie stays valid for the entire 25-day event and rides along with every programmatic action, whether you are fetching input data or posting a solution.
Inspect the Advent of Code web page using your browser's inspect capability. The basic workflow is
Open your browser to the Advent of Code web page.
Right-click and select Inspect. This should pull up the inspector panel.
Select Network in the browser inspector's top pane.
Refresh the page to populate the Network panel.
Select the home page request, "2023" in this picture.
Select Headers in the browser inspector's bottom pane.
Find the cookie named session in the request headers.
Copy the value of that cookie. You will need it to get access to your data sets.
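The copied value is sent back as an ordinary Cookie header on every programmatic request. A minimal sketch in Python (the helper name and the placeholder session value are my own, not part of any official API):

```python
import urllib.request

def input_request(year: int, day: int, session: str) -> urllib.request.Request:
    """Build the request for a personalized input, carrying the session cookie."""
    url = f"https://adventofcode.com/{year}/day/{day}/input"
    return urllib.request.Request(url, headers={"Cookie": f"session={session}"})

# Nothing is sent yet; urllib.request.urlopen(req) would perform the download.
req = input_request(2023, 1, "PASTE_YOUR_SESSION_VALUE")
```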
Start coding
From Scratch
You can just download your input data set to your machine and then start hacking code.
With a Template in your language of choice
A lot of people use a template so that they can create a fresh working file every day for that day's problem. The template retrieves that day's data, writes the data to a file, and generates a skeleton program for the day. People like templates because they often come with basic data-manipulation dependencies and helper methods already included.
I am using a Dart template that I forked from a Dart template repository on GitHub; some of my 2023 work is in the 2023 branch. The template manages new Day-xx.dart files with hooks for timers and other metrics. You should be able to find a template in the language of your choice if you exercise your Google-Fu.
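Template repositories vary, but the core idea is to stamp out a fresh per-day source file from a stub. A hypothetical sketch in Python; the file layout and stub contents here are assumptions, not the Dart template's actual structure:

```python
from pathlib import Path

# Hypothetical stub contents; real templates ship their own boilerplate.
STUB = """\
# Advent of Code {year}, day {day}

def part1(lines):
    ...

def part2(lines):
    ...
"""

def scaffold_day(root: Path, year: int, day: int) -> Path:
    """Create day_NN.py from the stub, leaving an existing file untouched."""
    target = root / f"day_{day:02d}.py"
    if not target.exists():
        target.write_text(STUB.format(year=year, day=day))
    return target
```

Run once per morning, this gives you a predictable file name to open and a place for that day's input to land.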
Problems appear
Problems appear at fixed intervals, currently midnight ET. The interface is minimal. You will only see the title page if the problem isn't ready yet.
Looking at a Problem
Problems appear under the current year.
1. Click on the year in the upper left corner
1. You should see the number "1", or other day numbers, in the bottom right corner. Those are hyperlinks to each day's problems.
1. The highest number page may have a timer running. That is how long it will be until the next problem becomes available.
The problem definition
You can see the problem definition page once you click on that day's link. The picture to the right is a Day-1 problem.
There are two problems per day; the second appears only once the first is completed.
Each problem includes a definition and backstory, along with a tiny sample data set and the expected answer for that sample.
The problem to the right accepts a single number as the result.
The second problem is formatted the same way as the first.
Retrieving the Data - a Sample
You can use curl or your programming language of choice to download an input data set. I decided to use a template that includes code to download the input for use while writing and testing.
This is a snippet of Dart code that downloads the input. You can see that it has a very regular URL and expects the session cookie that we looked at above.
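A sketch of the same idea in Python, assuming the session cookie is stored in an environment variable; the AOC_SESSION variable name and function name are my own convention, but the regular URL pattern and the cookie are the important parts:

```python
import os
import urllib.request
from pathlib import Path

def download_input(year: int, day: int, dest: Path) -> Path:
    """Fetch one day's input with the session cookie, caching it on disk."""
    if not dest.exists():  # be polite: never re-download a cached input
        url = f"https://adventofcode.com/{year}/day/{day}/input"
        req = urllib.request.Request(
            url, headers={"Cookie": f"session={os.environ['AOC_SESSION']}"}
        )
        with urllib.request.urlopen(req) as resp:
            dest.write_bytes(resp.read())
    return dest
```

Caching matters: your input never changes, and re-downloading it on every run puts needless load on the Advent of Code server.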