1. MNIST Hackathon

    1. MNIST Overview

      Overview

      Energy can be critical in edge devices. Systems that are battery powered or rely on harvested energy need to be as efficient as possible, which can make deploying inferencing on them challenging, because inferencing is notoriously power hungry. In this hackathon we focused on building an efficient inferencing accelerator. The winners were those who built the most efficient inferencing system, one that delivered predictions with the least energy per inference. Implementations had to meet strict performance, accuracy, and area requirements, too.

      Oh, and there was a time limit, 30 days to complete the masterpiece of efficient engineering.

  2. Leaderboard

    1. Leaderboard

      Accelerating Inferencing Using HLS Hackathon Leaderboard
  3. Leaderboard Insights

    Hackathon Podcast

    Catch Russell Klein and Cameron Villone on our hackathon podcast, which covered submissions, tips, and exclusive updates during the HLS 2025 Hackathon.

  4. The Algorithm

    1. The Algorithm

      We used the MNIST handwritten digit recognition algorithm. Sure, it’s old and it’s small, but it is the one Yann LeCun got started with (and he’s now the chief AI guy at Meta).

      We picked this because it's small enough to retrain in a few minutes, it's practical to run in logic simulation, and it can be characterized quickly.
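      To make the workload concrete, here is a minimal sketch of the kind of computation at the heart of MNIST inference: a fully connected layer with 8-bit weights, which is the multiply-accumulate pattern an accelerator is built to do cheaply. This is an illustration only; the shapes, dtypes, and random weights below are hypothetical, not the hackathon's reference model.

```python
import numpy as np

def dense_relu_int8(x, weights, bias):
    """One fully connected layer with ReLU, using 8-bit weights.

    Each output is a dot product of the input pixels with one weight
    row: the multiply-accumulate workload that dominates MNIST
    inference and that an accelerator speeds up.
    """
    acc = weights.astype(np.int32) @ x.astype(np.int32) + bias
    return np.maximum(acc, 0)  # ReLU clamps negatives to zero

# Toy shapes: a 784-pixel image mapped to a 10-class score vector.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=784, dtype=np.uint8)
w = rng.integers(-128, 128, size=(10, 784), dtype=np.int8)
b = rng.integers(-1000, 1000, size=10, dtype=np.int32)

scores = dense_relu_int8(image, w, b)
predicted_digit = int(np.argmax(scores))
```

      Accumulating in 32 bits while storing weights in 8 bits is a common quantization choice, and it is one reason a dedicated datapath can be so much cheaper than general-purpose floating point.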

  5. The Starting Line

    1. The Starting Line

      Participants were given a virtual machine equipped with all the tools and IP needed to build and characterize an ASIC implementation of their inferencing accelerator, courtesy of Siemens EDA. They started with a RocketCore RISC-V design and a bare-metal application that runs the MNIST algorithm. The job was to make the inference run faster than any software implementation could possibly go, all while your design sipped a tiny amount of energy to get the job done. It was a unique opportunity to flex your creativity, if you were up for the challenge.

  6. Objectives Title

    Objectives

  7. Accuracy

    1. Objective 1

      First and foremost was accuracy. Your implementation needed to correctly recognize 95% of the images from the MNIST database. Any less accurate and, sorry, you didn't make the cut.

    2. Accuracy Image
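      The 95% bar boils down to a pass/fail check over labeled test images. A minimal sketch of that check, with hypothetical function and variable names:

```python
def passes_accuracy_bar(predictions, labels, threshold=0.95):
    """True if the fraction of correct predictions meets the bar."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels) >= threshold

# Toy run: 96 of 100 correct clears the 95% bar.
preds_good = [7] * 96 + [0] * 4
labels = [7] * 100
qualified = passes_accuracy_bar(preds_good, labels)
```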

  8. Performance

    1. Objective 2

      Second was performance. No slowpokes allowed. Your inference had to complete in less than 20 milliseconds.

    2. Performance Image

  9. Area

    1. Objective 3

      Third, there was only so much space on the die. Whatever you built, it needed to fit into 1,000,000 square microns, based on the ASIC library provided. This included the area used by the accelerators and the memory needed to hold the weights and any intermediate values.

    2. Area Image

  10. Energy consumption

    1. Objective 4

      Finally, if your design was accurate enough, fast enough, and small enough, then our panel of esteemed judges evaluated it. No style points here, though. There’s just one thing we looked at, and that is energy per inference. Remember your physics? Energy is power times time. We measured the time it takes the inference to complete multiplied by the sum of the dynamic power and the leakage power, averaged over 10 inferences.


    2. Energy Consumption Image

  11. Objectives Conclusion

    Was your strategy to go as fast as possible? Or did you want your power to be as low as possible, even if you moseyed through the calculations? Or perhaps the middle of the road, kinda fast and kinda low power? Check the leaderboard up top to see how the designs compared.

  12. Winning Criteria Recap

    Winning Criteria Recap

    The winning design was the one that met the accuracy, performance, and area criteria, and consumed the least average energy per inference. Participants used PowerPro from Siemens EDA to measure the power of their combined hardware and software system.

  13. Prizes Awarded

    All participants who completed the hackathon with a valid submission received a badge suitable for promoting their engineering genius on LinkedIn. And if you made it to the top 3, have you posted on LinkedIn yet that you won the High-Level Synthesis low-energy inferencing hackathon?


    🥇The first-place winner got an Elegoo Neptune 3D printer, and an opportunity to speak at the Edge AI Foundation’s Fall Taipei event (physically or virtually). Literally, and we literally mean this “literally,” fame and fortune.


    🥈The second-place winner got a Pynq FPGA development board from Digilent, to hone their skills for next year’s competition. The Pynq board combines an Arm processor and Python-based AI development with FPGA fabric from AMD.


    🥉The third-place winner can’t hear you because they’re enjoying their Bose QuietComfort Earbuds. They're likely lost in a blissful bubble of iconic audio, completely undisturbed by mere mortals thanks to that renowned noise cancellation. With a relentless, long-lasting battery powering their escape, all neatly packed into a compact, durable design, who can blame them? If you're feeling a sudden pang of 'earbud envy,' we totally get it!



    Make sure to check HLS Academy often; you might be the one tuning out the world (and your colleagues) at our next hackathon. 😉