Hackathon questions and issues

I saw that you posted your submission before I was able to respond. So, yes, we will allow you to amend that submission. We will contact you offline.

I’ve not seen this happen before. This function has been called many thousands of times in this program, so it must be something specific to this call. It can hang if the address being referenced is outside the address map in the select logic; if that is happening, you will see an “addressing error” message in the logic simulation transcript.

If I am reading this correctly, the reference is in the softmax function call. The time consumed by the softmax function is minimal (a rounding error in the power consumed), and softmax is the final function performed. So you can sum the times up to that point and take that as the time needed to compute the prediction.

Can you send me your neural network definition? Send it to Russell.klein@siemens.com. I will see if I can reproduce the problem.

To keep you progressing, just assume that this function took zero time. And look at the sum of the times to that point.

Thanks, Russ

The rule states that “area used by the accelerators and the memory needed to hold the weights and any intermediate values”.

Could you clarify how the area will be counted? I assume that for each accelerator we count the total area reported by Catapult after the “extract” stage. But what about the memory?

Thanks.

Hello.

I have run into the same issue. Has there been a solution or should I just stop the simulation and continue with the results at the time of the stall?

Hi Arisg,

Your inference time will be very close to the time that the last layer reported completing, so you can just use this.

If you want to run the simulation to completion, the bug is in the file catapult_conv2d.cpp in the cpp directory. On line 344 is the definition of the variable “input_image_mem”. This is an array with a first dimension hard-coded to 10, and one or more of your convolution layers has more than 10 channels. This dimension needs to be set to the greatest number of channels in any of your convolutional layers. Or you can just set it to 200 (or some other value comfortably larger than the largest channel count you might use).

I have fixed this, but rolling it out means that everyone’s VM will be reset, causing them to lose all work in progress. And I don’t want to do that.

Regards, Russ

Hi Arisg,

Welcome to the Hackathon!

The area score for memory will be 0.4 square microns per bit needed for storing weights, features, and intermediate results.

If you look at the end of the file memory.h in the include directory, there is a define for “MEMORY_SIZE”. This is the total memory needed, in words. Multiply this by the “WORD_SIZE” you used for quantizing the network (the default is 32, if you did not change it). Then multiply that by 0.4 to get the area consumed by memory.

Let me know if you have further questions.

Regards, Russ

I’m having trouble partitioning my memory for aggressive pipelining.

E.g., what’s the right syntax and placement of the pragma to achieve this?


    hw_cat_type bias_values[500];
    #pragma hls_array_partition variable = bias_values cyclic factor = BUS_STRIDE dim = 1

Is it before or after the variable declaration?

No matter what I do, I get a warning from Catapult that the pragma wasn’t attached to any variable.

Thanks!

Hi ornulu,

In most cases you will want to put the pragma immediately before the array declaration.

But you should not need to edit this at all, as memory accesses should appear back-to-back on the AXI bus.

Would you be able to provide the warning/error message you are getting?

-Cameron

Hi ornulu,

Also, I forgot to add this in the first response: the proper syntax for interleaving in Catapult is #pragma hls_interleave BUS_STRIDE, with no cyclic factor.

but please take a look at the previous comment first.

Thank you,

Cameron

Thanks!

The warning I was getting was: “Cannot bind pragma ‘hls_array_partition’ to any valid construct. Please check if a valid construct follows the pragma”.

Like you said, maybe it’s not required for pipelining; I can’t get II=1 to work, so I was wondering if this would be a path forward.

Also, I’m nearing the end of my 1-month ODT promo code, but the contest runs until the 31st. I’m guessing I’ll lose access to the labs after my trial and won’t be able to work with Catapult?

I was trying to profile the power usage but ran into this error (memories not defined as black boxes).

Did I miss anything?

Hi,

For the power profile, which accelerator are you trying to run? The power analysis needs to be run on each accelerator individually, not over the whole design.

I would take a closer look at the design parameters and make sure the edits you are making make sense for the overall process. It’s very easy to get lost in the woods of optimization!

The competition is built around a fixed 30 days of tool access within a 3-month period. You can still submit your entry after the ODT runs out, but the original license you are using will likely have expired.

Hi,
I was using it on the dense accelerator only. Will try to submit my design before my license expires :crossed_fingers:

Hello,

I submitted my entry on October 27th at 18:11 (EET).
I have not seen my entry listed in the results or received any update regarding it.
Could you please confirm whether my submission was received successfully and if there are any updates available on the results?

Thanks in advance,
Aris Ilias Goutis


Hello!

Your submission was successfully received and is currently under review. Keep an eye out on the main Hackathon page, the leaderboard will be updated soon!

Thank you for your patience and participation.