While we wait for the 2019 release of Vivado, we realized that we missed our annual Vivado comparison post. So here is the Vivado 2017.4 versus 2018.3 edition. For the previous year's results, please click here.
The methods are the same as before. We use a modified version of the Vivado example "CPU" design and the example design "eight_bit_uc" that comes with InTime. Each design's constraints were modified to make it fail timing.
Each experiment consists of 150 compilations per Vivado version. By default, InTime learns from past results and tweaks settings automatically, so to keep the comparison fair, we used the exact same settings for each Vivado version.
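As a rough sketch of how the "better results" percentages below are tallied, the snippet pairs up per-build slack values from two Vivado versions and counts how often one beats the other. This is an illustrative helper, not InTime's actual code; the data format (plain lists of TNS or WNS values in ns, where less negative is better) is an assumption.

```python
# Hypothetical sketch of the comparison: given one slack value per build for
# each Vivado version, count how often version B is better than version A.
# Slack values (TNS or WNS) are negative; closer to zero is better.

def compare_slacks(results_a, results_b):
    """Pair up builds and summarize how version B fares against version A."""
    pairs = list(zip(results_a, results_b))
    better_b = sum(1 for a, b in pairs if b > a)  # less negative slack wins
    return {
        "better_b": better_b,
        "better_b_pct": round(100 * better_b / len(pairs), 1),
        "best_a": max(results_a),  # best (least negative) slack for A
        "best_b": max(results_b),  # best (least negative) slack for B
    }

# Toy example with made-up TNS values in ns (not the real experiment data):
tns_2017 = [-52.9, -60.1, -55.0]
tns_2018 = [-50.0, -61.2, -54.3]
print(compare_slacks(tns_2017, tns_2018))
```

With real data, `results_a` and `results_b` would each hold 150 values, one per compilation.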
Experiment 1: “Eight_bit_uc” design
Here are the results. Since the slack values are negative, any orange dot above the grey line indicates a better result than the 2017.4 toolchain.
From the charts, the two versions look evenly matched. Referring to the statistics:
- 2018.3 has 61 better TNS results out of 150. That is 40.7%.
- 2018.3 has 89 better WNS results out of 150. That is 59.3%.
- The best TNS goes to 2017.4, with about a 3 ns difference (2017.4: -49.997 versus 2018.3: -52.932).
- The best WNS goes to 2017.4, with a 70 ps difference (2017.4: -0.6 versus 2018.3: -0.67).
Experiment 2: “CPU” design
For this design, there are only 133 results, as 17 builds errored out.
- 2018.3 has 54 better TNS and WNS results out of 133. That is 40.6%, almost exactly the same percentage as the previous design (coincidence?).
- The best TNS goes to 2017.4, with about a 6 ns difference (2017.4: -46.6 versus 2018.3: -52.786).
- For WNS, the result is a tie: the best for both versions is -1.227.
On the whole, 2017.4 appears to have an edge in TNS for these test designs, while 2018.3 seems better for WNS. That's it for this edition. If you have any questions, feel free to comment below.
Similar Article :
Xilinx FPGA Toolchain Comparison: Vivado 2016.4 VS Vivado 2017.1