Xilinx FPGA Performance Evaluation for Vivado 2016.4 and Vivado 2017.1 using InTime.
Overall, we conducted three experiments pitting Vivado 2016.4 against Vivado 2017.1. Under our test conditions, Vivado 2017.1 achieved slightly better results in Experiment 1, while its Total Negative Slack (TNS) and Worst Slack were much better in Experiments 2 and 3.
Methods
For this performance test, we use a modified version of the Vivado example “cpu” design with the target device xc7k70tfbg676-2. The design’s constraints were modified to make it fail timing.
We compared the two toolchains using the following experiments:
- Experiment 1: InTime generates compilation settings for Vivado 2016.4. The same settings are then used to compile with Vivado 2017.1.
- Experiment 2: InTime generates compilation settings for Vivado 2017.1. The same settings are then used to compile with Vivado 2016.4.
Each experiment consists of 150 compilations per Vivado version. By default, InTime learns from past results and tweaks settings automatically, so to compare the toolchains fairly, we apply the same set of settings to both versions.
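As a concrete sketch of how identical settings can be applied to both Vivado versions, the snippet below renders a settings dictionary as a Vivado Tcl fragment that either toolchain can source in batch mode. The property names and the `impl_1` run name are illustrative assumptions, not the actual settings InTime generated.

```python
def settings_to_tcl(settings, run="impl_1"):
    """Render a dict of run properties as Vivado Tcl set_property commands.

    Sourcing the same generated script in both versions (e.g.
    `vivado -mode batch -source settings.tcl`) ensures each toolchain
    compiles with identical settings.
    """
    return "\n".join(
        f"set_property {prop} {value} [get_runs {run}]"
        for prop, value in settings.items()
    )

# Hypothetical settings; the real InTime-generated settings will differ.
example = {
    "STEPS.PLACE_DESIGN.ARGS.DIRECTIVE": "Explore",
    "STEPS.PHYS_OPT_DESIGN.IS_ENABLED": "true",
}
print(settings_to_tcl(example))
```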
Experiment 1
Please see the results of Experiment 1 below. The table lists basic statistics of the results, while the graphs show the Total Negative Slack (TNS) and Worst Slack values of each compilation for each toolchain. The best TNS values for the two toolchains are quite similar; for Worst Slack, however, Vivado 2017.1 performed better. The TNS and Worst Slack distributions for Vivado 2017.1 are also tighter and closer to zero than those for Vivado 2016.4, indicating that the newer toolchain is more stable and produces slightly better results than its predecessor.
From the graph, we can see that the poor results in Vivado 2016.4 (red line) improved in Vivado 2017.1 (blue line), while the good results in Vivado 2016.4 became somewhat worse in the newer version. Overall, Vivado 2017.1 performs just slightly better than 2016.4.
| | Best TNS (ns) | Worst TNS (ns) | TNS Distribution (ns) | Best Worst Slack (ns) | Worst Worst Slack (ns) | Worst Slack Distribution (ns) | Compilations with results | Compilations without results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vivado 2016.4 | -104.112 | -1043.770 | -373.469 ± 148.681 | -0.507 | -2.967 | -1.052 ± 0.287 | 126 | 24 |
| Vivado 2017.1 | -107.214 | -819.453 | -335.846 ± 110.423 | -0.513 | -1.772 | -1.013 ± 0.189 | 120 | 30 |
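The summary statistics in the table above (best, worst, mean ± standard deviation, and the counts of compilations with and without results) can be reproduced with a short script. The sample TNS values below are hypothetical, purely for illustration; a `None` entry stands for a compilation that produced no result.

```python
import statistics

def summarize(results):
    """Summarize a list of TNS (or Worst Slack) values in ns.

    None marks a compilation that produced no result; such entries
    are excluded from the statistics but counted separately.
    """
    ok = [r for r in results if r is not None]
    return {
        "best": max(ok),       # slack is negative, so closest to zero is best
        "worst": min(ok),
        "mean": statistics.mean(ok),
        "stdev": statistics.stdev(ok),
        "with_results": len(ok),
        "without_results": len(results) - len(ok),
    }

# Hypothetical TNS values (ns) from five compilations, one of which failed.
sample = [-104.1, -350.2, -512.9, None, -298.7]
stats = summarize(sample)
print(f"TNS distribution: {stats['mean']:.3f} ± {stats['stdev']:.3f}")
```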
Experiment 2
We report the results of Experiment 2 in the same format as Experiment 1. In this experiment, Vivado 2017.1 outperformed 2016.4: notice how the Vivado 2016.4 results (red line) tend to lie just below the 2017.1 results (blue line), indicating that the newer toolchain generally produces better results.
One might notice that the number of successful compilations in Experiment 2 is much higher than in Experiment 1. This suggests that the results produced by Vivado 2017.1 correlate more strongly with the settings, which allowed InTime to learn and avoid unsuccessful compilations.
| | Best TNS (ns) | Worst TNS (ns) | TNS Distribution (ns) | Best Worst Slack (ns) | Worst Worst Slack (ns) | Worst Slack Distribution (ns) | Compilations with results | Compilations without results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vivado 2016.4 | -170.174 | -1046.280 | -385.884 ± 140.445 | -0.551 | -2.150 | -1.048 ± 0.234 | 148 | 2 |
| Vivado 2017.1 | -93.804 | -763.664 | -350.166 ± 116.361 | -0.487 | -2.403 | -0.997 ± 0.209 | 147 | 3 |
Experiment 3, comparison using eight_bit_uc
In the third experiment, we repeated Experiment 1 using a design named “eight_bit_uc” (target device xc7k70tfbg484-2). The results below show the same trend as the previous two experiments: Vivado 2017.1 outperforms Vivado 2016.4 in almost all aspects.
| | Best TNS (ns) | Worst TNS (ns) | TNS Distribution (ns) | Best Worst Slack (ns) | Worst Worst Slack (ns) | Worst Slack Distribution (ns) | Compilations with results | Compilations without results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vivado 2016.4 | -135.998 | -177.347 | -150.798 ± 6.445 | -2.974 | -4.011 | -3.412 ± 0.196 | 146 | 4 |
| Vivado 2017.1 | -118.812 | -172.239 | -141.301 ± 11.471 | -2.613 | -4.093 | -3.234 ± 0.348 | 148 | 2 |
Conclusion - Vivado 2017.1, a better choice for Xilinx FPGA
In conclusion, Vivado 2017.1 achieves better results than Vivado 2016.4 for the designs in question. Although the timing results were fairly close in Experiment 1, the best TNS and Worst Slack achieved by Vivado 2017.1 were much better than those of its predecessor in Experiments 2 and 3.
Similar Article:
Intel FPGA Toolchain Comparison: Quartus 16.1 VS 17.0