Vivado 2016.4 and Vivado 2017.1 comparison

In this blog post we evaluate the performance of Vivado 2016.4 and Vivado 2017.1 using InTime.

Overall, we conducted three experiments pitting 2016.4 against 2017.1. Under our test conditions, Vivado 2017.1 achieves better results, although the difference was slight in Experiment 1. Total Negative Slack (TNS) and Worst Slack in Vivado 2017.1 were much better in Experiments 2 and 3.

Methods

For this performance test, we use a modified version of the Vivado example “cpu” design with the target device: xc7k70tfbg676-2. The design’s constraints were modified to make it fail timing.

We compare the two toolchains using the following two experiments:

  1. InTime generates compilation settings for Vivado 2016.4; the same settings are then compiled with Vivado 2017.1.
  2. InTime generates compilation settings for Vivado 2017.1; the same settings are then compiled with Vivado 2016.4.

Each experiment consisted of 150 compilations per Vivado version. By default, InTime learns from past results and tweaks settings automatically, so this learning also influences the results when testing these toolchains. By running the same experiment twice, once per toolchain, one can analyze how well, or how poorly, InTime learns from each toolchain.
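The key requirement of this methodology is that the exact same settings are compiled by both Vivado versions. The sketch below illustrates one way to do that: render a set of run properties as Tcl so the identical script can be sourced by either version. This is a minimal illustration, not InTime's actual implementation, and the property names and directive values shown are illustrative examples of the kind of knobs being explored, not the settings InTime actually generated.

```python
# Sketch: render one set of implementation settings as Vivado Tcl so the
# identical settings can be compiled by two different Vivado versions.
# The example settings below are illustrative, not InTime's output.

def settings_to_tcl(settings):
    """Turn a {property: value} dict into set_property Tcl commands
    targeting the default implementation run (impl_1)."""
    lines = []
    for prop, value in sorted(settings.items()):
        lines.append("set_property %s %s [get_runs impl_1]" % (prop, value))
    return "\n".join(lines)

example = {
    "STEPS.OPT_DESIGN.ARGS.DIRECTIVE": "Explore",
    "STEPS.PLACE_DESIGN.ARGS.DIRECTIVE": "ExtraTimingOpt",
}
print(settings_to_tcl(example))
```

Because the Tcl text is generated once and reused verbatim, any difference in results between the two toolchains can be attributed to the tools themselves rather than to the settings.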

Experiment 1

The results of experiment 1 can be found below. The table lists some basic statistics of the results, while the graphs show the TNS and Worst Slack values of each compilation for each toolchain. The best TNS values of the two toolchains are actually quite similar, but for Worst Slack, Vivado 2017.1 performed better. The distribution of TNS and Worst Slack for Vivado 2017.1 is also narrower and less varied than for Vivado 2016.4, indicating that the newer toolchain is more stable and produces slightly better results than its previous version.

Looking at the graph, we can tell that the bad results in Vivado 2016.4 (red line) have actually improved in Vivado 2017.1 (blue line). On the other hand, the good results in Vivado 2016.4 actually became worse in the newer version. Overall, it seems that the performance of Vivado 2017.1 is just slightly better than 2016.4.

[Graphs: TNS and Worst Slack per compilation, Vivado 2016.4 (red) vs Vivado 2017.1 (blue)]

| | Best TNS | Worst TNS | TNS Distribution | Best Worst Slack | Worst Worst Slack | Worst Slack Distribution | Compilations with results | Compilations without results |
|---|---|---|---|---|---|---|---|---|
| Vivado 2016.4 | -104.112 | -1043.770 | -373.469 ± 148.681 | -0.507 | -2.967 | -1.052 ± 0.287 | 126 | 24 |
| Vivado 2017.1 | -107.214 | -819.453 | -335.846 ± 110.423 | -0.513 | -1.772 | -1.013 ± 0.189 | 120 | 30 |

Data: WS_cpu_method1_2016.4vs2017.1.csv, TNS_cpu_method1_2016.4vs2017.1.csv
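The per-toolchain statistics reported in the tables (best, worst, mean ± standard deviation, and success counts) can be reproduced from the raw per-compilation values with a short script. The sketch below assumes the slack values are already loaded into a list, with `None` marking a compilation that produced no result; the actual CSV layout of the published data files may differ.

```python
# Sketch: summarize per-compilation TNS (or Worst Slack) values the way the
# tables in this post do. None marks a compilation without a result.
import math

def summarize(values):
    ok = [v for v in values if v is not None]
    mean = sum(ok) / len(ok)
    # Population standard deviation, shown as "mean ± std" in the tables.
    std = math.sqrt(sum((v - mean) ** 2 for v in ok) / len(ok))
    return {
        "best": max(ok),   # slacks are negative, so closer to 0 is better
        "worst": min(ok),
        "mean": mean,
        "std": std,
        "with_results": len(ok),
        "without_results": len(values) - len(ok),
    }

stats = summarize([-104.1, -373.5, -1043.8, None, -250.0])
print("%.3f ± %.3f" % (stats["mean"], stats["std"]))
```

Note that because TNS and Worst Slack are negative when timing fails, the "best" entry is the maximum (closest to zero) and the "worst" entry is the minimum.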

Experiment 2

The results of experiment 2 are reported in a similar format as experiment 1. In this experiment, Vivado 2017.1 outperformed 2016.4. Notice how the results of Vivado 2016.4 (red line) tend to lie just below the 2017.1 results (blue line), indicating that the newer toolchain generally produces better results.

[Graphs: TNS and Worst Slack per compilation for experiment 2, Vivado 2016.4 (red) vs Vivado 2017.1 (blue)]

One might notice that the number of successful compilations in experiment 2 is much higher than in experiment 1. This suggests that the results produced by Vivado 2017.1 correlate more strongly with the settings, which allowed InTime to learn and avoid unsuccessful compilations.


| | Best TNS | Worst TNS | TNS Distribution | Best Worst Slack | Worst Worst Slack | Worst Slack Distribution | Compilations with results | Compilations without results |
|---|---|---|---|---|---|---|---|---|
| Vivado 2016.4 | -170.174 | -1046.280 | -385.884 ± 140.445 | -0.551 | -2.150 | -1.048 ± 0.234 | 148 | 2 |
| Vivado 2017.1 | -93.804 | -763.664 | -350.166 ± 116.361 | -0.487 | -2.403 | -0.997 ± 0.209 | 147 | 3 |

Data: WS_cpu_method2_2016.4vs2017.1.csv, TNS_cpu_method2_2016.4vs2017.1.csv

Experiment 3: the eight_bit_uc design

In the third experiment, we repeated Experiment 1 using a design named "eight_bit_uc" (target device xc7k70tfbg484-2). The results are reported below and show the same trend as the previous two experiments: Vivado 2017.1 outperforms Vivado 2016.4 in almost all aspects.


| | Best TNS | Worst TNS | TNS Distribution | Best Worst Slack | Worst Worst Slack | Worst Slack Distribution | Compilations with results | Compilations without results |
|---|---|---|---|---|---|---|---|---|
| Vivado 2016.4 | -135.998 | -177.347 | -150.798 ± 6.445 | -2.974 | -4.011 | -3.412 ± 0.196 | 146 | 4 |
| Vivado 2017.1 | -118.812 | -172.239 | -141.301 ± 11.471 | -2.613 | -4.093 | -3.234 ± 0.348 | 148 | 2 |

Data: WS_eight_bit_uc_method1_2016.4vs2017.1.csv, TNS_eight_bit_uc_method1_2016.4vs2017.1.csv

Conclusion

In conclusion, Vivado 2017.1 achieves better results than Vivado 2016.4 for the designs in question. Although the timing results were somewhat close in Experiment 1, the best TNS and Worst Slack achieved by Vivado 2017.1 were much better than those of its predecessor in Experiments 2 and 3.
