# Performance Testing Script
... inputs together with combinations of different algorithmic choices specified
on the command line. The Linux ```perf``` program is used to execute
the program and collect performance measures including instruction counts, cycle counts and branch misses. Each combination of performance parameters is run once in order to populate the object cache (and so eliminate the JIT compile time from further runs), as well as to check that each variation produces the same result. Then the program is run several times to obtain averaged measurements of the counters.
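This protocol can be sketched directly in terms of ```perf stat```. The following is a minimal illustration only, not the actual ```perf_stat_runner``` implementation: one warm-up run to populate the object cache, then repeated measured runs whose counter values are averaged (```perf stat -r N``` offers built-in repetition as an alternative). The function name and event list are assumptions for illustration.
```
# Hypothetical sketch only: a warm-up run followed by repeated,
# averaged perf stat measurements. Not the actual
# perf_stat_runner implementation.
import statistics
import subprocess

def measure(cmd, events=("instructions", "cycles", "branch-misses"), runs=5):
    # Warm-up run: populates the object cache so JIT compile time
    # is excluded from the measured runs.
    subprocess.run(cmd, check=True, capture_output=True)
    samples = {e: [] for e in events}
    for _ in range(runs):
        # perf stat -x, prints CSV lines (value,unit,event,...) on stderr.
        result = subprocess.run(
            ["perf", "stat", "-x", ",", "-e", ",".join(events)] + list(cmd),
            check=True, capture_output=True, text=True)
        for line in result.stderr.splitlines():
            fields = line.split(",")
            if len(fields) >= 3:
                name = fields[2].split(":")[0]  # strip modifiers like :u
                if name in samples:
                    try:
                        samples[name].append(float(fields[0]))
                    except ValueError:
                        pass  # e.g. "<not counted>"
    return {e: statistics.mean(v) for e, v in samples.items()}
```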
Here is a sample script using ```perf_stat_runner```.
```
# NFD_perf.py
from perf_stat_runner import *

if __name__ == '__main__':
    # Program under test: the nfd binary from the build tree.
    tester = PerformanceTester("../build16/bin/nfd")
    # Positional input file used for every run.
    tester.addPositionalParameter("input", ["/home/cameron/Wikibooks/wiki-books-all.xml"])
    # Performance keys: every combination of these values is tested.
    tester.addPerformanceKey("--ByteMerging", ["0", "1"])
    tester.addPerformanceKey("--ByteReplace", ["0", "1"])
    tester.addPerformanceKey("--LateU21", ["0", "1"])
    tester.addPerformanceKey("--thread-num", ["1"])
    # Collect averaged perf counters into a CSV file.
    tester.run_tests("nfd-stats.csv")
```
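Assuming ```perf_stat_runner.py``` is importable (for example, placed in the same directory), the script is run directly with ```python3 NFD_perf.py```. The three binary keys above give 2 × 2 × 2 = 8 combinations, each run with a single thread.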
It produces performance results in the ```nfd-stats.csv``` file, with
one row for each combination of the performance keys and columns for instruction counts, cycle counts and branch data.
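For quick inspection of the results, a few lines of Python can rank the key combinations by a chosen counter. This is a hypothetical post-processing step; the column named ```cycles``` is an assumption and may differ from what ```perf_stat_runner``` actually writes.
```
# Hypothetical post-processing of nfd-stats.csv; the "cycles" column
# name is an assumption about the CSV layout.
import csv

with open("nfd-stats.csv") as f:
    rows = list(csv.DictReader(f))

# Sort key combinations by averaged cycle count, fastest first.
rows.sort(key=lambda r: float(r["cycles"]))
for row in rows:
    print(row)
```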