Performance Testing Script
When developing Parabix applications, it is often useful to compare the performance of different algorithm choices. A flexible script for this purpose can be built using the `QA/perf_stat_runner.py` tool, which runs a particular program and its inputs with combinations of different algorithmic choices specified on the command line.

The Linux `perf` program is used to execute the program and collect performance measures, including instruction counts, cycle counts, and branch misses. Each combination of parameters is first run once to populate the object cache (thereby eliminating JIT compile time from subsequent runs) and to check that each variation produces the same result. The program is then run several times to obtain averaged measurements of the counters.
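The sketch below illustrates the general approach described above, assuming a runner that wraps each measured command in `perf stat`. It is not the actual interface of `QA/perf_stat_runner.py`; the event list, the `compare` helper, and the example program and options at the bottom are illustrative assumptions only.

```python
#!/usr/bin/env python3
# Minimal sketch of a perf-based comparison runner, in the spirit of
# QA/perf_stat_runner.py.  Option names and structure are assumptions.
import statistics
import subprocess

EVENTS = "instructions,cycles,branch-misses"

def perf_stat(cmd):
    """Run cmd under `perf stat` and return ({event: count}, stdout)."""
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", EVENTS] + cmd,
        capture_output=True, text=True, check=True)
    counts = {}
    # With -x, perf writes CSV counter lines to stderr: value,unit,event,...
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].strip().isdigit():
            counts[fields[2]] = int(fields[0])
    return counts, result.stdout

def compare(program, input_file, option_sets, repetitions=5):
    baseline_output = None
    for options in option_sets:
        cmd = [program] + list(options) + [input_file]
        # First run: populate the object cache (so JIT compile time is
        # excluded from measured runs) and check the output is unchanged.
        _, output = perf_stat(cmd)
        if baseline_output is None:
            baseline_output = output
        elif output != baseline_output:
            raise RuntimeError(f"output mismatch for options {options}")
        # Measured runs: average each counter over several repetitions.
        runs = [perf_stat(cmd)[0] for _ in range(repetitions)]
        averages = {ev: statistics.mean(r[ev] for r in runs) for ev in runs[0]}
        print(options, averages)

if __name__ == "__main__":
    # Hypothetical example: compare two algorithmic settings of a
    # Parabix application on one input file.
    compare("./bin/icgrep", "test.txt",
            [("-segment-size=16",), ("-segment-size=64",)])
```

Comparing averaged counters from `perf stat`, rather than a single timed run, reduces the influence of transient system noise when judging which algorithmic choice is faster.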