
v1.4.0

@github-actions released this 04 Dec 11:46 · commit 255854d

BenchmarkTools v1.4.0

Diff since v1.3.2

Merged pull requests:

Closed issues:

  • tune! overrides evals=1, causing errors in destructive benchmarks (#24; see the sketch below the list)
  • judge working with memory (#70)
  • Replace markdown manual/reference documentation with actual docstrings + Documenter.jl (#133)
  • Change median and mean export, or update documentation (#146)
  • Request: @bprofile macro for profiling (#169)
  • Change return or print units (#170)
  • @btime not printing as expected in a loop in Atom (#184)
  • No stable documentation (only dev) (#219)
  • @benchmarkset seems to be missing cases (#221)
  • Incorrect(?) bar heights and plot labels from @benchmark (#246)
  • Spurious performance penalty for single-element Union (#262)
  • @btime errors because tune! does not execute setup (#264)
  • tune! ignores evals=1 of @benchmarkable (#266)
  • Number of significant digits in @btime seems to be an overkill (#288)
  • How do you save benchmark results? (#289)
  • Silence warnings when using @benchmark (#295)
  • Tuning ignores explicit evals parameter (#297)
  • Compiler optimization example outdated? (#298)
  • Unexpected Behaviour of @btime - cubic splines evaluation (#299)
  • @btime has constant runtime whereas @time is dependent on input (#301)
  • Automatically create keys in BenchmarkGroup (#308)
  • Re-edit docs for multiple setup (#315)
  • Add logo? (#316)
  • tune! on benchmarkable with evals set (#328)
  • Removing leaves export (#332)
  • Remember keyword parameters for tune! (#337)
  • [Feature Request] Comparing two functions (#338)
  • Feature request: asynchronously build a benchmark group (#345)
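
Several of the closed issues above (#24, #264, #266, #297, #328) concern tune! interacting badly with an explicit evals=1 on destructive benchmarks. The following is a minimal sketch of that pattern using the public @benchmarkable/tune!/run API; the pop! workload is an illustrative assumption, not taken from any of the linked issues.

```julia
using BenchmarkTools

# A "destructive" benchmark: each evaluation consumes the state created in setup,
# so it can only run once per sample and needs an explicit evals=1.
b = @benchmarkable pop!(v) setup=(v = collect(1:1_000)) evals=1

# The issues listed above report tune! ignoring or overriding this explicit
# evals=1 (and, in #264, not executing setup); they are closed as of this release.
tune!(b)

results = run(b)
display(results)
```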