Expected behavior
When I try to optimize the parameters of the simple SMA cross example from the tutorials, but with custom 1-minute OHLC data covering one month (44,640 rows), the code below fails with TypeError: buffer is too small for requested array.
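To make the report self-contained, here is a synthetic stand-in for my data together with the SmaCross strategy from the quick-start tutorial (the prices are made up; any 1-minute OHLC frame of roughly 44,640 rows sets up the same scenario):

import numpy as np
import pandas as pd
from backtesting import Backtest, Strategy
from backtesting.lib import crossover
from backtesting.test import SMA

# Synthetic 1-minute OHLC data: one month = 44,640 minutes (made-up prices)
idx = pd.date_range("2025-01-01", periods=44_640, freq="1min")
close = 100 + np.cumsum(np.random.randn(len(idx)) * 0.05)
data = pd.DataFrame({"Open": close, "High": close + 0.1,
                     "Low": close - 0.1, "Close": close}, index=idx)

# SmaCross as defined in the quick-start tutorial
class SmaCross(Strategy):
    n1 = 10
    n2 = 20

    def init(self):
        self.sma1 = self.I(SMA, self.data.Close, self.n1)
        self.sma2 = self.I(SMA, self.data.Close, self.n2)

    def next(self):
        if crossover(self.sma1, self.sma2):
            self.buy()
        elif crossover(self.sma2, self.sma1):
            self.sell()

bt = Backtest(data, SmaCross, cash=10_000)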
# bt is the Backtest over the 1-minute data (see setup above)
stats = bt.optimize(
    n1=[10], n2=[20], maximize="Equity Final [$]",
    constraint=lambda param: param.n1 < param.n2,
)
bt.plot(plot_equity=False, plot_return=True)
print(stats)
TypeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 stats = bt.optimize(
2 n1=[10], n2=[20], maximize="Equity Final [$]", constraint=lambda param: param.n1 < param.n2
3 )
4 bt.plot(plot_equity=False, plot_return=True)
5 print(stats)
File ~/Coding/algotrading/.venv/lib/python3.12/site-packages/backtesting/backtesting.py:1630, in Backtest.optimize(self, maximize, method, max_tries, constraint, return_heatmap, return_optimization, random_state, **kwargs)
1627 return stats if len(output) == 1 else tuple(output)
1629 if method == 'grid':
-> 1630 output = _optimize_grid()
1631 elif method in ('sambo', 'skopt'):
1632 output = _optimize_sambo()
File ~/Coding/algotrading/.venv/lib/python3.12/site-packages/backtesting/backtesting.py:1527, in Backtest.optimize.<locals>._optimize_grid()
1524 shm_refs.append(shm)
1525 return shm.name, vals.shape, vals.dtype
-> 1527 data_shm = tuple((
1528 (column, *arr2shm(values))
1529 for column, values in chain([(Backtest._mp_task_INDEX_COL, self._data.index)],
1530 self._data.items())
1531 ))
1532 with patch(self, '_data', None):
...
-> 1521 buf = np.ndarray(vals.shape, dtype=vals.dtype, buffer=shm.buf)
1522 buf[:] = vals[:] # Copy into shared memory
1523 assert vals.ndim == 1, (vals.ndim, vals.shape, vals)
TypeError: buffer is too small for requested array
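For what it's worth, NumPy raises this exact message whenever the buffer handed to np.ndarray() holds fewer bytes than the requested shape and dtype need. A minimal sketch reproducing it with multiprocessing.shared_memory (the under-allocation is my assumption about what happens inside _optimize_grid()):

import numpy as np
from multiprocessing import shared_memory

vals = np.arange(44_640, dtype=np.float64)
# Deliberately allocate half the bytes the array needs
shm = shared_memory.SharedMemory(create=True, size=vals.nbytes // 2)
try:
    np.ndarray(vals.shape, dtype=vals.dtype, buffer=shm.buf)
except TypeError as e:
    print(e)  # -> buffer is too small for requested array
finally:
    shm.close()
    shm.unlink()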
As you can see, I removed the optimization ranges and passed a single value per parameter, yet it still fails. The plain backtest, bt.run(), works fine and completes in about 0.5 s. Does bt.optimize() run some kind of vectorized computation where the data array can end up too big for it to handle? Can I run the optimization sequentially instead?
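In the meantime, a sequential fallback like this seems to work (a sketch assuming the bt, n1 and n2 names from the sample above; Backtest.run() accepts strategy parameters as keyword arguments):

from itertools import product

# Evaluate the grid one run at a time, bypassing optimize()'s shared memory
best_stats, best_value = None, float("-inf")
for n1, n2 in product([10], [20]):      # widen these ranges as needed
    if not n1 < n2:                     # same constraint as in optimize()
        continue
    stats = bt.run(n1=n1, n2=n2)        # kwargs are passed to the strategy
    if stats["Equity Final [$]"] > best_value:
        best_stats, best_value = stats, stats["Equity Final [$]"]
print(best_stats)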
Code sample
See the snippet above.
Actual behavior
See the traceback above.
Additional info, steps to reproduce, full crash traceback, screenshots
No response
Software versions
backtesting==0.6.2