In order to natively build, run, test and benchmark the library, you will need the following:

```
Python3 >= 3.9.7
```

Note that we internally use the [xorf](https://github.com/ayazhafiz/xorf) library, but we modify it as seen [here](https://github.com/claucece/chalamet/tree/main/bff-modp).

To obtain our performance numbers as reported in Table 2 of our paper, we ran our benchmarks on AWS EC2 ``t2.2xlarge`` and ``c5.9xlarge`` machines.
## Quickstart
### Local
#### Benchmarking

There are several parameters that you can pass as flags to the `cargo bench` command in order to test the scheme.
These are (with their default values):

```
NUMBER_OF_ELEMENTS_EXP=16 (log2 of m, the number of rows of the DB)
LWE_DIMENSION=1774 (the LWE dimension)
ELEMENT_SIZE_BITS=8192 # 2**13 (the size of each element in bits)
PLAINTEXT_SIZE_EXP=10 (the size of each plaintext element; determines w, the size of the rows of the DB)
NUM_SHARDS=8 (the number of shards that the DB is split into)
DB=true (if the offline steps will be benchmarked; these steps are very slow)
KV=true (if you want to execute the keyword-based PIR rather than the index-based one)
```

These can also be found in the Makefile (lines 9-14).
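For example, assuming the Makefile forwards these values as environment variables to the benchmark run (see the Makefile, lines 9-14, for the actual wiring), an override might look like this sketch:

```
# Hypothetical override of the defaults: a smaller DB, offline steps skipped.
NUMBER_OF_ELEMENTS_EXP=14 DB=false KV=true make bench
```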

---

To run a simple benchmark (for a DB of 2^16 x 1024B) with offline steps, run the following (note that this process is slow; on average, it takes 12 minutes):
```
make bench
```

This command will execute client query benchmarks and database generation benchmarks (for more details, see the `benches/bench.rs` file).
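For orientation, the benchmarks in `benches/bench.rs` are registered through Criterion. The following is a minimal, self-contained sketch of that pattern; the stub workload and all names here are illustrative stand-ins, not the library's actual API:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Stub standing in for a real operation such as the PIR server response.
fn server_response_stub(query: &[u64]) -> u64 {
    query.iter().fold(0u64, |acc, x| acc.wrapping_add(*x))
}

fn bench_server_response(c: &mut Criterion) {
    // Setup runs outside the timed closure, mirroring how the DB and
    // public parameters are generated before measurements start.
    let query: Vec<u64> = (0..1u64 << 10).collect();
    c.bench_function("server response (stub)", |b| {
        b.iter(|| server_response_stub(black_box(&query)))
    });
}

criterion_group!(benches, bench_server_response);
criterion_main!(benches);
```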

---

To run all benchmarks as reported in lines 1-10 of Table 2 of our paper (note that this process is very slow; it takes around 30 minutes):
```
make bench-keyword-standard
```

This command will execute client query benchmarks and database generation benchmarks for log(m) = 16, 17, 18, 19 and 20, where m is the number of DB items. The results of these benchmarks can be found in Table 2 of our paper.
In order to see the results of the benchmarks, navigate to the `benchmarks-x-x.txt` file.

---

To run all benchmarks as reported in lines 11-13 of Table 2 and in Table 3 of our paper (note that this process is significantly slower):
```
make bench-keyword-all
```
In order to see the results of the benchmarks, navigate to the `benchmarks-x-x.txt` file.

In order to make the results of lines 11-13 of Table 2 and of Table 3 of our paper easier to reproduce, we have made available these three commands:
```
make bench-keyword-20
make bench-keyword-14
make bench-keyword-17
```

These commands omit any offline steps, and can be run independently for DBs of 2^20 x 256B, 2^14 x 100kB and 2^17 x 30kB, respectively.

---
In order to run the benchmarks for Table 4 (index-based PIR with FrodoPIR), one can run:
```
make bench-index-standard # for lines 1-10
make bench-index-all # for lines 11-13
```

Note that these commands take a long time, as they execute the offline steps as well.
One can run the following to omit the offline steps:
```
make bench-index-20
make bench-index-14
make bench-index-17
```

---

If all benches build and run correctly, you should see a `Finished ... benchmarks` message under them.
We use [Criterion](https://bheisler.github.io/criterion.rs/book/index.html) for benchmarking.
If you want to see explanations of the benchmarks, you can locally open `target/criterion/report/index.html` in your browser.

**Note**: When running the benches, a warning might appear: ``Warning: Unable to complete 10 samples in 100.0s. You may wish to increase target time to 486.6s.``. If you want to silence the warning, you can change line 30 of the `benches/bench.rs` file to 500 or more. Note that this will make the benches run slower.
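In Criterion, that target time is the measurement time configured for the benchmark group. As a rough sketch of the kind of change meant above (the names here are illustrative, not the exact contents of `benches/bench.rs`):

```rust
use std::time::Duration;
use criterion::{criterion_group, criterion_main, Criterion};

// Raising the measurement time gives Criterion enough budget to collect all
// samples, which silences the warning at the cost of longer runs.
fn long_measurement() -> Criterion {
    Criterion::default().measurement_time(Duration::from_secs(500))
}

fn bench_noop(c: &mut Criterion) {
    c.bench_function("noop", |b| b.iter(|| std::hint::black_box(1 + 1)));
}

criterion_group! {
    name = benches;
    config = long_measurement();
    targets = bench_noop
}
criterion_main!(benches);
```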
In order to interpret the `benchmarks-x-x.txt` files, we provide some guidance here:
First, we have the initial lines describing the parameters for the benchmark.
```
[KV] Starting benches for keyword PIR.
[KV] Setting up DB for benchmarking. This might take a while...
```

These lines describe the LWE parameters used for the PIR interaction; the `[KV]` tag shows that we are running the keyword PIR benchmarks. This part can take a while, as the database and the public parameters are being generated for the interaction. The output also states whether any offline steps are run; these can be omitted, as they are significantly slow.

Once this setup has completed, we see output describing the individual filter parameters for the filters being used; this informs us that the benchmarks will now be computed for each piece of functionality.
Each individual benchmark is then displayed in the following way.
```
Benchmarking lwe/[KV] server response, lwe_dim: 1774, matrix_height: 77824, omega: 820: Collecting 100 samples in estimated 5.2698 s (300 iterations)
lwe/[KV] server response, lwe_dim: 1774, matrix_height: 77824, omega: 820
time: [17.566 ms 17.670 ms 17.789 ms]
```

The middle time here is the average taken over the number of samples displayed. The benchmark in this case is "server response", which took 17.67 ms, and this is the value that we used in the paper.
In terms of Table 2, the key benchmarks are "create client query prepare" (Query), "server response" (Response), and "client parse server response" (Parsing), as these are the main online operations in the protocol.
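As a convenience, one way to pull those measurements out of a results file is a quick grep over the relevant lines; the filename below is a placeholder, so substitute the actual `benchmarks-x-x.txt` produced by your run:

```
# Print the name lines of the key benchmarks together with their time triples.
grep -E 'client query prepare|server response|time:' benchmarks-x-x.txt
```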