This repository was archived by the owner on May 3, 2024. It is now read-only.

Commit 244bc77 (2 parents: 31f56e4 + 67f25e6)

Fixed formatting issues in files

Signed-off-by: Pratik Patil <[email protected]>

File tree: 11 files changed, +115 −107 lines

.xperior/testds/motr-single_tests.yaml

Lines changed: 8 additions & 1 deletion
@@ -490,7 +490,14 @@ Tests:
 - id : 37protocol
   script : 'm0 run-st 37protocol'
   dir : src/scripts
-  executor : Xperior::Executor::MotrTest
+  # This ST confirms that no BE structures are changed
+  # in the new version of the code since that would
+  # create a mismatch of BE structures and thus cause
+  # corruption in case of upgrades. During development
+  # phase the BE structures are assumed to be modified
+  # and to avoid this ST from failing it is better to disable
+  # the ST until a GA release is done.
+  executor : Xperior::Executor::Skip
   sandbox : /var/motr/root/sandbox.st-37protocol
   groupname: 01motr-single-node
   polltime : 30

doc/HLD-FOP-State-Machine.md

Lines changed: 6 additions & 6 deletions
@@ -23,7 +23,7 @@ See [4] and [5] for the description of fop architecture.
 * fop state machine (fom) is a state machine [6] that represents the current state of the fop's [r.fop]ST execution on a node. fom is associated with the particular fop and implicitly includes this fop as part of its state.
 * a fom state transition is executed by a handler thread[r.lib.threads]. The association between the fom and the handler thread is short-living: a different handler thread can be selected to execute the next state transition.
 
-## Requirements
+## Requirements
 * `[r.non-blocking.few-threads]` : Motr service should use a relatively small number of threads: a few per processor [r.lib.processors].
 * `[r.non-blocking.easy]`: non-blocking infrastructure should be easy to use and non-intrusive.
 * `[r.non-blocking.extensibility]`: addition of new "cross-cut" functionality (e.g., logging, reporting) potentially including blocking points and affecting multiple fop types should not require extensive changes to the data structures for each fop type involved.
@@ -35,7 +35,7 @@ See [4] and [5] for the description of fop architecture.
 ## Design Highlights
 A set of data structures similar to one maintained by a typical thread or process scheduler in an operating system kernel (or a user-level library thread package) is used for non-blocking fop processing: prioritized run-queues of fom-s ready for the next state transition and wait-queues of fom-s parked waiting for events to happen.
 
-## Functional Specification ##
+## Functional Specification
 A fop belongs to a fop type. Similarly, a fom belongs to a fom type. The latter is part of the corresponding fop type. fom type specifies machine states as well as its transition function. A mandatory part of fom state is a phase, indicating how far the fop processing progressed. Each fom goes through standard phases, described in [7], as well as some fop-type specific phases.
 
 The fop-type implementation provides an enumeration of non-standard phases and state-transition function for the fom.
@@ -121,13 +121,13 @@ The network request scheduler (NRS) has its queue of fop-s waiting for the execu
 ## Security Model
 Security checks (authorization and authentication) are done in one of the standards fom phases (see [7]).
 
-## Refinement ##
+## Refinement
 The data structures, their relationships, concurrency control, and liveness issues follow quite straightforwardly from the logical specification above.
 
-## State ##
+## State
 See [7] for the description of fom state machine.
 
-## Use Cases ##
+## Use Cases
 
 **Scenarios**
 
@@ -183,7 +183,7 @@ Scenario 4
 |Response| handler threads wait on a per-locality condition variable until the locality run-queue is non-empty again. |
 |Response Measure|
 
-## Failures ##
+## Failures
 - Failure of a fom state transition: this lands fom in the standard FAILED phase;
 - Dead-lock: dealing with the dead-lock (including ones involving activity in multiple address spaces) is outside of the scope of the present design. It is assumed that general mechanisms of dead-lock avoidance (resource ordering, &c.) are used.
 - Time-out: if a fom is staying on the wait-list for too long, it is forced into the FAILED state.
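Scenario 4 in the diff above describes handler threads blocking on a per-locality condition variable until the locality run-queue becomes non-empty. A minimal Python sketch of that response, purely illustrative (the `Locality` class and method names are assumptions, not Motr's actual C API):

```python
import threading
from collections import deque

class Locality:
    """Illustrative per-locality run-queue; hypothetical names, not Motr code."""

    def __init__(self):
        self.runq = deque()                    # fom-s ready for the next state transition
        self.nonempty = threading.Condition()  # per-locality condition variable

    def enqueue(self, fom):
        # An event makes a fom runnable; wake one parked handler thread.
        with self.nonempty:
            self.runq.append(fom)
            self.nonempty.notify()

    def dequeue(self):
        # A handler thread parks here while the run-queue is empty (Scenario 4),
        # and resumes when enqueue() signals that the queue is non-empty again.
        with self.nonempty:
            while not self.runq:
                self.nonempty.wait()
            return self.runq.popleft()
```

The wait is wrapped in a `while` loop so a spurious or raced wakeup re-checks the queue, which is the standard condition-variable idiom the scenario implies.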

doc/HLD-Resource-Management-Interface.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ Motr functionality, both internal and external, is often specified in terms of r
 - `[r.resource.power]`: (electrical) power consumed by a device is a resource.
 
 
-## Design Highlights ##
+## Design Highlights ##
 - hierarchical resource names. Resource name assignment can be simplified by introducing variable length resource identifiers.
 - conflict-free schedules: no observable conflicts. Before a resource usage credit is canceled, the owner must re-integrate all changes made to the local copy of the resource. Conflicting usage credits can be granted only after all changes are re-integrated. Yet, the ordering between actual re-integration network requests and cancellation requests can be arbitrary, subject to server-side NRS policy.
 - resource management code is split into two parts:
- resource management code is split into two parts:

doc/HLD-of-FOL.md

Lines changed: 18 additions & 18 deletions
@@ -19,7 +19,7 @@ A FOL is a central M0 data structure, maintained by every node where the M0 core
 
 Roughly speaking, a FOL is a partially ordered collection of FOL records, each corresponding to (part of) a consistent modification of the file system state. A FOL record contains information determining the durability of the modification (how many volatile and persistent copies it has and where etc.) and dependencies between modifications, among other things. When a client node has to modify a file system state to serve a system call from a user, it places a record in its (possibly volatile) FOL. The record keeps track of operation state: has it been re-integrated to servers, has it been committed on the servers, etc. A server, on receiving a request to execute an update on a client's behalf, inserts a record, describing the request into its FOL. Eventually, FOL is purged to reclaim storage, culling some of the records.
 
-## Definitions ##
+## Definitions
 - a (file system) operation is a modification of a file system state preserving file system consistency (i.e., when applied to a file system in a consistent state it produces a consistent state). There is a limited repertoire of operation types: mkdir, link, create, write, truncate, etc. M0 core maintains serializability of operation execution;
 - an update (of an operation) is a sub-modification of a file system state that modifies the state on a single node only. For example, a typical write operation against a RAID-6 striped file includes updates that modify data blocks on a server A and updates which modify parity blocks on a server B;
 - an operation or update undo is a reversal of state modification, restoring the original state. An operation can be undone only when the parts of the state it modifies are compatible with the operation having been executed. Similarly, an operation or update redo is modifying state in the "forward" direction, possibly after undo;
@@ -43,7 +43,7 @@ Roughly speaking, a FOL is a partially ordered collection of FOL records, each c
 <strong>Note</strong>: It would be nice to refine the terminology to distinguish between operation description (i.e., intent to carry it out) and its actual execution. This would make a description of dependencies and recovery less obscure, at the expense of some additional complexity.
 </p>
 
-## Requirements ##
+## Requirements
 
 - `[R.FOL.EVERY-NODE]`: every node where M0 core is deployed maintains FOL;
 - `[R.FOL.LOCAL-TXN]`: a node FOL is used to implement local transactional containers
@@ -68,23 +68,23 @@ Roughly speaking, a FOL is a partially ordered collection of FOL records, each c
 - `[R.FOL.ADDB]`: FOL is integrated with ADDB. ADDB records matching a given FOL record can be found efficiently;
 - `[R.FOL.FILE]`: FOL records pertaining to a given file (-set) can be found efficiently.
 
-## Design Highlights ##
+## Design Highlights
 A FOL record is identified by its LSN. LSN is defined and selected as to be able to encode various partial orders imposed on FOL records by the requirements.
 
-## Functional Specification ##
+## Functional Specification
 The FOL manager exports two interfaces:
 - main interface used by the request handler. Through this interface FOL records can be added to the FOL and the FOL can be forced (i.e., made persistent up to a certain record);
 - auxiliary interfaces, used for FOL pruning and querying.
 
-## Logical Specification ##
+## Logical Specification
 
-### Overview ###
+### Overview
 FOL is stored in a transactional container [1] populated with records indexed [2] by LSN. An LSN is used to refer to a point in FOL from other meta-data tables (epochs table, object index, sessions table, etc.). To make such references more flexible, a FOL, in addition to genuine records corresponding to updates, might contain pseudo-records marking points of interest in the FOL to which other file system tables might want to refer (for example, an epoch boundary, a snapshot origin, a new server secret key, etc.). By abuse of terminology, such pseudo-records will be called FOL records too. Similarly, as part of the redo-recovery implementation, DTM might populate a node FOL with records describing updates to be performed on other nodes.
 
 [1][R.BACK-END.TRANSACTIONAL] ST
 [2][R.BACK-END.INDEXING] ST
 
-### Record Structure ###
+### Record Structure
 A FOL record, added via the main FOL interface, contains the following:
 - an operation opcode, identifying the type of file system operation;
 - LSN;
@@ -100,11 +100,11 @@ A FOL record, added via the main FOL interface, contains the following:
 - distributed transaction management data, including an epoch this update and operation, are parts of;
 - liveness state: a number of outstanding references to this record.
 
-### Liveness and Pruning ###
+### Liveness and Pruning
 A node FOL must be prunable if only to function correctly on a node without persistent storage. At the same time, a variety of sub-systems both from M0 core and outside of it might want to refer to FOL records. To make pruning possible and flexible, each FOL record is augmented with a reference counter, counting all outstanding references to the record. A record can be pruned if its reference count drops to 0 together with reference counters of all earlier (in lsn sense) unpruned records in the FOL.
 
 
-### Conformance ###
+### Conformance
 - `[R.FOL.EVERY-NODE]`: on nodes with persistent storage, M0 core runs in the user space and the FOL is stored in a database table. On a node without persistent storage, or M0 core runs in the kernel space, the FOL is stored in the memory-only index. Data-base and memory-only index provide the same external interface, making FOL code portable;
 - `[R.FOL.LOCAL-TXN]`: request handler inserts a record into FOL table in the context of the same transaction where the update is executed. This guarantees WAL property of FOL;
 - `[R.FOL]`: vacuous;
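The "Liveness and Pruning" rule quoted in the hunk above makes pruning a prefix operation: a record is prunable only when its own reference count and those of all earlier (in lsn order) unpruned records are zero. A minimal sketch of that rule in Python (the function name and dict-based representation are assumptions for illustration, not Motr's implementation):

```python
def prunable_prefix(records):
    """Given {lsn: refcount} for the unpruned FOL records, return the lsn-s
    that may be pruned: the longest prefix, in lsn order, whose reference
    counts are all zero. Sketch only; Motr stores this in an indexed table."""
    prefix = []
    for lsn in sorted(records):
        if records[lsn] != 0:
            break  # an earlier record with live references blocks all later ones
        prefix.append(lsn)
    return prefix
```

Note that a later record with refcount 0 (lsn 4 in the test case below) stays unpruned while an earlier live record still exists, exactly as the rule requires.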
@@ -129,36 +129,36 @@ A node FOL must be prunable if only to function correctly on a node without pers
 - `[R.FOL.FILE]`: an object index table, enumerating all files and file sets for the node contains references to the latest FOL record for the file (or file-set). By following the previous operation LSN references the history of modifications of a given file can be recovered.
 
 
-### Dependencies ###
+### Dependencies
 - back-end:
 - `[R.BACK-END.TRANSACTIONAL] ST`: back-end supports local transactions so that FOL could be populated atomically with other tables.
 - `[R.BACK-END.INDEXING] ST`: back-end supports containers with records indexed by a key.
 
-### Security Model ###
+### Security Model
 FOL manager by itself does not deal with security issues. It trusts its callers (request handler, DTM, etc.) to carry out necessary authentication and authorization checks before manipulating FOL records. The FOL stores some security information as part of its records.
 
-### Refinement ###
+### Refinement
 The FOL is organized as a single indexed table containing records with LSN as a primary key. The structure of an individual record as outlined above. The detailed main FOL interface is straightforward. FOL navigation and querying in the auxiliary interface are based on a FOL cursor.
 
-## State ##
+## State
 FOL introduces no extra state.
 
 ## Use Cases
-### Scenarios ###
+### Scenarios
 
 FOL QAS list is included here by reference.
 
-### Failures ###
+### Failures
 Failure of the underlying storage container in which FOL is stored is treated as any storage failure. All other FOL-related failures are handled by DTM.
 
-## Analysis ##
+## Analysis
 
-### Other ###
+### Other
 An alternative design is to store FOL in a special data structure, instead of a standard indexed container. For example, FOL can be stored in an append-only flat file with starting offset of a record serving as its lsn. The perceived advantage of this solution is avoiding overhead of full-fledged indexing (b-tree). Indeed, general-purpose indexing is not needed, because records with lsn less than the maximal one used in the past are never inserted into the FOL (aren't they?).
 
 Yet another possible design is to use db4 extensible logging to store FOL records directly in a db4 transactional log. The advantage of this is that forcing FOL up to a specific record becomes possible (and easy to implement), and the overhead of indexing is again avoided. On the other hand, it is not clear how to deal with pruning.
 
-### Rationale ###
+### Rationale
 The simplest solution first.
 ## References
 [0] FOL QAS
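The "Other" section in the hunk above mentions an alternative design: an append-only flat file in which a record's starting offset doubles as its lsn, making b-tree indexing unnecessary because lsn-s only ever grow. A hedged sketch of that idea in Python, using an in-memory buffer as a hypothetical stand-in for the flat file (this is not Motr code; the class and framing format are invented for illustration):

```python
import struct

class FlatFileFol:
    """Sketch of the append-only alternative: offset-as-lsn, length-prefixed
    records. An in-memory bytearray stands in for the flat file."""

    HDR = struct.Struct("<I")  # 4-byte little-endian payload length

    def __init__(self):
        self.buf = bytearray()

    def append(self, payload: bytes) -> int:
        lsn = len(self.buf)  # the record's starting offset serves as its lsn
        self.buf += self.HDR.pack(len(payload)) + payload
        return lsn

    def read(self, lsn: int) -> bytes:
        # Random access by lsn needs no index: seek to the offset, read the
        # length header, then the payload.
        (size,) = self.HDR.unpack_from(self.buf, lsn)
        start = lsn + self.HDR.size
        return bytes(self.buf[start:start + size])
```

Since appends only extend the file, returned lsn-s are monotonically increasing, which matches the document's observation that records with lsn below the maximum are never inserted.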
