Server Research
Summarize the C10K article enough to understand which approaches it describes would benefit from a "rip out the middle man" improvement strategy.
Given that we know we want to use OpenBSD and intend to leverage a "kill the middle layer" strategy, what approach will work best?
- Serve many clients with each thread, and use nonblocking I/O and level-triggered readiness notification

  I don't fully understand this yet, but it sounds complicated and strictly worse than asynchronous I/O. The author highlights the difference between nonblocking and asynchronous I/O: nonblocking I/O doesn't block while waiting for network calls to finish, but it does block while waiting for disk calls to finish. Asynchronous I/O handles both asynchronously (surprise!).

- Serve many clients with each thread, and use nonblocking I/O and readiness change notification

  The author describes one benefit of this model: ease of use with OpenSSL. I may be missing something, but I'm tentatively avoiding servers that use this model.

- Serve many clients with each server thread, and use asynchronous I/O

  In the earlier sections, the author highlights how asynchronous I/O doesn't block on disk reads and writes. However, in this section he mentions, "AIO doesn't provide a way to open files without blocking for disk I/O". I'm confused! That said, I get the sense that async I/O, assuming a reasonable implementation, presents a good balance of few warts, good performance, and reasonable semantics. Let's hope I'm not proven wrong if I choose this.

- Serve one client with each server thread

  The author described this as barely feasible when he wrote the article. While Moore's Law hardware progress has probably increased its feasibility since then, I'm inclined to stay away from it given his warnings about having to reduce thread stack sizes. On the other hand,
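To make the first model in the list (nonblocking I/O plus level-triggered readiness notification) concrete, here's a minimal sketch in Python. I'm using Python purely for brevity; `selectors.DefaultSelector` wraps the platform's readiness mechanism (kqueue on the BSDs, epoll on Linux), and both are level-triggered by default. The socketpair stands in for a real client connection.

```python
# Sketch of the "one thread, many clients" model: nonblocking sockets
# plus level-triggered readiness notification via selectors.
import selectors
import socket

sel = selectors.DefaultSelector()

# A connected, nonblocking socket pair stands in for a real client.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
client_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

# Nothing to read yet: a zero-timeout poll reports no ready descriptors.
assert sel.select(timeout=0) == []

client_side.sendall(b"hello")

# Level-triggered: the descriptor is reported ready on EVERY poll until
# its buffer is drained, so a program that registered late (or missed a
# poll) still sees the readiness.
assert len(sel.select(timeout=0)) == 1
assert len(sel.select(timeout=0)) == 1  # still ready; we haven't read

data = server_side.recv(4096)  # nonblocking read drains the buffer
assert data == b"hello"
assert sel.select(timeout=0) == []  # drained, no longer reported ready

sel.close()
server_side.close()
client_side.close()
```

A real server would register many client sockets in one selector and loop over `sel.select()`, dispatching reads and writes per ready descriptor; the point here is only how level triggering behaves.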
-
Level vs. Edge Triggered Kernel Signals
Level triggers occur continuously while a file descriptor is in a certain state. With level triggering, the socket-holding program continues to receive the signal until the file descriptor (and socket) exits the ready state. This handles an edge case where a program registers interest in a socket only after the socket has become ready: since level triggering notifies based on state, the calling program still receives a notification from the new socket, even though the ready event technically already occurred.

Edge triggers occur at the instant a file descriptor becomes ready. Edge triggering punishes calling programs that miss events: edge-triggered events appear to calling programs only once, so a program can't recapture a missed event or the I/O it signaled.

In summary, I don't understand why one would use edge triggering over level triggering...
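The distinction is easy to observe directly. The sketch below uses Linux's epoll for illustration, only because Python exposes both modes through a single flag (`EPOLLET`); on OpenBSD, kqueue draws the same level/edge distinction with the `EV_CLEAR` flag.

```python
# Demonstrates level- vs edge-triggered readiness notification.
# Linux-only: uses select.epoll (EPOLLET selects edge triggering).
import os
import select

def polls_reporting_ready(trigger_flags):
    """Register the read end of a pipe, write to the write end, then
    poll twice WITHOUT reading. Returns how many of the two polls
    reported the descriptor as ready."""
    r, w = os.pipe()
    ep = select.epoll()
    ep.register(r, select.EPOLLIN | trigger_flags)
    os.write(w, b"x")
    hits = sum(1 for _ in range(2) if ep.poll(timeout=0))
    ep.close()
    os.close(r)
    os.close(w)
    return hits

# Level-triggered (the default): readiness is reported on every poll
# while unread data remains in the pipe.
assert polls_reporting_ready(0) == 2

# Edge-triggered (EPOLLET): readiness is reported once, at the instant
# the descriptor becomes ready. Miss it, and it won't be re-reported
# until new data arrives.
assert polls_reporting_ready(select.EPOLLET) == 1
```

This is also why edge triggering can still make sense despite the footgun: the kernel delivers one notification per state change instead of re-reporting on every poll, which trims redundant wakeups for servers that always drain the descriptor when notified.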
- Does OpenBSD have a reasonable async I/O implementation?
- How does Go implement high performance servers?
- Can goroutines (which are essentially green threads, right?) make the one client per thread model feasible?
- Notes on High Performance Server Design: Linked in the original C10K problem article. Describes four performance bottlenecks for message-handling applications (servers).
- People have built servers into the kernel before. May be worth looking into those implementations to understand whether we're re-treading through Linux's dust.
- Need to use the web archive to access this paper about system-wide profiling of non-blocking I/O.
- Interesting page on Fast Unix Servers. Seems to be in a similar vein to the main C10K page.
- Read the original paper that describes kqueue and the distinction between edge and level triggering.