=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.
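
As an illustration, a minimal sketch of such an accept loop might look
like this (the C<listen_cb> name and the C<0.01> second collect interval
are purely illustrative, the listening socket is assumed to be
non-blocking, and all error handling is omitted):

  static void
  listen_cb (struct ev_loop *loop, ev_io *w, int revents)
  {
    for (;;)
      {
        int fd = accept (w->fd, 0, 0);

        if (fd < 0)
          break; /* EAGAIN - no more pending connections this iteration */

        /* ... make fd non-blocking and start an ev_io watcher on it ... */
      }
  }

  /* optionally trade a little latency for more readiness events per sweep */
  ev_set_io_collect_interval (loop, 0.01);
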
=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

While select and poll scale like O(total_fds), where total_fds is the
total number of fds (or the highest fd), epoll scales either O(1) or
O(active_fds). The epoll design has a number
of shortcomings, such as silently dropping events in some hard-to-detect
cases and requiring a syscall per fd change, no fork support and bad
support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible, i.e.
keep at least one watcher active per fd at all times.
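
As an illustration, a read callback for such a long-lived watcher might
look like the following sketch (the C<read_cb> name and the buffer size
are arbitrary; the fd is assumed to be non-blocking, so a spurious
notification simply results in C<EAGAIN>):

  static void
  read_cb (struct ev_loop *loop, ev_io *w, int revents)
  {
    char buf [4096];
    ssize_t len = read (w->fd, buf, sizeof buf);

    if (len > 0)
      {
        /* ... process the data ... */
      }
    else if (len == 0 || (errno != EAGAIN && errno != EINTR))
      {
        /* EOF or a real error: only now stop the watcher and close */
        ev_io_stop (loop, w);
        close (w->fd);
      }
    /* otherwise the wakeup was spurious - keep the watcher active */
  }
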
While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it was
broken on almost all of the systems that ship it (see below).

While stopping, setting and starting an I/O watcher does not cause an
extra syscall as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident, support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.
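
A sketch of such an embedding, using the C<ev_embed> watcher (the
C<loop_hi>/C<loop_lo> names are arbitrary, and error handling is omitted):

  struct ev_loop *loop_hi = ev_default_loop (0);
  struct ev_loop *loop_lo = 0;
  ev_embed embed;

  /* create a kqueue loop only if it is both supported and embeddable */
  if (ev_supported_backends () & ev_embeddable_backends () & EVBACKEND_KQUEUE)
    loop_lo = ev_loop_new (EVBACKEND_KQUEUE);

  if (loop_lo)
    {
      /* let loop_hi sweep loop_lo whenever it becomes ready */
      ev_embed_init (&embed, 0, loop_lo);
      ev_embed_start (loop_hi, &embed);
    }

  /* register socket watchers with loop_lo if it exists, everything
     else with loop_hi */
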
=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.
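
For example, with only a handful of fds you could explicitly ask for the
poll backend and fall back to autodetection if it is unavailable (a
sketch; whether this is actually a win depends entirely on your workload):

  struct ev_loop *loop = ev_loop_new (EVBACKEND_POLL);

  if (!loop)
    loop = ev_loop_new (0); /* let libev choose a backend itself */
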
=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.
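
For instance, to force-try every backend except kqueue:

  struct ev_loop *loop = ev_loop_new (EVBACKEND_ALL & ~EVBACKEND_KQUEUE);
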
It is definitely not recommended to use this flag.

=back

If one or more of these are ored into the flags value, then only these
backends will be tried.
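
For example, to restrict libev to the epoll and poll backends:

  struct ev_loop *loop = ev_default_loop (EVBACKEND_EPOLL | EVBACKEND_POLL);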