TODO
FOR VERSION 0.2
- Add a first, simple session persistence layer.
  This could store information like the username and the last login
  date and present these to the user.
- Let a user create tasks and retrieve them again after login.
- Implement roles and a role-based access model.
- Create management for roles.
- Give a user the ability to open tasks to other users / roles.
- Right now I will use long-polling AJAX calls when feedback to the client
  is needed. In the long term this should be changed to WebSockets (ws), but
  right now the ws specification is not final anyway. :) (optional)
- IPv6 support (optional)
- Optimize session handling. Right now session handling slows down
  everything incredibly... especially if there are clients
  that don't use cookies.
  The reason is that on every request all sessions are checked to see
  whether they can be closed. As the number of sessions grows, this slows
  everything down.
TODO:
  * Add an additional storage layer: an ordered list indexed by lifetime.
    Each element of this list is an array of session ids that share this
    lifetime. This would have been a classical queue, if there weren't the
    need to update elements in between.
    So three cases:
    * delete: the element is at the beginning of the list.
    * insert: always append to the end of the list.
    * update: the lifetime can be fetched in O(log n) from the
      session hash. If the list is not a list but an array,
      I could get to the element in O(log n) via binary
      search. But this would involve a memmove afterwards.
      Additionally it would make memory management more difficult:
      we need an array large enough that it can grow over time.
      With the optimized memory management this leaves us with
      large memory segments that might never be used...
      so this would involve splitting again.
    So a better alternative might again be a tree.
    The workflow to handle session updates would be:
    * look up the session in the session hash: O(log n)
    * remove the session from the session time tree: O(log n)
    * if the session has timed out:
      * remove the session from the session hash: O(log n)
    * else:
      * update the timeout: O(1)
      * insert the session into the session timeout tree again: O(log n)
    So I end up with a complexity of 3*O(log n), not too bad.
    But each lifetime might hold several session ids, so
    on an update these again have to be searched.
    Anyway, it will be faster than walking through a possibly very
    large monster list of sessions.
    Also, one should keep in mind that the timeout list holds
    at most as many entries as the timeout has seconds.
  * Store the lowest lifetime (ideally it could be retrieved from the key of
    the first element of the previous list).
  * Try to delete sessions only if the lowest lifetime has expired.
  * Store the sessions in a hash indexed by id again.
FOR VERSION 1.0
- support for multiple worker processes. (optional)