Showing 1 changed file with 14 additions and 43 deletions
1 | 1 | FOR VERSION 0.2 |
2 | +=============== | 
2 | 3 | |
3 | 4 | - use memNewRef instead of memcpy wherever possible. This should |
4 | 5 | improve the performance again. On the other hand, don't stop |
... | ... | @@ -52,50 +53,8 @@ FOR VERSION 0.2 |
52 | 53 | |
53 | 54 | - IPV6 support (optional) |
54 | 55 | |
55 | -- optimize session handling. Right now session handling slows down | 
56 | - everything incredibly...especially if there are clients | 
57 | - that don't use cookies... | 
58 | - The reason is that for every request all sessions are checked to see | 
59 | - whether they can be closed. As the number of sessions grows, this slows down everything. | 
60 | - TODO: | |
61 | - * add an additional storage layer, an ordered list indexed by lifetime; | 
62 | - each element of this list is an array of session ids that share this | 
63 | - lifetime. This would have been a classical queue, if there weren't the | 
64 | - need to update entries in between. | 
65 | - So three cases: | |
66 | - * delete: the element is at the beginning of the list. | 
67 | - * insert: always put to the end of the list. | |
68 | - * update: the lifetime can be retrieved in O(log n) from the | 
69 | - session hash. If the list were an array instead, | 
70 | - I could get to the element in O(log n) via successive | 
71 | - approximation. But this would involve a memmove afterwards. | 
72 | - Additionally it will make memory management more difficult. | 
73 | - We need a large enough array that may grow over time. | 
74 | - With the optimized memory management this leaves us with | 
75 | - large memory segments that might never be used... | 
76 | - so this would involve splitting again. | 
77 | - So a better alternative might be again a tree... | |
78 | - Workflow to handle sessions updates would be: | |
79 | - * lookup session in session hash O(log n) | |
80 | - * remove session from session time tree O(log n) | |
81 | - * if the session has timed out | 
82 | - * remove session from session hash O(log n) | |
83 | - * else | |
84 | - * update timeout O(1) | |
85 | - * insert session again in session timeout tree O(log n) | |
86 | - So I end up with a complexity of 3*O(log n), not too bad. | 
87 | - But each lifetime might hold several session ids, so | 
88 | - with an update this again has to be looked up. | |
89 | - Anyway it will be faster than walking through a maybe very | |
90 | - large monster list of sessions. | |
91 | - Also one should keep in mind that the timeout list has | 
92 | - at most one entry per second of the timeout window. | 
93 | - * store the lowest lifetime (ideally it can be retrieved from the key of | 
94 | - the first element of the previous list). | 
95 | - * try to delete sessions only if the lowest lifetime is expired. | |
96 | - * store the sessions in a hash indexed by id again. | |
97 | - | |
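The deleted workflow above (hash lookup, remove from the time structure, reinsert on update) can be sketched roughly as follows. This is a minimal Python sketch, not the project's actual design: a min-heap with lazy invalidation stands in for the ordered lifetime tree, and all names (`SessionStore`, `touch`, `expire`) are hypothetical.

```python
import heapq
import time

class SessionStore:
    """Sessions hashed by id, plus a min-heap ordered by expiry time.

    Stale heap entries are invalidated lazily instead of being removed
    in place, which keeps every operation at O(log n) or better.
    """

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.sessions = {}   # id -> expiry timestamp (the "session hash")
        self.heap = []       # (expiry, id) pairs, possibly stale

    def touch(self, sid, now=None):
        """Insert a new session or refresh an existing one."""
        now = time.monotonic() if now is None else now
        expiry = now + self.timeout
        self.sessions[sid] = expiry               # hash update
        heapq.heappush(self.heap, (expiry, sid))  # O(log n) insert

    def expire(self, now=None):
        """Drop sessions only while the lowest expiry has passed."""
        now = time.monotonic() if now is None else now
        expired = []
        while self.heap and self.heap[0][0] <= now:
            expiry, sid = heapq.heappop(self.heap)  # O(log n) pop
            if self.sessions.get(sid) == expiry:    # skip stale entries
                del self.sessions[sid]
                expired.append(sid)
        return expired
```

With lazy invalidation, `touch` never searches the heap: a refreshed session simply leaves a stale entry behind that `expire` skips, so updates stay cheap without the memmove and splitting problems noted above, and sessions are only examined once the lowest lifetime has actually expired.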
98 | 56 | FOR VERSION 1.0 |
57 | +=============== | |
99 | 58 | |
100 | 59 | - support for multiple worker processes. (optional) |
101 | 60 | - I need a distributed storage system and a way to distribute |
... | ... | @@ -112,3 +71,15 @@ FOR VERSION 1.0 |
112 | 71 | provide each gdbm function just with a 'n' prefix. |
113 | 72 | All this might result in either synchronization or performance |
114 | 73 | problems but at least I will give it a try. |
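The `n`-prefixed wrapper idea above might look roughly like this. This is a hedged Python sketch using the standard `dbm` module as a stand-in for gdbm; the process-shared lock and the names `nstore`/`nfetch` are assumptions for illustration, not the author's actual API.

```python
import dbm
import multiprocessing
import os
import tempfile

# One database file shared by all worker processes; a process-shared
# lock serializes access so concurrent workers don't corrupt it.
_lock = multiprocessing.Lock()
_path = os.path.join(tempfile.mkdtemp(), "storage.db")

def nstore(key, value):
    """'n'-prefixed store: like a gdbm store, but safe across workers."""
    with _lock:
        with dbm.open(_path, "c") as db:
            db[key] = value

def nfetch(key, default=None):
    """'n'-prefixed fetch: like a gdbm fetch, but safe across workers."""
    with _lock:
        with dbm.open(_path, "c") as db:
            return db[key] if key in db else default
```

The lock turns every operation into a critical section, which is exactly the synchronization-versus-performance trade-off the note anticipates; a real implementation would likely keep the database handle open per worker instead of reopening it on every call.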
74 | + | |
75 | +SOONER OR LATER | |
76 | +=============== | |
77 | + | |
78 | +- Cookie disclosure... (this is used everywhere now) | 
79 | + ====== | |
80 | + Cookie Disclosure | |
81 | + This website uses cookies. By continuing to use the website, | |
82 | + you consent to the use of cookies. | |
83 | + [link]Learn More (on the UPS page this links to a PDF document) | 
84 | + [link]Do not show this message again (disables showing of this message) | 
85 | + ====== | 