Hi,

I have created a commenting system that can be used for commenting on posted articles. The trouble is that after doing all the development I am stuck on pagination. The comments are stored in the filesystem in the following way.

Article
{Folder 1}
Reply 1
Reply to 1 as 1.n
reply to 1.n as 1.n.n
......
{Folder 2}
Reply 2
{Folder 3}
Reply 3

In this scenario, to boost performance I thought of adding pagination, and I have developed the pagination system as well. The problem is this: if Folder 1 has 16 comments in total and Folder 2 has 12 comments, with a per-page limit of, say, 10, how would the pagination work out that page 2 has to list 6 comments from Folder 1 and 4 from Folder 2?
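
For concreteness, this is the arithmetic I am expecting for page 2 (just the calculation spelled out, not my actual code):

$perPage = 10;
$page    = 2;
$offset  = ($page - 1) * $perPage;                      // page 2 starts at global comment #10
$folder1 = 16;                                          // comments in Folder 1
$fromF1  = max(0, min($folder1 - $offset, $perPage));   // 16 - 10 = 6 comments left in Folder 1
$fromF2  = $perPage - $fromF1;                          // 10 - 6 = 4 comments taken from Folder 2
echo "Page $page: $fromF1 from Folder 1, $fromF2 from Folder 2\n";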

Thanks for your consideration. I would appreciate some hints and pointers I could use; it would be great if anyone here could share some code or logic.

Harish

All 4 Replies

I think you'll have to cache the number of files in each folder. Otherwise you'll have to recurse through the folders and count the files every time the pagination script is called.
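
Something like this is what I mean by recursing and counting on every request (a rough sketch; it assumes each comment is one file under the article's folder, which may not match your layout exactly):

// Counts every comment file under an article directory on each call --
// this is the repeated work that caching the counts would avoid.
function countCommentsRecursively($dir) {
    $count = 0;
    foreach (scandir($dir) as $entry) {
        if ($entry === '.' || $entry === '..') continue;
        $path = $dir . DIRECTORY_SEPARATOR . $entry;
        if (is_dir($path)) {
            $count += countCommentsRecursively($path);   // descend into reply folders
        } else {
            $count++;                                     // each file is one comment
        }
    }
    return $count;
}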

How are you doing it at the moment?

Thanks, ether, for the post and the question.

Actually, I am caching it. We have a Squid architecture and, on top of that, I am using memcache for caching. So I think that part is fine.

The concern about one directory-listing system call and two file-read system calls for each comment displayed in a list of comments is well addressed by caching.

The concern is: when, say, '2' is clicked in the pagination bar, how do I work out that I have to start from file X.X.N in folder A and go up to X.Y in folder B?

This is where I am stuck and worried, because after developing the whole API this is where I am left feeling lost.

Yesterday I thought in terms of B-trees or something of that sort, so that I could go through everything once and then reduce the read system calls to only the paginated pages that get clicked. Something like that... it seems like I will go mad now... :(

Some pointers and help would be appreciated.


If you cache the number of files in each folder (using your example):

Article
{Folder 1}
Reply 1
Reply to 1 as 1.n
reply to 1.n as 1.n.n
......
{Folder 2}
Reply 2
{Folder 3}
Reply 3

Say you cache an array.

$article1_counts = Array('folder1'=>3, 'folder2'=>1, 'folder3'=>3);

Say your limit is 5 (to make the example work).

Then page 1 would be all of folder1 and folder2 plus 1 file from folder3.
Page 2 would be the 2 remaining files from folder3 and the rest from article 2.

That would do some of the work; you'd then have to read the folders from disk to work out exactly which files to start and end with, but it would narrow you down to reading at most 2 overlapping folders.
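
A rough sketch of what I mean (the function name pageSlices is made up, and it assumes the cached counts are already in the order the folders should be listed):

function pageSlices(array $folderCounts, $page, $limit) {
    // Global offset of the first comment on the requested page.
    $offset = ($page - 1) * $limit;
    $slices = array();              // folder => array(start offset within folder, how many to take)
    $remaining = $limit;

    foreach ($folderCounts as $folder => $count) {
        if ($remaining <= 0) break;
        if ($offset >= $count) {    // this whole folder comes before the requested page
            $offset -= $count;
            continue;
        }
        $take = min($count - $offset, $remaining);
        $slices[$folder] = array($offset, $take);
        $remaining -= $take;
        $offset = 0;                // later folders start from their first file
    }
    return $slices;
}

// Using the cached counts from above, page 2 with a limit of 5:
$article1_counts = array('folder1' => 3, 'folder2' => 1, 'folder3' => 3);
print_r(pageSlices($article1_counts, 2, 5));
// => folder3 starts at its 2nd file (offset 1) and contributes its last 2 files.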

In memcached you could create an array that indexes each article, like:

$article_counts = array($article1_counts, $article2_counts, ...);

But add and remove each index depending on how often it's hit, to keep it small. E.g. if there haven't been any hits on article 2 for an hour, it definitely shouldn't have lived in memcached for an hour, since that's more than the cache lifetime on Squid, I bet...

I've never worked with memcached or Squid though so I'm just guessing here..
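
Maybe something like this, going by PHP's Memcached extension (the key name and the one-hour TTL are made up, so adjust to whatever you actually use):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Per-article counts cached under their own key with a short TTL,
// so idle articles simply expire instead of sitting in memory.
$key = 'article_counts_1';
$counts = $mc->get($key);
if ($counts === false) {
    // Cache miss: rebuild from the filesystem (or however you derive the counts).
    $counts = array('folder1' => 3, 'folder2' => 1, 'folder3' => 3);
    $mc->set($key, $counts, 3600);   // expire after an hour
}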

I'm choosing just the counts here because they seem the easiest to update. Say a new comment is made: you just have to update the count for the folder the comment file will be in...
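
Roughly like this (again, hypothetical names; it just bumps the cached count for one folder when a comment is added):

function addCommentToCount(Memcached $mc, $articleKey, $folder) {
    $counts = $mc->get($articleKey);
    if ($counts === false) {
        return;                      // nothing cached yet; it will be rebuilt on the next read
    }
    // One new comment file landed in $folder, so bump that folder's count.
    $counts[$folder] = (isset($counts[$folder]) ? $counts[$folder] : 0) + 1;
    $mc->set($articleKey, $counts, 3600);
}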

Am I even close to what you're getting at?

Hi ether,

Your reply gives me a nice pointer to think about and work through. It feels like a way out. Thanks. I will give it a try after some thought and post an update.

Thanks once again for your patience.

Harish

