When you add a page in a CMS (like WordPress' admin), what does it actually do? Does it add a predefined table, or something else?
Thanks!

kvprajapati commented: Very good question. +6
Atli commented: good question +3

All 12 Replies

Hey.

In a properly designed CMS, changing anything just INSERTs, UPDATEs or DELETEs rows in the database. If it is actually creating tables, or otherwise changing the database structure, there is usually something wrong with the database design.

Exceptions to this are websites with insane numbers of visitors, where the tables get so big they become unmanageable and start affecting performance. (I'm talking here about websites the size of Facebook or the WordPress blog. If you are wondering whether this applies to your site, it almost definitely doesn't ;-P)

But yea, not really knowing much about the WordPress system itself, I would suspect that adding a page just INSERTs a row into a "page" table somewhere, and the contents of that page get inserted with it, or into a table related to it.
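Just to illustrate the idea, a bare-bones version of that could look something like this (the table and column names here are made up for the example, not WordPress's actual schema):

CREATE TABLE page (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  slug    VARCHAR(200) NOT NULL,
  title   VARCHAR(200) NOT NULL,
  body    TEXT NOT NULL,
  created DATETIME NOT NULL
);

-- "Adding a page" is then just a single INSERT; no new tables are created.
INSERT INTO page (slug, title, body, created)
VALUES ('about-us', 'About us', '<p>Hello world!</p>', NOW());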

commented: Very helpful! +6

Well, actually, I have been told in the MySQL forum that if you design a table so that it has indexing, you shouldn't ever need multiple tables for the one job. When designed properly, table indexing can make the difference between minutes and milliseconds for a MySQL query, although it will take up additional space. In case you don't know what table indexing is: it is one of the options available when creating each table column in phpMyAdmin.

Well, actually, I have been told in the MySQL forum that if you design a table so that it has indexing, you shouldn't ever need multiple tables for the one job.

Yes, for typical websites that is true.

However, I am talking about websites like Facebook and WordPress, which get more hits per day than all of our websites combined are likely to get in our lifetimes, and where the index "tables" are so large they would need index tables of their own to get any decent query times.

But those are extreme cases. They only apply to people who measure their hourly database growth in terabytes :-)

In case you don't know what table indexing is: it is one of the options available when creating each table column in phpMyAdmin.

It's not really fair to give phpMyAdmin all the credit, tho ;-)
12.1.13. CREATE INDEX Syntax
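For the record, that phpMyAdmin option just runs a statement like one of these behind the scenes (table and column names are only placeholders):

CREATE INDEX idx_page_slug ON page (slug);
-- or, equivalently:
ALTER TABLE page ADD INDEX idx_page_slug (slug);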

But those are extreme cases. They only apply to people who measure their hourly database growth in terabytes :-)

Are you at all familiar with the MediaWiki CMS? That is the same CMS Wikipedia uses, and it does not spread into millions of tables. Instead, it has hundreds of computers that just process the database; these are known as cache servers. So try downloading MediaWiki and see how Wikipedia does it: you will find there is only one table for storing all the articles, and joins are used a lot to save space. Since Wikipedia gets by with the one article table, which you can confirm by checking the MediaWiki CMS, I would recommend doing the same.

Thanks, guys, for the input. It seems to me that there are many ways of doing the same thing :)

What do you guys suggest, supposing you were to do it?

Indeed, there are :)

I suggest you go with the method I mentioned in my first post: creating a 'page' table, in which you add a row for each new page.
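Rendering a page is then just a single lookup, something along these lines, assuming the hypothetical 'page' table from earlier:

SELECT title, body
FROM page
WHERE slug = 'about-us'
LIMIT 1;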

Are you at all familiar with the MediaWiki CMS? That is the same CMS Wikipedia uses, and it does not spread into millions of tables.

I don't doubt that. I am in no way implying that tables need to be split on large websites. Just that it can, in extreme cases, improve performance.

Instead, it has hundreds of computers that just process the database; these are known as cache servers.

Cache servers, as I understand them, don't directly factor into the database load. They cache previously requested pages - the output that the PHP/MySQL code generated - and, instead of having PHP re-generate that output for the next person who requests the same page, they serve up the copy that was previously generated.

They can be very useful for websites like Wikipedia and YouTube, whose content is generally pretty static, but aren't really of much use to websites like Facebook, whose content changes on pretty much every request.

All of these huge websites more than likely use cluster servers tho (or something equivalent), which distribute the load of a single database across a network of computers. That sounds more like what you are talking about.

They can be very useful for websites like Wikipedia and YouTube, whose content is generally pretty static, but aren't really of much use to websites like Facebook, whose content changes on pretty much every request.

Just a note on that. Wikipedia receives 1,000 new articles every day, and many more edited articles, so it is possible to compare Wikipedia to ever-changing websites such as Facebook. I guess it is a matter of personal preference, but to me it doesn't make much of a difference, because the data is still stored in the same source file. That is, each database has a separate file, so having a separate table will not make too great a difference if the server needs to search through a list of a million tables. So, as I said, this part of database design comes down to personal preference; both ways have advantages and disadvantages that roughly equal out.

Just something I should have said earlier...
Instead of placing all the data in one table, it is often best to put the big column and a few identifiers in one table, put the rest of the columns in another table, and use join statements, rather than having all the columns in the one table. So basically there would be two tables: one with the page data, and the other with the little bits of info. This can speed things up when you only need the little bits of info. That's just how I saw MediaWiki do it.
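A rough sketch of that kind of split (the names are made up for illustration, not MediaWiki's actual schema):

-- The small, frequently needed columns in one table...
CREATE TABLE article (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title   VARCHAR(255) NOT NULL,
  updated DATETIME NOT NULL
);

-- ...and the big text column in a table of its own.
CREATE TABLE article_text (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body       MEDIUMTEXT NOT NULL
);

-- Listing articles never touches the big column; showing one joins the two:
SELECT a.title, t.body
FROM article AS a
JOIN article_text AS t ON t.article_id = a.id
WHERE a.title = 'Some article';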
Well that is my opinion but others have different views...

Agreed, this is an excellent point, especially if you have a lot of variable-length columns. They slow queries down more than most people realize.

It's best to identify the columns you don't need on a regular basis and move them out. Say, in a forum system, it would be a good idea to pull the actual post text out of the main 'post' table and make the post title a fixed-length column.
That way the only columns in the 'post' table would be fixed-length columns, which are much faster to read under heavy load. On a typical forum, the most resource-draining requests are those that ask for lists of topics, for which the actual text of the posts is not needed.
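Roughly what I mean, as a sketch (hypothetical names again):

-- Only fixed-length columns in the main table (INT, CHAR, DATETIME),
-- so MyISAM can use its fast fixed-length row format.
CREATE TABLE post (
  id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  topic_id INT UNSIGNED NOT NULL,
  user_id  INT UNSIGNED NOT NULL,
  title    CHAR(80) NOT NULL,
  created  DATETIME NOT NULL,
  INDEX (topic_id)
) ENGINE=MyISAM;

-- The variable-length post text lives in its own table.
CREATE TABLE post_text (
  post_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body    MEDIUMTEXT NOT NULL
) ENGINE=MyISAM;

-- Topic listings, the heaviest requests, never touch the big text column:
SELECT id, title, created FROM post WHERE topic_id = 42 ORDER BY created;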

Just a note on that. Wikipedia receives 1,000 new articles every day, and many more edited articles, so it is possible to compare Wikipedia to ever-changing websites such as Facebook.

Facebook receives thousands of new entries every minute, and every request is tailored to fit the requester. Whereas Wikipedia, even tho articles get edited all the time, serves duplicates of the same exact content for most of its requests, which the cache servers intercept to take the load off the databases.

And even tho new articles get added to a wiki, if the cache servers are working correctly, the overhead on the database should be minimal. The page is added, and the first time it is requested the cache servers cache the output, from where it is relayed to the next couple of hundred requesters.
The actual database is only queried once while the content remains the same, or until a specific cache time limit has passed, which on a site the size of Wikipedia could mean hundreds or thousands of requests.

On a scale of static to 100% dynamic - the latter being Facebook - Wikipedia is much closer to the static end, really. The same goes for most blog/news sites.

Anyways. What were we talking about again? I think we are getting a bit carried away xD
I only mentioned the split table thing to point out the exception to the rule :)

I guess it is a matter of personal preference, but to me it doesn't make much of a difference, because the data is still stored in the same source file. That is, each database has a separate file, so having a separate table will not make too great a difference if the server needs to search through a list of a million tables.

This is the default behavior, yes, but you can change it. See: 13.6.2.1. Using Per-Table Tablespaces
And this is only relevant to InnoDB tables. MyISAM tables are each stored in three separate files, so this in no way applies to them.
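If anyone wants to check their own server, something like this shows whether it is on (it has to be enabled before the tables are created to have any effect):

-- Is InnoDB using one tablespace file per table?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- If it reports OFF, the option is enabled with the innodb_file_per_table
-- setting under [mysqld] in my.cnf; only tables created after that get
-- their own .ibd file.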

I see how that can be handy, but another thing I have noticed with sites like Google and YouTube is that they seem to redirect users to their country-code domain, suggesting there is a separate database for each domain. Perhaps that is the more efficient way of dividing the content. Example: http://au.youtube.com
However, from reading what Atli has to say, using correct MySQL methods with many tables should also do the job.

I've always just assumed those region-specific URIs directed you to a cache server closer to you. The cache server concept would remain the same, tho. They wouldn't actually be attached to a separate database, but would just cache the results from the main database so that requests from a specific region wouldn't have to travel to the main database or cache servers directly.

Say, if we both lived in Australia (which I don't xD) and I requested a specific YouTube video, my request would be routed to a local cache server (au.youtube.com) which - if the video wasn't already cached - would download it from the main servers, cache it, and send it to me. Then, later on, if you requested the same video, your request would reach the same cache server, which could send you the cached copy without bothering the main servers.

They become kind of temporary "mirror" servers, creating static copies of the main dynamic site to serve to local clients.

Thanks guys, I have got a lot from you :)
Feel free to add any information you wish on the subject, but the thread is solved.

Glad I could help :)
