Hello there,

It's a fairly complicated problem and I will try to be as brief as I can. I hope you'll be able to understand my description.

I've coded a script that works perfectly on my localhost. When I uploaded the files to a server, some things just did not work.

For the script I used two MySQL tables. The tables have the exact same structure but different names. They are 'big' tables: 46 fields each.

The fields of the tables are filled in through forms, section by section (let's say a section is 3-5 fields): one INSERT first, then UPDATEs for the following sections.

It seems that once most of the fields are already filled in, the next fields just won't update. There are no errors. When I try to fill in those fields by hand in phpMyAdmin, using the same SQL command I used in my code, the fields update just fine.

Is there something I am missing? Do you think it is a coding mistake, or an Apache setting I did not think about? It is important to mention that if I repeat the procedure, starting to fill the table from the section that wouldn't update, that section now updates, but soon another section won't.

I know this may not be a very good description. If you don't understand what I am saying, I will try again and include some code.

All 15 Replies

Did you try:

  1. var_dump()-ing your query statement?
  2. var_dump()-ing the result object of your query?
  3. var_dump()-ing all input to your query, to see if your query is actually executed the way you expect it to be?

:)
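
For example, a minimal sketch of those three checks (the variable names and query here are placeholders, not your actual code):

    // 1. dump the query string actually being sent
    $sql = "UPDATE magazine SET title7='$title7' WHERE year='$year' AND month='$month'";
    var_dump($sql);

    // 2. dump the result of the query (false means it failed)
    $result = mysql_query($sql);
    var_dump($result);

    // 3. dump the inputs that were interpolated into the query
    var_dump($title7, $year, $month);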

I googled var_dump and just used it. This is the code:

    elseif ($flag == 1 && $flag1 == 1)   // logical AND (&&), not the bitwise &
    {
        // update magazine_tmp
        $quer_up = mysql_query("UPDATE magazine_tmp SET arth7='$arthro7', author7='$author7', title7='$title7', checked7='ok' WHERE year='$year' AND month='$month'");
        // update magazine
        $quer_up1 = mysql_query("UPDATE magazine SET arth7='$arthro7', author7='$author7', title7='$title7', im7='Yes', checked7='ok' WHERE year='$year' AND month='$month'");

        var_dump($quer_up);   // var_dump() prints on its own; no echo needed
        echo "<br>";
        var_dump($quer_up1);
        echo "<br>";
        mysql_close();
        ///      header('Location: arthro7.php?info='.$year.$month.$user);
        exit();
    }

The var_dump output:
bool(true)
bool(false)

The tables magazine_tmp and magazine have the same structure. By the time this code executes, some fields of each table already hold data, and the two tables' contents are not necessarily the same.
When I copied the SQL statement that won't execute and pasted it into phpMyAdmin, it worked there...

How can I find out why this UPDATE won't work? What should I do?

Zagga:

Hi,

Try adding some error reporting to the query that is failing ($quer_up1). Add an 'or die' clause:

$quer_up1=mysql_query("UPDATE magazine SET arth7='$arthro7', author7='$author7', title7='$title7', im7='Yes', checked7='ok' where year='$year' and month='$month'") or die('Error - Could not perform the query: ' . mysql_error());

Yes, and you could var_dump() your query statements to see if they contain all the required data. E.g.:

var_dump("UPDATE magazine_tmp SET arth7='$arthro7', author7='$author7', title7='$title7', checked7='ok' where year='$year' and month='$month'");

Yes, the error reporting helped. I feel stupid that I did not think of it myself.

The message:

Error - Could not perform the query: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

I am not sure what I will do next, but it's a good start. I am not sure about ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED.
If you have any suggestions, I am listening...

I first changed some fields from VARCHAR to TEXT (I had four VARCHAR(500) fields), but the result was the same.
I also changed the magazine table using:

ALTER TABLE magazine
    ENGINE=InnoDB
    ROW_FORMAT=COMPRESSED 
    KEY_BLOCK_SIZE=8;

Nothing yet.

It is important to note that I don't have access to the server to change settings in .ini or .conf files.
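
If I understand correctly (an assumption on my part), ROW_FORMAT=COMPRESSED and DYNAMIC require the Barracuda file format and innodb_file_per_table to be enabled, and otherwise the ALTER silently falls back to the Compact format, leaving the row limit unchanged. These checks can be run from phpMyAdmin without any .ini access:

SHOW VARIABLES LIKE 'innodb_file_format';    -- should say Barracuda
SHOW VARIABLES LIKE 'innodb_file_per_table'; -- should be ON
SHOW TABLE STATUS LIKE 'magazine';           -- check the Row_format column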

I am waiting for a solution from you. Is there one? What do you propose I do?

Anyone? How can I handle that overloaded row? I googled it and found out about the Barracuda file format. Is it what I am looking for? Has anyone tried it? Can I enable it from code alone, or will I have to edit the server configuration (which I cannot do)? Any alternatives? I am stuck there and waiting for your experience to advise me.

diafol:

Before wading in here: are you sure your tables are set up properly?
They seem big to me, especially as you've got field names like 'author7'... suggesting that you have author1-6 too. If this is the case, then you should consider redesigning your DB; look up 'normalization'.

I'm assuming 'arth' relates to 'article'?
e.g.

authors

author_id [PK, int]
firstname [varchar, 30]
surname [varchar, 30]

[If articles can have multiple authors...]

article_authors

artauth_id [PK, int]
article_id [int]
author_id [int]

articles

article_id [PK, int]
title [varchar, 150]

[else for single authors...]

articles

article_id [PK, int]
author_id
title [varchar, 150]

[THEN]

magazine

magazine_id [PK, int]
magazine_title [varchar, 50]

editions

edition_id [PK, int]
magazine_id [int]
edition_number [varchar, 10]
...

edition_articles

edart_id [PK, int]
edition_id [int]
article_id [int]
rankorder [int]

Anyway, just a thought :)
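
In SQL terms, a rough sketch of the multi-author part might look like this (the names, types and lengths come from the lists above; the AUTO_INCREMENT keys and engine choice are just my assumptions):

CREATE TABLE authors (
    author_id INT AUTO_INCREMENT PRIMARY KEY,
    firstname VARCHAR(30),
    surname   VARCHAR(30)
) ENGINE=InnoDB;

CREATE TABLE articles (
    article_id INT AUTO_INCREMENT PRIMARY KEY,
    title      VARCHAR(150)
) ENGINE=InnoDB;

-- link table: lets one article have several authors
CREATE TABLE article_authors (
    artauth_id INT AUTO_INCREMENT PRIMARY KEY,
    article_id INT,
    author_id  INT
) ENGINE=InnoDB;

-- magazine, editions and edition_articles would follow the same pattern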

Yes, all your assumptions are right. I have a small, simple magazine as part of a school web page.
Through forms, the students fill in 7 articles, with their surrounding data, into the table magazine_tmp.

The teacher (the admin) checks the articles and, if he approves, pushes a button that moves all the data to the table magazine, which is the table the magazine reads its data from.

I designed it that way, built it, and it seemed to work. In my last test I loaded the database with a lot of data and got these errors.

The design you suggest is interesting, but I was wondering if I could use something easier and less time-consuming. Can I keep these tables somehow? Maybe by compressing the data (I don't know if that is possible), or by raising the amount of data a single row can hold? What about that Barracuda file format; is it applicable in my case?

diafol:

You can certainly carry on with your schema, but it's not normalized and is therefore prone to errors and duplication. Duplication will swell your tables to silly proportions.

AFAIK Barracuda is InnoDB-only; other than that I know nothing about it.

With normalized data you could do this, assuming one author per article:

articles

article_id [PK, int]
author_id
title [varchar, 150]
content [blob]

Then that article could be shared amongst many editions, say if you wanted to reprint it. It would also help if you need to search for articles. You now only have to search on one field (or two, if you include the title) instead of 20 (or however many you have, assuming 10 articles per row).
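
For example, finding an article then becomes a single query over one table (a sketch; 'solar' is just a placeholder search term):

SELECT article_id, title
FROM articles
WHERE title LIKE '%solar%'
   OR content LIKE '%solar%';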

Although you may be able to do it your way if you're able to compress the data, it's not recommended.

I am familiar with 'normalization'. With the way my schema works, there is no way to get conflicts or duplications: every article (1, 2, ... 7) has its own form and web page, and only one article of each kind is allowed per month (it's a monthly). If an article already exists for a particular month, the system updates it in place (no extra insertions). I am not sure I explained the schema and the system well, but there is no way to get duplications. It is a very simple magazine and I don't want to make it more complicated (I know that would be better, but it is not my goal right now).

The problem is that for every edition (every month) I store all the data needed in a single row. I did not know there is a limit on the data I can store in one row. When I found out about that problem (with DaniWeb's help) I thought it would be easy to solve with a few lines of code. It seems it cannot be done that way.

Now I am thinking of 'breaking' the magazine table into 7 tables, one for each article:

article1

month (varchar(3)), key
year (varchar(4)), key
title (varchar(150))
author (varchar(100))
image (varchar(3)) // records whether there is an image (yes or no)
checked (varchar(2)) // the teacher-administrator marks whether the article is OK to publish in the edition of that particular month and year
arthro (text) // the content - should I use BLOB???

article2
the same...

I know those tables are not connected, but I think they don't need to be.
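
To make it concrete, something like this is what I have in mind for each table (a sketch: the composite key on year and month is my idea for keeping one row per edition, and TEXT rather than BLOB since the content is character data, not binary):

CREATE TABLE article1 (
    month   VARCHAR(3)  NOT NULL,
    year    VARCHAR(4)  NOT NULL,
    title   VARCHAR(150),
    author  VARCHAR(100),
    image   VARCHAR(3),   -- 'Yes' / 'No': is there an image?
    checked VARCHAR(2),   -- 'ok' once the teacher approves the article
    arthro  TEXT,         -- the article body
    PRIMARY KEY (year, month)
) ENGINE=InnoDB;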

I am going to wait a day or so in case someone (diafol or anyone else) comments on all this, or better, proposes a solution that keeps the schema with one table (storing all the data in one row). If nothing happens I will close this thread.

diafol:

Dourvas - I'm sorry if my suggestions aren't too helpful. Breaking the row down into tables will certainly help with the size issue, but as you note, they are no longer connected. I can't see how to make further recommendations. Good luck though and I hope you get it sorted.

Thank you very much; you helped me a lot. It is obvious that the solution I was searching for is not available.

diafol:

Sorry for bumping this solved thread. Anybody else out there with an idea of how to help dourvas? Maybe an alternative schema that wouldn't cut his table to ribbons and force multiple related tables?
