At the recent Google I/O conference, it was announced that Google's crawlers will begin following URLs and links generated by JavaScript, Flash, and Flex apps. The issue this seemingly addresses is that some links are counted and others discounted when determining PageRank, depending on how they are implemented. Since this has never been a problem in my own efforts, I'm curious whether anyone else has encountered the issue and what your thoughts are on the viability of Google's new solution.
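To illustrate the kind of link at stake, here is a hypothetical sketch (the function name and URL pattern are made up for the example): a URL assembled at runtime by JavaScript never appears in the static HTML source, so a crawler that doesn't execute scripts would miss it entirely.

```javascript
// Hypothetical example: a link whose URL only exists after script execution.
// A crawler that does not run JavaScript sees no href in the page source.
function buildProductLink(id) {
  // URL assembled at runtime, not present in the static HTML
  return "/products/view?id=" + encodeURIComponent(id);
}

// In a browser, the link would then be injected into the page, e.g.:
// var a = document.createElement("a");
// a.href = buildProductLink(42);
// a.textContent = "Product 42";
// document.body.appendChild(a);

console.log(buildProductLink(42)); // "/products/view?id=42"
```

A script-executing crawler, as announced, could discover "/products/view?id=42" even though that string never appears verbatim in the HTML.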

Well, it is great to see Google making progress in discovering and scanning content hidden inside complex web forms and files; it has already started crawling the text inside Flash files. Even so, using plain text words and phrases is still going to be the recommended approach to page building in the future.

There is a possible downside to this: I was reading that some SEOs use JavaScript as a way to hide paid links and still stay in Google's good graces. Now that Google is crawling JavaScript, those paid links will be seen, and a lot of websites will be in violation of Google's rules about paid links without even knowing it. This is all pretty new to me, so I am still trying to learn, but it seems like Google was trying to help and may actually cause more harm than good.
