Hosting on a static website, no database? You might still be able to achieve something close to what you want with server-side includes, if they have been enabled.

Cutting and pasting 1.6 million lines of data? I've never tried that before but something tells me it's not going to work :-o

One issue lies in the way text is stored within a PDF. Strings of text are typically broken up into arbitrary chunks, and not necessarily stored in the correct reading order. Trying to determine which chunks belong to which column, paragraph or sentence can be a challenge, and occasionally borders on the impossible.

Unless the PDF has been previously tagged of course. Tagging the contents of a PDF can offer a way to extract data in the correct reading order.

If resorting to the original data that generated the PDF is not an option, try extracting the data with a third-party component. Search the web for "VB.NET PDF component" and you'll find there are several on the market. Different components are likely to produce different results because text extraction is not a trivial task, so it's worth trying a few out to discover which one works best for you.

Which of their SDKs are you using? Note that SecuSearch SDK Pro for Windows features 1:N matching.

rproffitt commented: +1 matching. +8

A way to match fingerprints is already provided with the SDK, apparently...

"SecuGen SDKs make it quick and easy to integrate SecuGen fingerprint scanning, template generation (minutiae extraction), and template matching functions (both one-to-one and one-to-many) into almost any type of software application.", Source:

Does the documentation not include some example code?

savedlema commented: the SDK sample code do not talk about 1:M identification. +2

You won't catch me saying this often but may I suggest you Google it?

My reasoning is this: any SEO expert with a greater understanding of ranking factors than another is more likely to outrank them in search results. Q.E.D. So the best SEO tools are among those found at the top of Google's search results. Have you tried the first page?

To find files use the DirectoryInfo.GetFiles method. It returns an array of FileInfo objects. Use the Random class to generate a number that can be used as an index into the FileInfo array. Be careful to specify an index that is within bounds.

Selecting pictures at random will mean that occasionally you'll see one appear multiple times in succession, which might not be quite what you're expecting. If that's the case you'll need to have a think about defining your algorithm to prevent this from happening - a trivial problem which I'll leave for you to figure out. Have fun.
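To put those pieces together, here's a minimal sketch. I'm assuming a Windows Forms app with a PictureBox named PictureBox1, and an image folder of C:\Images — both are just examples, so substitute your own.

```vbnet
' Imports System.IO and System.Drawing are assumed at the top of the file.
Dim rng As New Random()
Dim files As FileInfo() = New DirectoryInfo("C:\Images").GetFiles("*.jpg")

If files.Length > 0 Then
    ' Random.Next(n) returns a value from 0 to n-1, so the index
    ' is always within the bounds of the array.
    Dim picked As FileInfo = files(rng.Next(files.Length))
    PictureBox1.Image = Image.FromFile(picked.FullName)
End If
```

You could call this from a Timer's Tick event to rotate the picture periodically.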


The issue could be a faulty cable, monitor, or video adapter. See if you can deduce the cause by replacing each of these one at a time. Turn off the monitor and computer between tests to avoid damaging the equipment. Look for worn or corroded connectors and broken soldered joints.

If you need further help, start a new forum thread and you'll get a faster response. This one is very old.

The ratio of dofollow to nofollow links is not a useful indicator of site quality, as far as I know.

More important is the rate of increase, and it seems likely to me that dofollow and nofollow links accumulate at different rates. Links acquired through social media or guest blogging are typically nofollow links. They're easy to come by, carry little SEO value, and can probably be accumulated safely at a faster pace. Dofollow links, on the other hand, are generally harder to earn, require more effort, and so accumulate more slowly.

If your link profile suddenly saw an increase in dofollow links without a corresponding increase in nofollow links, I think that would look very suspicious. To play it safe, let your naturally gained dofollow links dictate the pace at which you use social media to gain nofollow links.

No problem. Would you like to share your opinion with us?

Sonipat, your post appears to have come from another forum. See this one, circa 2007.

Please refrain from copying the work of others and make yourself aware of Daniweb's rules.

Scraping content typically violates copyright laws and damages the good reputation of Daniweb. Additionally, questions like the one you've asked will waste the time of anyone willing to respond. Please stop.

It's my understanding that buying and selling links is acceptable to Google, providing they don't pass PageRank. However if the intention is to influence search results then you'll be violating Google's guidelines. If (when) you get caught, don't be surprised to find your site missing from search results.

To prevent links from passing PageRank they need to be marked as 'nofollow', e.g. <a href="" rel="nofollow">Click here</a>. Alternatively this can be done at page-level with the 'robots' meta tag.
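For example (the href and page are placeholders):

```html
<!-- Link level: this one link will not pass PageRank -->
<a href="http://example.com/" rel="nofollow">Click here</a>

<!-- Page level: apply nofollow to every link on the page -->
<meta name="robots" content="nofollow">
```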

Google uses 'nofollow' to help identify paid links, as do Bing and Yahoo. There are plenty of legitimate reasons for buying and selling links, such as advertising, or in exchange for goods or services. (BTW, if Microsoft would like to gift me an XBox I'd be delighted to blog about how wonderful it is).

While backlinks marked with nofollow have little or no SEO value, let's not forget they still have the potential to increase traffic. And if placed on highly relevant sites, you might expect to see an improvement in bounce rate - another metric that provides a strong signal - but target the wrong audience and your bounce rate will suffer horribly. So it's important to exercise a level of discretion over where links appear.

Don't buy links in bulk. If a seller offers thousands of dofollow links on high-PR sites, run a mile.

Google's Webmaster Guidelines is usually a good starting point.

So, Google Fetch returned an HTTP 404 error?

You can rule out issues with robots.txt. The file tells bots which resources they should not request. A well-behaved bot will never make requests for blocked resources, so you simply would not receive an HTTP response (or error) for them at all.

The robots meta tag can also be ruled out. The tag is embedded in an HTML document, so it can only be read if the page is successfully retrieved, which would mean an HTTP 2xx response (document found), not a 404.

It's probably not a firewall issue because other pages on the site can be accessed, or a permissions issue as that would result in an HTTP 403 Forbidden error.

Sitemaps tell bots where to find resources on a host. If the sitemap contains errors, such as bad URLs, it will cause the web server to return an HTTP 404 Not Found when a bot attempts to download the resource.

I'd take a closer look at the URLs in your sitemap. Watch out for any unusual characters that might cause bots to truncate URLs. For example I have known GoogleBot to read URLs like[1]/ as , unless the square brackets were encoded as %5B and %5D.
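To illustrate, a sitemap entry with the brackets percent-encoded might look like this (the URL itself is purely hypothetical):

```xml
<url>
  <!-- square brackets encoded as %5B and %5D -->
  <loc>http://www.example.com/page%5B1%5D/</loc>
</url>
```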

Another possibility is a misconfigured redirect, but then why would that affect search engines and not all visitors to your site? If you do find a redirect is responsible it's probably safer to remove it. Search engines expect to see exactly the same content as shown to human visitors of ...

Search for SMS gateway.

You'll discover there are many services offering SMS messaging, which you can connect to through their APIs. I bet most of these will be documented for PHP.

Alternatively you can connect directly to a network with either dedicated hardware or just a mobile phone.

Wow, angled tabs. Amazing.

Call me a Luddite, but when you have a dozen tabs open on a netbook, square tabs are the way to go! Rounded and angled tabs just waste valuable screen space.

Smells like spam. The article appears to have been scraped from another site, with the addition of a link on the words 'brand new icons'. Please feel welcome to correct me if I'm wrong.

Is this the original? It seems to be copyright material. Did you obtain permission?

Please do give authors proper attribution if the work is not your own.

Please be aware of Daniweb's rules, specifically the posting of editorials already published on another site.

On a positive note, thank you for bringing the article to our attention - it's mostly relevant, and welcome to the forum :-)

It should be noted that messing around with the registry can potentially cause your system to stop working properly. Before editing the registry it's normally advisable to create a backup.

But, as I learned in school for software: "Never mess with the registry unless you are totally sure what you are doing."

Yes, such advice is appropriate if you don't want people to learn. How can you become totally sure you know what you're doing without gaining practical experience?

Why teachers don't want students experimenting freely on school computers is understandable. But we're talking about your computer here, aren't we? I think you can safely handle this one. Changing the registered owner is about as trivial as registry editing gets.

mattyd commented: Not good -2

Unfortunately I don't yet have access to Windows 10, but here are a couple of things you might want to try that worked for earlier versions...

To change the 'registered owner', open the registry editor (regedit) and navigate to
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion
Look for the 'RegisteredOwner' value.
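If you'd rather not click through regedit, you could try reg.exe from an elevated command prompt. Something like the following should work, where "Your Name" is just a placeholder:

```shell
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v RegisteredOwner
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v RegisteredOwner /t REG_SZ /d "Your Name" /f
```

The first command shows the current value, the second overwrites it.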

To change an account name, instead of going through 'User Accounts' in the control panel, try opening a command prompt and typing control userpasswords2. With a bit of luck a more advanced user account dialog will appear.

Hi squashspark, and welcome to DaniWeb.

To get at the data inside your Coverage node you could try selecting it with an XPath query, then selecting the nodes you want in the context of the current node. For example, in the code below line 4 selects a PAGE node, then line 7 uses the XPath query "./PAGENUMBER" to select a child node. Does this help?

    Dim xmlDoc As New XmlDocument
    xmlDoc.Load("jobs.xml") ' replace with the path to your XML file
    Dim nodeList As XmlNodeList
    nodeList = xmlDoc.SelectNodes("/JOBS01_2016/JOB_01_09_2016_20_50_13/COVERAGE/PAGE")

    For Each pageNode As XmlNode In nodeList
        Dim pageNumber As XmlNode = pageNode.SelectSingleNode("./PAGENUMBER")
    Next

BTW, it's generally a good thing to keep code examples as short as possible. You'll find it helps to narrow down issues, and you're also more likely to get a quicker response.

Rendering in the browser occurs after an image has downloaded. Usually it's the downloading of the image that takes time. If you wish to reduce the amount of image data transferred on your site here are a few things you can try:

  • Resample images to reduce their pixel dimensions (for screen use it's the pixel count that matters, not the DPI setting).
  • Try different file formats. Generally JPEG for photos, GIF for line drawings, PNG for both.
  • Try different levels of compression.
  • Reduce color palettes where the format allows (GIF, PNG).
  • Try graphic file optimizers, e.g. JPEG Optimizer, TinyPNG, and others.
  • Use a content delivery network (CDN).
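If you want to experiment from the command line, ImageMagick can handle the resampling and recompression. A quick sketch, assuming it's installed and with example filenames:

```shell
# Resample to 800 pixels wide (height scaled to match) and recompress at JPEG quality 75.
convert photo.jpg -resize 800x -quality 75 photo-small.jpg

# Reduce a flat-colour graphic to an 8-bit palette PNG.
convert diagram.png -colors 256 PNG8:diagram-small.png
```

Compare the output files against the originals and tweak the numbers until you're happy with the size/quality trade-off.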

For general performance advice see Yahoo's "Best Practices for Speeding Up Your Web Site". I haven't checked whether the article has been updated for HTTP/2 yet, which makes some of its recommendations less relevant once it's enabled on your server, but it's still worth a read.

diafol commented: Great +15

I bought some natural links and...

Bought links are not 'natural'. Make sure they're tagged as 'nofollow' if you don't want to risk getting penalized by search engines.

Unsure if this'll work but I'm attempting to block the upgrade on an old netbook by restricting permissions on the hidden folder that Microsoft will attempt to create for the download, which I believe is C:\$Windows.~BT

Any thoughts on a better way to permanently block this upgrade?

rubberman commented: Install Linux. Only updates are the ones YOU want! +13

IIS 6 appears to use Negotiate/NTLM by default. If you want to disable Negotiate then I would try setting the NTAuthenticationProviders property in the metabase to "NTLM", instead of "Negotiate/NTLM" or undefined. I couldn't see this as an option within IIS 6 manager so I guess you'll need to use an admin script.
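If memory serves, the classic adsutil.vbs admin script can set that property. Something along these lines, where the site identifier "1" is an assumption (check the numeric ID of your site in IIS manager):

```shell
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/root/NTAuthenticationProviders "NTLM"
```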

The following article details how to enable Negotiate/NTLM on IIS 6, but I hope it provides enough clues for disabling it too:

Further reading: Integrated Windows Authentication (IIS 6.0)

Does your computer meet the system requirements for Windows 10?

The quickest and easiest way to generate a PDF is to make use of an existing PDF library. Search the web for "PDF library for PHP" and you'll discover there are a number of libraries for PHP out there, both paid and free. Pick one that suits your needs.

Policies can sometimes restrict what you're allowed to install on a server. If that's the case, using an online PDF service might be an option, assuming you're not working with sensitive data. Their APIs are normally language agnostic - whether you're using PHP or some other language shouldn't be an issue.

Creating PDFs from scratch using just PHP is the hard way and generally best avoided, but if you're interested in the format the PDF reference can be found here on Adobe's site.

Looking at the results from rproffitt's query, the tenth one down seems promising: Windows Authentication with Chrome and IIS.

In summary, check that 'NTLM' appears before 'Negotiate' in the list of Windows authentication providers for your site. Open IIS manager and navigate to your site > IIS > Authentication > Windows Authentication, and then select 'providers...' from the Action panel.

I believe you need to add the native DLL to the project in Solution Explorer. However selecting 'Include in Project' alone is not always enough. Depending on the filetype you may need to manually set the 'Copy to Output Directory' property. I don't know why Visual Studio behaves this way, perhaps it's a bug?

Setting the property within Solution Explorer can be a laborious task. If you ever need to set the property on multiple files you'll probably find it quicker to edit the project file directly.

Which version of VS are you using?

Requests for are being 301 redirected to, hence the mixed content issue.

The resource is also available via HTTPS so you could possibly link directly to, as long as you're confident it won't change.

Try changing the event listener on line 7 to something like...

openCtrl.addEventListener( 'click', function(ev) {
    if (classie.has(el, 'services--active')) {
        classie.remove(el, 'services--active');
    } else {
        classie.add(el, 'services--active');
    }
} );

You should find your open services button will now toggle. The event listener for the close services button, lines 12-15, can be removed.

If you're relying on the HTTP referrer header to prevent hot linking there are a couple of issues you might need to think about. The header can be spoofed. And it's not uncommon for the referrer to be blank, such as when someone bookmarks a resource.

I haven't attempted to block hot linking myself, but what I would try doing is setting a domain cookie so that at least you know they've visited your site. Then when they request the download, their browser will include the cookie in the request header, which you can test against.
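As a rough sketch of that idea — the cookie name, lifetime and file path are all made up, and bear in mind cookies can be spoofed too, so treat this as a deterrent rather than real security:

```php
<?php
// On a normal page of the site: mark the visitor as having browsed here.
setcookie('visited_site', '1', time() + 3600, '/');

// download.php: only serve the file if the cookie came back with the request.
if (empty($_COOKIE['visited_site'])) {
    header('HTTP/1.1 403 Forbidden');
    exit('Hot linking is not permitted.');
}
header('Content-Type: application/octet-stream');
readfile('/path/to/protected-file.zip');
```

Note the cookie has to be set on an earlier page view; it won't be present on the same request that sets it.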

If you need to protect resources more thoroughly, consider implementing a way for users to authenticate themselves, such as with a username and password, and/or restricting access by IP address.