Dani 1,917

RuMedia, I think what the OP is saying is that he’s aware of that, but he doesn’t mind if they go stale.

What I do is use an in-memory cache such as Memcached (non-persistent) or Redis (persistent), where I store the results of the query for up to X minutes. They are both key-value caches. I query the cache first, and only on a cache miss do I query the database again.
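The cache-aside flow I'm describing looks roughly like this. A real setup would use the Memcached or Redis client; here a plain array stands in for the cache so the flow is easy to follow, and the names are illustrative:

```php
<?php
// Minimal sketch of the cache-aside pattern: check the cache first,
// fall back to the database on a miss, and store the result with a TTL.
class QueryCache
{
    private array $store = [];   // key => [expiresAt, value]

    public function __construct(private int $ttl = 300) {}

    public function get(string $sql, callable $runQuery): mixed
    {
        $key = md5($sql);
        if (isset($this->store[$key]) && $this->store[$key][0] > time()) {
            return $this->store[$key][1];               // cache hit
        }
        $value = $runQuery($sql);                        // cache miss: query the DB
        $this->store[$key] = [time() + $this->ttl, $value];
        return $value;
    }
}
```

With the real Memcached extension you'd call $cache->get($key) / $cache->set($key, $value, $ttl) instead of the array, but the hit/miss logic is the same.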

Dani 1,917

Hi there! Nice to meet you.

Dani 1,917

Thanks for letting me know. I think, if memory serves, that the recommended items picks from items you have access to (and, as a mod, you have access to deleted items). However, seeing deleted items is not a good experience for anyone, so I’ll have that fixed.

Dani 1,917

At this time, we only email you about updates to specific articles you've specified you want to watch.

However, there are some other tools at your disposal:

You can fetch an RSS feed of all new topics tagged with java, and receive notifications by plugging it into an RSS reader. Or you can use something like IFTTT to send you alerts, text messages, tweets, Facebook posts, or even flicker the lights in your smart home each time the RSS feed is updated.

Additionally, if you want to do something creative and roll your own RSS parser, our RSS feeds use PubSubHubbub, which means you can be notified of updates via push instead of having to poll.

You can use our JSON feed to fetch all topics tagged with java, sorted by when they were last posted in. You will, unfortunately, have to write your own parser / reader / notification system (or perhaps one already exists somewhere on the web, probably in a GitHub repository, that would only need slight modifications to work).
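A roll-your-own checker could be as simple as the sketch below. The response shape here is an assumption (an array of topics under a "data" key, each with an "id" and "title") — check our API documentation for the real endpoint and schema:

```php
<?php
// Sketch of a "new topic" checker built on a JSON feed. Given the raw
// JSON and the highest topic ID we've already seen, return the titles
// of any newer topics.
function newTopicsSince(string $json, int $lastSeenId): array
{
    $payload = json_decode($json, true);
    $fresh = [];
    foreach ($payload['data'] ?? [] as $topic) {
        if (($topic['id'] ?? 0) > $lastSeenId) {
            $fresh[] = $topic['title'];   // collect titles of unseen topics
        }
    }
    return $fresh;
}

// A cron job might then fetch the feed and email you anything new, e.g.:
// $json = file_get_contents('https://www.daniweb.com/...?output=json');
// mail('you@example.com', 'New topics', implode("\n", newTopicsSince($json, $lastSeenId)));
```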

Our complete API documentation is available at https://www.daniweb.com/welcome/api

John_165 commented: Thanks,but this sound complicated to me >< +0
Dani 1,917

Are you sure you aren't accidentally running the cronjob from multiple users on the server? Two users might have the same crontab file accidentally. Just a thought?

Dani 1,917

Yeah, traffic has been poor recently, as has activity. There was a strong uptick that lasted for a few months after our relaunch back in October, then it plateaued, and now the past few weeks have seen a decline.

I'm not giving up on user matching quite yet. I'm actually doubling down on it right now. We'll see what happens.

Dani 1,917
DaniUserJS commented: Perfect. Thank you. +0
Dani 1,917

Sorry, I don’t know how to do that off the top of my head. If the page contents have changed, you can use a sitemap file. But I don’t think Googlebot wants your sitemap to contain dead pages. I think you just need to wait for them to naturally come around and recrawl you.

Dani 1,917

That robots.txt file looks correct to me. But will not deindex from what I've read.

I think you've misunderstood what I was saying. A robots.txt file, alone, will not deindex. It was imperative that userunfo was able to get all the pages to return a 410 Gone HTTP status. The advantage of robots.txt is that Googlebot won't be able to crawl the spammy URLs, judge them spammy, and negatively affect the quality score of the domain as a whole. Therefore, it helps preserve the integrity of the domain (which can take months or years to recover) while figuring out how to 410.
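For the 410 part, here's a minimal sketch, assuming a PHP front controller handles the removed URLs. The URL pattern is purely illustrative; in practice you'd match whatever the spammy URLs looked like (or do this at the web server level instead):

```php
<?php
// Decide which HTTP status a request should get: removed spammy URLs
// return "410 Gone" so Googlebot drops them for good, everything else 200.
function statusForRequest(string $uri): int
{
    return str_contains($uri, '?foo=') ? 410 : 200;
}

// In the front controller you would then do something like:
// http_response_code(statusForRequest($_SERVER['REQUEST_URI']));
```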

Dani 1,917

And unfortunately that brings us back to how the original post was snipped to remove a crucial part of the question. It had some foul language as well as links to spammy pages, so I'll try to use example URLs instead:

https://www.example.com/?foo=bar.html + many more

I want somehow to block indexing all links from:

Check out https://geoffkenyon.com/how-to-use-wildcards-robots-txt/ and you can see there that you can do something like:

User-agent: *
Disallow: /*?foo=

(At least, I think so. Please correct me if I'm wrong.)

rproffitt commented: That robots.txt file looks correct to me. But will not deindex from what I've read. +15
Dani 1,917

"using a robots.txt won’t remove pages from Google’s index." was his point and again why I wrote no.

rproffitt, your quote is taken out of context. robots.txt will not remove valid pages from Google's index. If you have a webpage that you want visitors to be able to access, but you simply don't want it to appear in Google's index, then adding it to your robots.txt file won't remove it from the index.

However, when dealing with the issue the OP is suffering from, this is not the solution to the problem. He needs to make sure all the spammy pages no longer exist on his domain, and then use robots.txt immediately in order to do damage control so as to not get hit by a Google algorithm update that doesn't like domains with lots of spammy pages.

Dani 1,917

rproffitt, I still believe that robots.txt is the best solution here.

It seems as if malware has created many spammy pages which have subsequently been indexed by Google. The article you link to suggests the best way to deindex pages that you still want Google to be able to crawl and visitors to access. In such a case, I would agree with John Mueller, who is Google's ambassador to the SEO community.

However, I would not recommend that strategy in this case. Basically, the strategy involves manually adding a <meta name="robots" content="noindex"> tag to every page, and then updating the sitemap to tell Google to recrawl the pages soon so it notices the change.

The problem with doing that, in this case, is firstly, I would hope that the spammy pages have already been permanently removed. Secondly, if for some reason they haven't been, manually modifying every spammy page doesn't seem like a reasonable solution ... if it were that easy, one would just as easily be able to remove the pages altogether or change their content to be non-spammy.

Instead, a robots.txt file is the best way to quickly tell Google not to crawl that section of the site. This is imperative when the pages are spammy, and you don't want Googlebot to ding you for having spammy pages on your domain. If the pages no longer exist, they'll eventually fall out of the index over time, so don't worry about that.

Dani 1,917

I think he’s trying to show the format of the links to see if there is regex or something that can be used to mass deindex them. I believe wildcard characters are included. I can’t provide more advice without seeing the format of the original links that were snipped unfortunately.

Dani 1,917

Sorry for not updating the database. I investigated the issue, and it looks like I already had a little comment in my code saying that it was inefficient to do the extra database lookup to undo reputation on-the-fly, and instead we are just letting the cron counters recalculate this number daily.

John_165 commented: Noted :) +0
Dani 1,917

Those are notifications about incoming instant messages, hence new message notifications, as opposed to forum post notifications. Post notifications are always instant.

Dani 1,917

Darn, yes! I'll get to this tomorrow. Promise.

Dani 1,917

I think it's realistic for someone of any age to attempt a new career in programming. My criticism comes mainly from my experience with people who have come out of coding bootcamps.

Six months is not enough time to grasp anything but the very surface of programming, and that's assuming you're putting in 12+ hour days building up your portfolio. Check out this article I just stumbled across: https://techbeacon.com/app-dev-testing/bootcamps-wont-make-you-coder-heres-what-will

Dani 1,917

I'm curious as to why you think your age stands in the way as a data analyst, but it won't as a programmer?

I hate to say it, but I think that ageism is very prevalent in web development and mobile apps, which is where the focus of coding bootcamps lies. Web apps today put a very heavy emphasis on catering to millennials, and so businesses want to hire people who "get" their target demographic.

Dani 1,917

If you have previous experience with databases, I don't see why it would make sense to shift completely away from that and start at ground zero as a complete newbie. Every website today is only as strong as its database, and data analytics is a huge industry, especially with machine learning powered by data mining, etc. According to Glassdoor, data analyst salaries are about $100K and database engineers average $150K/year around San Jose, CA. (Granted, the cost of housing is tremendous here as well.)

Dani 1,917

1) Is it realistic to even consider this as a career for the next 10 years for someone my age? Assuming it is,

Programming is a huge industry. Everything from writing hardware drivers to designing games for gaming consoles to building a website to on-page search engine optimization to front-end UI. You can get a full-time job in Silicon Valley or you can be an independent consultant working out of your home office or anything in between. They all require different sets of skills, have huge discrepancies in salaries, and some might be more suited to you than others, depending on your current skillset and interests.

2) Are there some coding schools that you can comfortably recommend that are accepted by the IT industry?

A lot of people love coding schools, but I am one of those people who is not quite sold. I think they're great at teaching you how to build a snazzy website for a small business or a simple mobile app in a very short amount of time. That's what it's all about ... making you hireable as quickly and efficiently as possible. However, what they don't tell you is that without the background of a computer science degree, you will lack all of the mathematical and analytical experience required to focus on big data sets (e.g. working with billions of records), fine-tuning performance, etc. It's my personal experience that coding schools teach you what's necessary to land your first small-time consulting gig, and ...

rproffitt commented: That's a quality reply. Great example. +0
Dani 1,917

So it looks like you are storing passwords in plain text in the database. NEVER. EVER. DO. THIS. It is incredibly insecure. Please look into PHP's password_hash() function.
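For the record, using the built-in functions is only a couple of lines. A minimal sketch:

```php
<?php
// Store a hash of the password, never the plain text. PASSWORD_DEFAULT
// currently selects bcrypt and will track PHP's recommended algorithm.
$hash = password_hash('s3cret', PASSWORD_DEFAULT);

// At login, compare the submitted password against the stored hash:
var_dump(password_verify('s3cret', $hash));   // bool(true)
var_dump(password_verify('wrong', $hash));    // bool(false)
```

The $hash string is what goes in the database; password_verify() handles the salt and algorithm details for you.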

Dani 1,917

That’s the thing. We need to see your register and login code in order to give you the code for password change.

Otherwise, without it, the best we can offer is the pseudo code I provided in my first reply above.

Dani 1,917

So I’m confused. You’re asking for help writing the PHP code that can be used to do a lost password reset, but you don’t have code for a signup or login?

Dani 1,917

OK, so you're connecting to MySQL via PDO. I'm personally not familiar with PDO. Is there a reason you're not using something like MySQLi?

It seems as if you don't have very much experience at all with web development. Did you write this PHP yourself? Where is your PHP code to log in? Your database schema? Is this part of a larger PHP application?

Hasan_10 commented: . +0
Dani 1,917

You need to provide some more information in order for us to be able to help you. I see here you are giving us an HTML form that asks a user for an old password, and to enter a new password twice. I understand what you want to do is update the password in the database, when the form is submitted. However, you are giving no insight to what your PHP application code currently looks like, what database you're using, how passwords are stored in the database, etc.

Basically the steps that would be involved would be:

  • Retrieve the old password from the form
  • Check whether the new password and repeat new password fields match
  • If they don't match, show an error that the passwords are not the same
  • If they do, compare the old password against the hashed password stored for the user in the database
  • If it doesn't match, show an error that the old password is incorrect
  • If it does, hash the new password and overwrite the hashed password field in the database

Now, how that algorithm actually gets translated into PHP code has a lot to do with what PHP framework (if any) you're using, what database you're using, what library you're using to connect to the database, the database schema, etc.
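Just to make the algorithm concrete, here is a framework-agnostic sketch of those steps. It assumes passwords are stored with password_hash(), and $storedHash stands in for whatever your database lookup returns for the logged-in user:

```php
<?php
// Sketch of the password-change steps above. Returns [success, payload]:
// on failure the payload is an error message, on success it's the new
// hash to write back to the database.
function changePassword(string $old, string $new, string $repeat, string $storedHash): array
{
    if ($new !== $repeat) {
        return [false, 'The new passwords do not match.'];
    }
    if (!password_verify($old, $storedHash)) {
        return [false, 'The old password is incorrect.'];
    }
    // Success: hand back a fresh hash to overwrite the database field.
    return [true, password_hash($new, PASSWORD_DEFAULT)];
}
```

How you fetch $storedHash and persist the new hash depends entirely on your framework, database, and schema, which is why we keep asking to see the rest of the code.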

Dani 1,917

This should be rather simple to do with jQuery, unless I'm misunderstanding what you're asking. First, retrieve how many pixels from the top of the webpage the element you want to scroll to is. Here, we select the first element that has data-attr=foo and then calculate its position in the DOM.

var position = $('[data-attr=foo]').first().offset().top;

Now, we can scroll to that position.

$('html, body').animate({
    scrollTop: position
}, 500);
Dani 1,917

Bostjan attempted to upload these files but it didn't work. For the sake of testing site functionality, here they are.

Dani 1,917

What is the specific MySQL statement you're trying to execute? What if you just try SELECT * FROM category on its own?

Dani 1,917

I've never used Woocommerce before so I probably can't assist much, but if you attach a screenshot of what the admin page looks like, maybe I could help figure something out?

Dani 1,917

Instead of echo, try using var_dump($statement). This will spit out the mysqli_stmt object.

You got nothing for $error because the catch block sets $this->error, which is a different variable from the local $error.
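Here's a stripped-down illustration of that mix-up, with hypothetical names standing in for your class:

```php
<?php
// The catch block assigns to the object property $this->error, so a
// local variable named $error never gets touched.
class Db
{
    public string $error = '';

    public function run(): void
    {
        try {
            throw new Exception('bad query');
        } catch (Exception $e) {
            $this->error = $e->getMessage();   // sets the property...
        }
    }
}

$db = new Db();
$error = '';          // ...while this local variable stays empty
$db->run();
echo $error;          // prints nothing
echo $db->error;      // prints "bad query"
```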