SEO Has Fallen Off a Cliff

Hi All,

I’ve been testing this for a couple of months now, and it looks like the SEO for what used to be a very well-performing site has fallen off a cliff.

The original site was built in classic ASP and ran perfectly; the SEO was excellent. We had top rankings on most of the keywords on each page submitted, and the complete site was regularly spidered and ranked accordingly.

We ‘upgraded’ the site to run on NodeJS. It still runs perfectly, but our SEO is now almost non-existent, and we’ve lost a lot of traffic as a result.

The main menu at the top of the page and the page footer (hard-coded links) are being indexed correctly. However, any page that has dynamically generated links does not get spidered by the search engines; those pages have to be submitted manually through the Google Search Console.

Here is one page that is affected; every single link on this page used to rank well:

The sub menu on the lower right used to perform very well.

All links on the page above are no longer spidered automatically by Google. To test this out, I put the above URL into a random online bot simulator here:
https://www.xml-sitemaps.com/se-bot-simulator.html

None of the links on the page are visible to this bot simulator, nor to the Google bot.

Has anyone else experienced this? Does anyone have any suggestions on how I can get this page correctly listed without having to manually handball them into Google?

When moving from a very old site to a new one, it is very important to redirect your old URLs to the new ones. With the old links redirecting to the new ones, Google will gradually learn your new links and you won’t lose your good ranking.

If you just switch to completely new links and the old links simply stop working, then you will definitely lose SEO ranking, because your new links aren’t trusted yet.
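
As a rough illustration (not knowing the actual routing), here is a minimal sketch of that kind of permanent redirect in an Express-based NodeJS app. The old /jobs.asp URL and the new /jobs/:id route are assumptions for the example, so adjust them to the site’s real URL scheme:

```js
// Minimal sketch, assuming Express and that old classic ASP URLs such as
// /jobs.asp?id=123 map cleanly onto new routes like /jobs/123.
const express = require('express');
const app = express();

// A 301 (permanent) redirect tells Google the old URL has moved for good,
// so the ranking signals from the old page are passed to the new one.
app.get('/jobs.asp', (req, res) => {
  const id = req.query.id;
  res.redirect(301, id ? `/jobs/${id}` : '/jobs/');
});

// Catch any other leftover .asp URL and send it somewhere sensible.
app.get(/\.asp$/i, (req, res) => {
  res.redirect(301, '/');
});

app.listen(3000);
```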

Google indexes dynamic links just fine these days, so it should be able to follow them and there is no need for an extra sitemap.xml. The bot simulator you linked can’t do that, so it is giving you wrong results.

Maybe @psweb can chime in with more SEO tips as he is the expert :slight_smile:

From the Google docs: Site Moves and Migrations | Google Search Central | Documentation | Google for Developers

In the Google Search Console you can see which pages are indexed and get a lot of information (you have to verify your site first to get access).

Adding to what @George and @patrick have said, there are a number of issues that hinder web bots from performing their tasks.

Use the browser’s developer panel to run a Lighthouse report and to inspect the Network tab.

For the Network tab, I cleared the storage first, meaning that the Service Worker was void of cached data.

In this case, it took nearly 10 seconds to load the content into the DOM. One of the reasons is that your host still uses the HTTP/1 protocol. You should see an enormous increase in speed when using the HTTP/2 (or, even better, HTTP/3) protocol. Talk to your host or move to another host.

For more, see
https://blog.cloudflare.com/http-2-for-web-developers
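
If you stay on NodeJS itself rather than putting a proxy or CDN in front, Node has a built-in http2 module. This is only a minimal sketch under that assumption; in practice many hosts terminate HTTP/2 (or HTTP/3) at a reverse proxy such as nginx or a CDN, so check what your host supports first:

```js
// Minimal sketch of serving over HTTP/2 with Node's built-in http2 module.
// The certificate paths are placeholders for your real TLS files.
const http2 = require('node:http2');
const fs = require('node:fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('privkey.pem'),    // TLS private key
  cert: fs.readFileSync('fullchain.pem')  // TLS certificate chain
});

server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/html; charset=utf-8' });
  stream.end('<h1>Served over HTTP/2</h1>');
});

server.listen(443);
```

You can then check which protocol is actually negotiated with something like `curl -I --http2 https://your-domain` (assuming your curl build has HTTP/2 support).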

Other improvements can be made by following the instructions in Lighthouse.


Hey Tom, all of your meta tags are blank on all of your pages.

Hmm, they’re not blank for me:

Meta tags aren’t blank for me either! @Cheese, what browser are you using?

Not empty for me either. What URL is that on?

Also these “bot simulators” are not to be trusted, none of them. They render nothing like what Google bot sees. The only trusted way to review what the Google bot sees on your page is to use the Google Search Console and the tools available there.


Sorry Tom, I was AFK.

Chrome: Version 120.0.6099.224 (Official Build) (64-bit)
OS: Ubuntu 22.04.3 LTS

@Teodor

This URL: https://www.rigg-access.com/jobs/

The same issue is reported by the Google Search Bot Googlebot/2.1.

This is the standard Google Bot, not the one used in the Search Console, which is the inspection tool bot (they are different). The standard bot does not render dynamic content (aside from server-side rendered content). The inspection tool bot can render dynamic content, but only via a manual request operation in the console.

This bot simulator thing does not render your page the way a Google bot does. It loads your URL while just sending a different user agent; again, it’s nothing like what the Google bot sees.

Not sure what could be wrong in your browser, but the data is clearly visible in the source code on both my Windows and Mac computers:

Also, I am not sure I understand what you mean here.
Google themselves explain that their crawler renders JS code perfectly fine, and it has been that way for many years now.
Google also tells you to use the Search Console tools to see how their own bot sees your pages.
There is no mention of a distinct “indexer bot” separate from the “standard Googlebot”.

I’m not going to argue, but it also does not display the meta tags, regardless of OS or browser.

Nothing is wrong with my browser; it displays every other website correctly. Node, PHP and ASP sites all render the meta correctly, so I can’t account for that.

Then why does Google index the site with unrendered JavaScript in many situations? We had this problem ourselves: results snippets were exposing unrendered content. I’m not making this stuff up; it’s just our own experience, which I am sharing. After employing an engineer to review the situation, issues were highlighted which we resolved, and Google now indexes our content correctly.

Example of unrendered snippets in Google results (prior to implementing suggested fixes):


I’m not the enemy here, just sharing what I see and observe and our own experiences, which don’t just arise out of thin air…


Hi @TMR, my comment is maybe not about why your SEO rank suddenly disappeared, but it should help with proper UX and maybe better indexing by search crawlers; I hope you don’t mind.
I don’t know if it is a client requirement, but using the Google Maps API to render multiple live maps on the front page doesn’t look like the best approach. Instead of calling each map with its coordinates, I suggest you capture the image URL from the same API, store the image locally on the server (assuming that each Rope Access Job is added in a backend), and replace all of those live maps with static images. Only on each job’s detail page would you still use the live map.
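
Something like this might work as a starting point. It’s only a sketch using the Google Maps Static API; the file paths, image size and job fields are assumptions to be adapted to however the jobs are stored in the backend:

```js
// Sketch: cache one static map image per job instead of embedding a live map.
// Assumes Node 18+ (global fetch) and an API key kept server-side.
const fs = require('node:fs/promises');
const path = require('node:path');

const KEY = process.env.GOOGLE_MAPS_API_KEY;

async function cacheStaticMap(jobId, lat, lng) {
  const file = path.join(__dirname, 'public', 'maps', `${jobId}.png`);

  // Only call the API the first time; afterwards serve the local file.
  try {
    await fs.access(file);
  } catch {
    const url = 'https://maps.googleapis.com/maps/api/staticmap' +
      `?center=${lat},${lng}&zoom=12&size=400x300&markers=${lat},${lng}&key=${KEY}`;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`Static map request failed: ${res.status}`);
    await fs.mkdir(path.dirname(file), { recursive: true });
    await fs.writeFile(file, Buffer.from(await res.arrayBuffer()));
  }
  return `/maps/${jobId}.png`; // relative URL to use in an <img> tag
}
```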


I have had exactly the same issue, not only with Google but also with social media shares.

I agree entirely, thank you for that. There’s definitely a better way to do that; I’ll take a look and see how I can improve this.

If I’m not mistaken, Google’s crawl frequency for JavaScript-rendered content is lower than for a standard crawl.

It’s a dangerous mistake assuming JavaScript links will be rendered, but that’s the way Wappler was designed.

The solution is either to generate a sitemap or, preferably, to render the links server-side using server-side bindings.
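
For the sitemap option, a minimal sketch in an Express app could look like the following; getJobs() is a hypothetical data-access helper standing in for however the job records are actually queried:

```js
// Sketch: serve a dynamically generated sitemap.xml listing every job page.
const express = require('express');
const app = express();

app.get('/sitemap.xml', async (req, res) => {
  const jobs = await getJobs(); // hypothetical helper returning { id, updatedAt } records

  const urls = jobs.map(job => `
  <url>
    <loc>https://www.rigg-access.com/jobs/${job.id}</loc>
    <lastmod>${job.updatedAt.toISOString().slice(0, 10)}</lastmod>
  </url>`).join('');

  res.type('application/xml').send(
    `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}
</urlset>`
  );
});

app.listen(3000);
```

Once it’s live, submit the sitemap in Search Console and reference it from robots.txt with a `Sitemap: https://www.rigg-access.com/sitemap.xml` line.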


With sites that have thousands of dynamically generated pages (business directories, blogs, stores, etc.) I believe there is definitely an issue with indexing and rendering. If the pages are static with dynamic content they render a little better, but if they are defined (requested) with a specific parameter in the request and share the same page name, the problem arises…? I don’t know, I’m just a dumb developer!


@Cheese I agree fully, there are definitely more issues with fully dynamic pages being rendered/indexed with Wappler.
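
One way around the shared-page-name problem is to render the page-specific head content on the server per request parameter, so the crawler gets complete HTML up front. A minimal sketch, assuming Express with a server-side template engine and a hypothetical getJob() helper and ‘job-detail’ template:

```js
// Sketch: server-side render the title/meta description per job id.
const express = require('express');
const app = express();
app.set('view engine', 'ejs'); // any server-side template engine works here

app.get('/jobs/:id', async (req, res, next) => {
  const job = await getJob(req.params.id); // hypothetical data-access helper
  if (!job) return next();                 // fall through to the 404 handler

  // These values end up in the template's <title> and <meta name="description">.
  res.render('job-detail', {
    title: `${job.title} - Rope Access Jobs`,
    metaDescription: job.summary,
    job
  });
});

app.listen(3000);
```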

For my particular case I think this would be a potential remedy:

Had a discussion about that, @TMR:


It’s not a Wappler thing; it’s how front-end JS frameworks work on the modern internet.
Google has no issues with JavaScript and rendering it, but it does have issues with code errors, not following best practices, etc. It’s all explained in the Google SEO documentation.

Useful links for you guys:

as well as: