Dev.to is a great site with a great community. It has grown rapidly from its beginnings in 2016, amassing over 5.2 million backlinks from almost 65 thousand domains and cementing itself as one of the most authoritative coding community sites on the net.
But why has its organic traffic stalled, and does it matter?
- Has Traffic Plateaued?
- Why Is This Important?
- So what’s going on?
- 1 – Dev.to & The Bloated Index
- 2 – Lonely Pages & Limited Internal Interlinking
- 3 – Tag Pages Need Lovin' Too
- 4 – Spam Abuse & Content Moderation
- 5 – Smaller Considerations That Don't Deserve A Number
- 6 – Curated Content To Highlight The Community Posts
- Notes, Exceptions & Caveats
Notes, Exceptions and Caveats: If you find this beneficial and have an open-source or commercial project you want me to take a look at, connect with me on LinkedIn. I've moved notes and asides to the bottom; they're there for the curious and interested, and linked via the numbers.
Dev.to is first and foremost a community site built with UX/UI in mind. Ben Halpern, Jess Lee, Peter Frank and the Forem team have done a fantastic job at building a safe platform where developers are encouraged to share their knowledge and discuss each other’s content respectfully.
As a corporate enterprise, they must balance a host of aspects such as moderation, technical maintenance, features vs ease of management and the commercial side of the business too.
To be clear, I am an SEO guy and am only looking at the Dev.to site in this article. I do not have the wider considerations of the open-source platform and its commercial aspects in mind. For example, a change to dev.to for more visibility may not fit with the Forem commercial partners.
I am only looking at organic traffic and have no inside baseball as to the direct or social traffic. Ahrefs traffic estimates aren't great, but in general they do a good job of showing the trend, and it does look like organic took a big hit during the January '22 rollback of the December '21 update.
Organic growth looks steady, but it's not on the explosive path it started on.
I would say it’s very important.
I know that Dev.to is an open-source project and that its main focus is the community. But an essential part of driving the community is page views, interaction and discussion: the dopamine hit that encourages people to set aside time and share their endeavors. It's the ability to reach others with similar interests and share ideas.
For Dev.to it's also a force multiplier. Added visibility in the search engine results pages (SERPs) encourages more people to join and get involved.
Increased visibility puts Dev.to on the map (even more) for its corporate partners and advertises the platform to new parties, all without a major marketing drive.
The site is making good progress and has accrued a good number of quality links in recent months.
But there are some SEO fundamentals that are holding the site back.
Fixing the first two and a half of these issues should bring a great improvement and put the site back on its original trajectory.
Okay get on with it
Problem: Dev.to has far too many low-quality pages available for Google to crawl. These consist mainly of comment pages that contain little content, and those same comments create a duplicate content issue as they appear on the article page too.
Google's spiders are constantly roaming the net, visiting pages and deciding whether to index a page's content or not. Each time Googlebot visits a site, there is a cost associated with that.
Now think of the exponential growth of content across the web and think of a bunch of engineers at Google, looking at the problem of resources and costs.
Google has long encouraged sites to have a slim and trim sitemap, with multiple statements about thin content, low-quality doorway pages and the need to noindex pages that aren't useful to the public.
So the idea here is pretty simple: be Google's friend, don't waste their resources, and do your bit for the planet too.
Okay, so how bad is the problem?
Welllll, it’s pretty bad.
Let’s take a quick look at Dev.to in the Google Index using Google dorks/advanced operators.
The two operators we'll be using are `site:` and `inurl:`.
What’s in the index?
Using `site:dev.to` we can show the number of results in the index.
This number is unreliable as mentioned in the notes, but it’s useful nonetheless.
A second query shows the number of tag pages in the index. It's important to be aware of how many there are, and I'll come back to this later.
But here is the kicker. Using the operator below, we are asking Google to show us results from dev.to that include the word 'comment' in the URL structure.
Now the first 50 results are articles where the URL contains the word "comment", but all the other results are user comment pages.
Google estimates that there are 961,000 of these pages in its index.
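Spelled out, the query you can paste into Google yourself is:

```
site:dev.to inurl:comment
```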
I’ve randomly picked a comment page out of the SERPS to give you an idea of what a low quality page that Google doesn’t like looks like.
This page has about 400 words, no real topic, doesn't answer a question and holds no value without the article it accompanies.
So if it's useless to the user (coming at it from the search results page) and Google can't understand it, why is it in the index taking up space?
Which brings us neatly to the next issue these pages create.
All the comments on these discussion pages mirror the comments left under the articles.
Duplicate content has been a contentious issue in the SEO world for years and, without rehashing the arguments, there are many (myself included) who believe it is disregarded or attributed to a single page and then ignored.
Perhaps these discussion pages are a great resource for account holders to view all their discussions in one place, but there is no reason for them to be accessible via the search engines.
The problem is easy to remedy: add a noindex tag to these pages, or block them via the robots.txt file.
```
User-agent: *
Disallow: /*/actions_panel*
Disallow: /users/auth/twitter*
Disallow: /users/auth/github*
Disallow: /report-abuse?url=*
Disallow: /connect/@*
Disallow: /search?q=*
Disallow: /search/?q=*
Disallow: /search/feed_content?*
Disallow: /listings*?q=*
Disallow: /mod/*
Disallow: /mod?*
Disallow: /admin/*
Disallow: /reactions?*
Disallow: /async_info/base_data

Sitemap: https://dev.to/sitemap-index.xml
```
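As a sketch, the noindex route would mean rendering a robots meta tag in the head of every comment page template. The markup below is illustrative, not Forem's actual template:

```html
<!-- Rendered in the <head> of each user comment page -->
<meta name="robots" content="noindex">
```

One caveat: Googlebot can only see a noindex tag on a page it is allowed to crawl, so those same paths shouldn't simultaneously be disallowed in robots.txt, or the tag never gets read and the URLs can linger in the index.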
Would there be an immediate positive move in the SERPS?
It's difficult to say how long it would take for these pages to wash out of the index and to see the rewards. It depends on the crawl budget, how proactive the team are at getting the site recrawled and other factors. The Forem team might even have to wait for the next update (usually November or December) to see the benefit.
Problem: Google rewards experts in a subject matter. Dev.to currently has an enormous amount of high-quality content around different technologies, but the interlinking is extremely limited and inconsistent (the links generated to other articles are random per user/session).
I can hear people saying, “What does that mean?”.
Google rewards subject matter experts. When Google understands that a site or site section is about a singular topic and covers it to exhaustion, Google rewards that site with better positions and more traffic.
Let me take a mundane example from the SERPs to explain.
My choice of keyword is about as boring as you can get. The image above is taken from Ahrefs for the search term “best dehumidifier” using a UK proxy. I’ve highlighted the little affiliate site choosedehumidifier.co.uk in red. In green you can see how authoritative the sites are.
That site is ranking alongside huge national newspapers because all its content is about a single topic, and all the pages are interlinked in a coherent, hierarchical way.
So what’s the situation with Dev.to?
Good question. Authors have the option to cross-link other relevant pages on Dev.to, but there is no reminder (visual cue) and no incentive to do this.
The team have done a good job at prompting users to add “Alt text” to images as shown below.
Currently, cross-linking is reliant on the “Read Next” box/module at the end of each article.
And this has a serious issue too!
The "Read Next" module (probably) uses the tags and keywords of users to generate links to recent articles. Refresh the page in the same browser and, as if by magic, you'll get the same suggestions. Check the page in a private browser (with a fresh session) and you'll get a new set of suggestions.
All this is perfect for the user, but what about Google’s crawler – Googlebot?
As it stands, every time Googlebot crawls a page it sees a different set of links to somewhat related topics; the links change on every visit.
How can Google understand these silos of inter-related content without permanent links telling it that these pages are related and share a similar topic?
The site gains no bonus for all the specialised knowledge within each of the tags or topics.
Incentivise users to cross-link relevant articles on the site, or other good resources.
Visually nudge them with a reminder. The team have done it with image alt tags, so why not prompt the user to reward other good content on the site with a link?
Well, that’s all well and good pal, but what about the million great pieces already on the site?
Yeah, that is a conundrum. But there are a couple of ways you could tackle that.
I'm not going to go into them here as it would add a couple of thousand words and I'm already tired, but basically you are looking at categorising and sub-categorising content into topic clusters and then adding permanent links in a second module called related content.
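As a rough sketch of the idea, a related-content module could rank candidate articles by the number of tags they share with the current one and break ties deterministically, so Googlebot sees the same permanent links on every crawl. The article slugs and schema below are invented for illustration; the real Forem data model will differ:

```python
def related_articles(articles, target, k=3):
    """Rank other articles by the number of tags shared with `target`.

    Ties are broken alphabetically by slug, so the output is stable:
    Googlebot sees the same related links on every visit.
    """
    target_tags = set(target["tags"])
    candidates = [a for a in articles if a["slug"] != target["slug"]]
    candidates.sort(key=lambda a: (-len(target_tags & set(a["tags"])), a["slug"]))
    return [a["slug"] for a in candidates[:k]]

# Hypothetical sample data.
articles = [
    {"slug": "intro-to-react", "tags": ["javascript", "react", "webdev"]},
    {"slug": "vue-basics", "tags": ["javascript", "vue", "webdev"]},
    {"slug": "rust-ownership", "tags": ["rust", "beginners"]},
    {"slug": "react-hooks", "tags": ["javascript", "react"]},
]

print(related_articles(articles, articles[0], k=2))
# → ['react-hooks', 'vue-basics']
```

The point isn't this exact scoring; any deterministic selection would do. What matters is that the links stay put between crawls so Google can learn the cluster structure.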
After fixing this, how long would it take to see a benefit in search engine visibility?
Applying good interlinking is pretty fundamental to helping Google understand content and its relationship to other content. There should be an almost immediate improvement in rankings.
Problem: The tag pages have been built for UX purposes without consideration of how Google views them. They are great for getting new content crawled, but due to the lack of static content Google doesn't understand the context of these hub pages and doesn't rank them beyond "tag title + community".
Outside of the dynamic content, there are only about 250 words that are there consistently (each time Googlebot comes to visit) to give an indication as to the purpose of the page.
Contextual cross-linking of sub-topic hub pages would also be of great benefit.
So you might have an expandable box at the top of the page instead of the highlighted text, with an article linking to the front-end frameworks, Node, TypeScript and related tag pages.
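One way to sketch that expandable box is the native HTML details element, which keeps the copy crawlable while staying visually out of the way. The text and tag links here are placeholders, not a proposed final design:

```html
<details>
  <summary>What is React? Guides, tutorials and discussion</summary>
  <p>
    React is a JavaScript library for building user interfaces.
    Browse the community's best posts below, or explore the related
    <a href="/t/javascript">JavaScript</a>,
    <a href="/t/node">Node</a> and
    <a href="/t/typescript">TypeScript</a> tag pages.
  </p>
</details>
```

Google indexes content inside collapsed details elements, so the static copy and the cross-links both count even when the box is closed by default.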
These pages are the most linked pages internally (via tags) and could be huge drivers of organic traffic with a little love.
The example tag page bounces between page 6 and page 3 of the results. But you are right, there must be more to it.
And there is.
Every article links to its tag pages via a tag just below the page title. So all that link equity from related pages should place it better than page 6 or 3.
When we link a page in html it’s pretty simple stuff.
<a href="url">link text</a>
The URL is the target page and the link text is the anchor. The anchor gives Google context; it tells Google what the target page is about, and this is one of the primary ranking factors.
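To make that concrete with a hypothetical tag link (not Forem's actual markup), compare a bare anchor with a descriptive one:

```html
<!-- Tells Google almost nothing about the target page -->
<a href="/t/react">#react</a>

<!-- Gives Google (and screen readers) context for the tag page -->
<a href="/t/react">React tutorials, guides and discussion</a>
```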
**So what's the issue?**
The tag links on article pages carry little descriptive anchor text, so Google gets next to no context from the thousands of internal links pointing at each tag page. Let's not forget that anchors are also used for accessibility by those who are sight impaired.
**If Dev.to changed how the tags are coded, would they see a jump in SERP position for these tag pages?**
Yes, it would be a change from Google's point of view, as you'd be giving context to thousands of links. I would expect rankings to jump around for a few weeks before settling into a higher position.
**What about adding a hidden article in a dropdown box at the top of the tag pages, what would be the benefit of that?**
By adding more static content to the page and cross-linking these tag pages, Google should start to rank the page for a whole host (several hundred) of related keywords.
There are a lot of shady tactics in SEO. Some work for a while, then fall out of fashion as the search engines address the issue and sometimes, inexplicably start working again years later.
But one tactic that hasn’t really gone away is webspam to a tier 1 property.
You are speaking gibberish again!
In the image below you can see a spam profile page that links to a Malaysian Lotto page.
The idea is to build low-quality links to this spam profile page and "wash" the link equity through the authoritative domain, in this case Dev.to.
In the image below, I’ve pulled some of the top linked pages as seen by Ahrefs.
Starting from the bottom in purple, we have the Lotto site, which has built 1,106 do-follow links and 623 no-follow links from 390 different domains.
I’ll not go through the others, and the issue is not as widespread as it is on other platforms.
Ideally, the solution would be to delete these accounts, serve a 410 (Gone) status for the dead URLs, and add the toxic links to a disavow file. There are ongoing arguments as to whether the disavow is necessary though, as Google is pretty good at ignoring spam links.
This tactic still works in some non-English geos.
There are a number of smaller areas that could be considered when looking to boost the site visibility on Google and the other search engines. I’ll briefly touch on those here.
Due to an overzealous rate limit by the dev.to server on my scans, I don't have hard figures on the number of articles in languages other than English on the site. It may not amount to much as a percentage, but as a driver of future growth, dev.to could make a small change to help Google find these.
No hard figures, but anecdotally I have come across quite a bit of Spanish and Portuguese on the site, with the Ahrefs traffic estimates showing about 5% from Brazil.
One way to help the search engines digest content on a multilingual site is to explicitly tell them the language and/or region being targeted.
To my knowledge there is no way to add a hreflang tag to a Dev.to article in the markup, and the html lang attribute is set to English as the sitewide default.
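For reference, hreflang annotations are just link elements in the head. A Portuguese article with an English counterpart might carry something like this (the URLs are made up for illustration):

```html
<link rel="alternate" hreflang="en" href="https://dev.to/example/securing-node" />
<link rel="alternate" hreflang="pt-BR" href="https://dev.to/example/protegendo-node" />
```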
*Would this be difficult to implement? Would it be worth it?*
This is one of those questions that I can’t answer, every year the internet is a more diverse place with more and more users coming online across the globe. Perhaps different languages aren’t a priority for the Forem team, or maybe they have a different solution. Smart people make smart decisions.
I think this article is already long enough and I still have one other thing to cover.
Dev.to has some great stuff written each week by the community. It has a points system that allows users to highlight great articles to the admins.
So if everything is great, what’s your point?
These posts are put together for an internal audience. If you want to boost organic traffic you need to cater to an audience coming from search engines too.
I follow you, but how would Dev.to do that?
It’s pretty simple really. You take a look at the great user generated content and create roundup posts around that tailored to search traffic.
Let me give you an example or two.
SVG & CSS Animation Tutorials [56 Cool Guides]
And you build an article that categorises and introduces the best dev.to has to offer on this topic.
Dev.to articles are often hyper focused on a problem or topic, but rounding up related articles into more generalised categories and serving them to Google would be targeting more exploratory search terms with much higher search volume.
Time for one more example.
This title is going after the search term “secure node.js” and “harden node server”
I think you get the idea.
This is pretty time intensive and would require a lot of curation, how long would it take to see the benefits?
Yeah no pain, no gain. It would take time and some knowledge to curate the best content on dev.to and produce these types of round up posts. But you are broadening the types of search queries that dev.to ranks for, highlighting the great efforts of the community and creating a positive feedback loop of more fresh eyes on the site and content.
Okay, that's it. If you read all the way to here, pat yourself on the back. If you understood everything, give yourself a round of applause; SEO lingo can be difficult to get your head around.
Any corrections, comments or complaints, reach out to M Foley on LinkedIn. And if you have a web-facing open-source project, a commercial project or a question, please reach out. I'm always working.
Dev.to server rate-limiting: There were a number of technical SEO checks I wanted to run through, but the server kept throwing a 429 error every now and then. Things that would have been interesting to know include:
- Number of articles under 400 words (thin content)
- Number of articles in different languages
- Max crawl depth (maximum number of hops to a piece of content)
- Number of canonicalised pages
- Average number of internal cross-links per article (not in the "Read Next" module)
- How series posts perform in the SERPs vs regular ole posts
Ahrefs Traffic Estimates: It should be noted that all traffic estimate sites produce wildly inaccurate data at the best of times. I’d hazard to say that this is particularly true for dev.to due to the nature of the topic. I’d also say that a lot of organic traffic comes from highly specific long tail search queries which will never turn up on these tools.
Advanced Google Operators: The site: operator has become less and less useful over the years, and without access to the Search Console data the figures are rough, but they do highlight the issue and so I've included them.
Different Languages: As a side note, cross-linking topically relevant articles within the same language would probably be of more benefit than adding the hreflang tags. That, of course, would be its own headache to implement.