Reviews Reports Overview
This note maps the review reports in datasets/reviews-search/output back to the original prompts that asked for them.
It now reflects the rebuilt 40,506-review corpus and the current refreshed report set, rather than the earlier smaller run.
1. Access Issues In The Review Corpus
Prompt
good work. Using our new tool, tell me how many reviews as a percentage of all reviews mention main access-related issues, and the most common access issues raised. Output to a new md file as an example of what this tool can do, including examples by practice and issues, not a data structure as such, but a report on meaning, that tries to use real examples and their prevalance to evaluate how big of an issue access is, with enough context to understand the problems patients face. Use a reading ease level about the same as the source reviews, avoid jargon except where reviewers use it.
Current file
access-issues-report.md
Current source basis
- ad hoc fulltext/index queries
- later extended with an exclusion section
Current discoveries
- The refreshed report finds 18,321 reviews, 45.2% of all reviews, mentioning a main access route or access-linked follow-through issue. 7,033 reviews, 17.4%, use stronger complaint-shaped access language. 5,080 low-star reviews, 37.3% of all low-star reviews, sit inside that stricter access basket.
- Phone access, appointment scarcity, reception barriers, digital front-door failures, weak follow-up, and exclusion after failed access remain the core recurring issues.
Access Issues In The Review Corpus
This note is a fresh read of the expanded local fulltext index over 40,506 Google reviews.
I treated access in two ways:
- a broad mention pass that catches reviews talking about appointments, phones, reception, online routes, callbacks, referrals, results, or prescriptions
- a stricter complaint-shaped pass that looks for more direct access-failure language rather than every passing mention
That matters because the bigger corpus contains many more positive reviews that still mention access when it works.
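The two passes can be sketched as a pair of keyword filters over the raw review text. This is a minimal illustration only; the term lists and patterns below are hypothetical stand-ins for the real fulltext queries.

```python
import re

# Hypothetical term lists; the real passes ran ad hoc fulltext/index queries.
BROAD_TERMS = ["appointment", "phone", "reception", "online", "callback",
               "referral", "result", "prescription"]
COMPLAINT_PATTERNS = [
    r"can'?t get (an |a )?appointment",
    r"no appointments?",
    r"(never|couldn'?t|can'?t) get through",
    r"on hold for",
    r"form (is|was|has)n'?t working",
]

def broad_mention(text: str) -> bool:
    """Broad pass: any passing mention of an access route."""
    lower = text.lower()
    return any(term in lower for term in BROAD_TERMS)

def complaint_shaped(text: str) -> bool:
    """Stricter pass: direct access-failure language only."""
    lower = text.lower()
    return any(re.search(p, lower) for p in COMPLAINT_PATTERNS)

reviews = [
    "Lovely staff, got an appointment the same day.",
    "Rang the phone line 40 times and still can't get through.",
]
for r in reviews:
    print(broad_mention(r), complaint_shaped(r))
```

The first review shows why two passes are needed: it mentions an access route (broad pass hits) but is not a complaint (strict pass does not).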
Headline
Access is still one of the biggest things patients talk about.
In the expanded corpus:
- 18,321 reviews, 45.2% of all reviews, mention a main access route or access-linked follow-through issue at all
- 7,033 reviews, 17.4% of all reviews, use stronger complaint-shaped access language
- 5,080 reviews, 12.5% of all reviews, are 1 or 2 star reviews inside that stricter access basket

That means 37.3% of all low-star reviews now read as clear access complaints.
So the bigger dataset changes the proportions, but not the conclusion. Access is still one of the main ways patients describe a practice failing them.
What Patients Mean By Access
The larger corpus makes one thing clearer than before. Patients do not mainly use "access" to mean a single blocked appointment. They describe a chain:
- the appointment is hard to get
- the phone route is slow or unusable
- reception can make the problem worse
- the online route can close, fail, or confuse
- even after first contact, the next step may still not happen
That is why the issue buckets overlap so much. Many reviews are about a route through the practice that breaks in more than one place.
The Main Pressure Points
Appointment shortage and delay
This is still the clearest access complaint in the corpus.
The stricter pass found 2,388 reviews in this category, including 1,763 low-star reviews.
The familiar pattern is still there:
- ring at 8am
- wait or keep trying
- finally get through
- find there is nothing left
- get sent elsewhere instead
Recent examples:
"How to get an appointment is unclear."
Natalie Fernandez,Ribblesdale Medical Practice,2 days ago"I tried to book a GP appointment today for an issue that's lasted over 3 weeks ... They messaged to go to the pharmacy instead."
Maud Boyron,Northenden Group Practice,3 weeks ago
The old reviews show this has been going on for years. The newer reviews show it has not gone away.
Phone access failure
The stricter pass found 1,741 reviews here, including 1,413 low-star reviews.
This remains one of the most recognisable access stories in the whole dataset. People talk about:
- long holds
- calls ringing out
- queues that barely move
- trying again and again through the day
Recent examples:
"Every time you ring them your on hold for nearly hour."
maureen astall,Northenden Group Practice,2 weeks ago"I have called repeatedly throughout the day, and the call either rings out or ends automatically."
Natasha Rizwan,Werneth Medical Practice,Edited 2 weeks ago
The larger corpus strengthens this rather than diluting it. Phone access still looks like one of the main places where a patient first collides with the system.
Reception as part of the access problem
The stricter pass found 1,293 reviews in this category, including 1,202 low-star reviews.
Patients do complain about rudeness, but the access issue is not just tone. In these reviews, reception often sounds like the point where:
- the patient is redirected
- the request is not taken seriously
- the rules are enforced without explanation
- the patient feels brushed off before care even starts
Recent examples:
"Some of the receptionist staff are extremely rude"
Jesika Portelekyova,The Oaks Family Practice,3 weeks ago"The form was not working but the they did not care one bit."
Sam,Beech House Medical Pract,a month ago
From the patient side, reception is often experienced as part of access itself, not just as customer service around the edges.
Follow-through after first contact
This is now the biggest category in the strict basket by raw count: 2,756 reviews, including 1,723 low-star reviews.
That matters because it shows access does not end when the patient finally gets through.
The reviews here talk about:
- missing callbacks
- results not coming back
- referrals not being sent
- prescriptions being delayed or blocked
- patients having to chase the same thing again and again
Recent examples:
"I needed urgent repeat heart, stomach meds and high dose of anti-depressants on repeat ... I was refused"
XXX NO-ONE,Knowsley Medical Centre,2 weeks ago"My mum came out of hospital a week ago ... A week later still unsuccessful in person, on the phone"
Paul Seville,Brooklands Medical Practice,a week ago
This is one of the clearest gains from rereading the fuller corpus. The route into care and the route after contact are tightly linked in patient language.
The digital front door
The strict access pass only catches the sharper digital failures. On that narrow basis it found 118 reviews, including 110 low-star reviews.
That does not mean digital access is small. It means the stricter access basket is only catching the sharp end of it. The separate digital reports pick up a much bigger online-access layer once more generic website, app, and form wording is included.
For access specifically, the complaints here are:
- the form is down
- the website route is unclear
- the online path is time-windowed or closed
- patients are pushed to the form but the form does not actually work
Examples:
"The practice require an online form to be completed however, the form has not been working for over a week now."
Clare Wiener,The Royton & Crompton Family Practice,a year ago"The form was not working but the they did not care one bit."
Sam,Beech House Medical Pract,a month ago"the online service doesn't work"
Estefa Lemo,Stockport Medical Group,5 months ago
The older reviews show the rise of these routes. The newer reviews show that online access is now part of the normal front door, not a temporary extra.
When Access Turns Into Exclusion
The exclusion layer is still smaller than the main access buckets, but it is real enough to matter.
The stricter pass found 281 reviews in this area, including 210 low-star reviews.
Some useful markers from the bigger corpus:
- 35 reviews mention catchment
- 326 reviews use register, change, or de-register wording of this kind
- 84 reviews use "got worse" type follow-up language
- only 3 reviews turned up clear "tried again and it was fine" type follow-up wording
This is the part of the corpus where patients stop sounding merely frustrated and start sounding trapped, worn down, or done with the practice.
Recent examples:
"I have now registered at another surgery due to the very poor and unprofessional attitude"
Andrew James,Padgate Medical Centre,Edited 3 days ago"I strongly advise de-registering"
Erandi Samaraweera,Alkrington Junction Practice,2 months ago"we are de registered so quickly from this surgery"
Aziz Ibraheem,Bolton Medical Centre,3 weeks ago
The wider corpus still does not show many neat recovery stories after failure. It shows much more evidence of patients trying to escape, being pushed out, or sounding resigned.
What The Bigger Corpus Changes
The expanded dataset does not overturn the earlier reading. It sharpens it.
Three things stand out more clearly now:
- Access is still huge, but it is better described with two numbers, not one. Broad mention: 45.2% of all reviews. Stricter complaint-shaped access: 12.5% of all reviews and 37.3% of low-star reviews.
- Follow-through problems matter more than they first looked. In the review text, access often includes whether the callback, referral, result, or prescription actually happens.
- Recent reviews keep showing the same pressure points.
Bottom Line
Access is still one of the strongest themes in the whole review corpus.
The bigger dataset makes it clearer that patients do not mean one thing by access. They mean:
- getting an appointment at all
- surviving the phone queue
- getting past reception without being brushed off
- navigating a digital front door that may or may not work
- and then getting the callback, result, referral, prescription, or next step they were expecting
And for a smaller but important group, they mean something harsher than frustration: they mean feeling shut out, stuck, or driven to leave.
2. What The Review Corpus Shows
Prompt
great work. Write another report, aside from access issues, how do our reviews look across the board? gimme like a full professional analysis of what's in our dataset, good and bad, using our new tools to explore. Think of this as the overview of exactly what patients complain about, and how they talk/write about it. Like set out what we're going to need to know to begin representing the needs of these patients more clearly than the patient survey usually can (you might even want to look at our google reviews datatset from the perspective of the real questions from the national patient survey, which does have some pretty narrow interests, so seeing where the real google reviews diverge from the patient survey, and how they look at problems in different ways) is useful.
Current file
reviews-corpus-overview-report.md
Current source basis
- ad hoc fulltext/index queries
- manual bucket scans for positive and negative themes
Current discoveries
- The corpus remains sharply polarised: 30.9% of reviews are 1 star and 58.2% are 5 star.
- Access still dominates complaints, but staff attitude, follow-through, digital front-door experience, and clinical trust all stand out strongly.
- The refreshed overview now also pulls in the newer digital and activism layers: 2,586 digitally signalled reviews and 1,625 activism-adjacent or public-warning reviews.
- The report argues that reviews show route, sequence, emotional cost, public warning, and exclusion more clearly than the national patient survey.
What The Review Corpus Shows
This is a fresh read across the whole indexed Google review corpus, not just the access slice.
The corpus now contains 40,506 reviews across 404 practices. It is a local fulltext read of review text, not a survey and not NLP. That matters because Google reviews are messy, self-selecting, and emotionally uneven. But that is also why they are useful. They show where care breaks badly enough, or works well enough, that people feel pushed to say so in public.
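A local fulltext read of this kind can be as simple as an SQLite FTS5 table queried with MATCH. The sketch below is illustrative only: the table name, columns, and sample rows are assumptions, not the real schema.

```python
import sqlite3

# Minimal in-memory stand-in for a local fulltext index.
# Table and column names here are illustrative, not the real schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE reviews USING fts5(practice, stars, body)")
con.executemany(
    "INSERT INTO reviews VALUES (?, ?, ?)",
    [
        ("Example Practice A", "1", "Impossible to get through on the phone."),
        ("Example Practice B", "5", "Reception were friendly and helpful."),
    ],
)

# FTS5 MATCH finds reviews using any phone-access wording.
rows = con.execute(
    "SELECT practice FROM reviews WHERE body MATCH 'phone OR \"get through\"'"
).fetchall()
print(rows)  # [('Example Practice A',)]
```

The point is that every count in this note is the result of queries like this one, run over review text, so the buckets are only as good as the wording they search for.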
The Big Shape
The first thing the larger corpus shows is still the same thing the smaller one showed: the middle is tiny.
- 12,513 reviews, 30.9%, are 1 star
- 1,102 reviews, 2.7%, are 2 stars
- 910 reviews, 2.2%, are 3 stars
- 2,405 reviews, 5.9%, are 4 stars
- 23,576 reviews, 58.2%, are 5 stars
So patients still do not mostly write like survey respondents. They write when they feel sharply let down, or when they feel someone really helped them.
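The distribution above is a single counting pass over the star ratings. A small sketch, assuming ratings are available as plain integers:

```python
from collections import Counter

def star_distribution(stars):
    """Count each star band and express it as a share of all reviews."""
    counts = Counter(stars)
    total = len(stars)
    return {band: (counts[band], round(100 * counts[band] / total, 1))
            for band in sorted(counts)}

# Toy ratings; the real corpus has 40,506 of these.
ratings = [1] * 3 + [5] * 6 + [3]
print(star_distribution(ratings))  # {1: (3, 30.0), 3: (1, 10.0), 5: (6, 60.0)}
```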
That split is visible at practice level too:
- 388 of 404 practices have both low-star and high-star reviews
- only 5 have low-star reviews without any high-star reviews
- only 11 have high-star reviews without any low-star reviews
That means the live picture is rarely just "good practice" or "bad practice". Most places look mixed and uneven from the patient side.
The GTD-managed slice is still much harsher than the wider field. In that subset, 627 of 830 reviews, 75.5%, are 1 or 2 stars, while only 185, 22.3%, are 4 or 5 stars.
What Patients Mostly Complain About
Access is still the biggest single story, but the wider corpus makes it clearer that it is only the front end of a longer complaint.
The refreshed access report found:
- 18,321 reviews, 45.2% of all reviews, mentioning a main access route or access-linked follow-through issue
- 7,033 reviews, 17.4%, using stronger complaint-shaped access language
- 5,080 low-star reviews, 12.5% of all reviews and 37.3% of all low-star reviews, sitting inside that stricter access basket
But once you step back from access on its own, three other negative themes keep rising to the surface.
Staff attitude and respect
Roughly-worded complaints about staff tone and treatment remain one of the biggest non-access themes.
A refreshed plain-language pass found 3,647 reviews using staff-attitude language of this kind, including 3,295 low-star reviews.
Patients are not only saying staff were rude. They are saying the rudeness matters because it comes at the point where they are already dependent on the service.
Examples:
"Incredibly rude and unhelpful."
Sarah Malone,Lostock Medical Centre,a year ago"Rude, unhelpful, ignorant and condescending receptionists"
Richard Seddon,Dalefield Surgery,5 months ago
This is one of the clearest places where the reviews go beyond a survey tick-box. "Not helpful" in survey language becomes very concrete here.
Follow-through, admin reliability, and whether anything actually happens
The wider corpus also makes follow-through look more central than it first did.
A refreshed pass found 2,720 reviews mentioning referrals, results, prescriptions, callbacks, chasing, or no response, including 1,703 low-star reviews.
These reviews are often less dramatic in tone than the pure access complaints, but they can be just as damaging. The patient gets through one barrier and then the trail goes cold.
Examples:
"Doctor forgot to do a referral ... These failures has meant that I 'lost' 5 months"
Eileen Garland,Chorlton Family Practice,2 years ago"I have been trying to get a referral since February ... delay my referral for a few months."
R M,Olive Family Practice,2 years ago
The reviews make this feel like one joined-up problem: not knowing whether the practice will actually carry something through.
Clinical trust and safety
This is smaller than access or staff tone, but it is the point where the stakes become harder to dismiss as mere customer-service dissatisfaction.
The refreshed clinical-harm scan flagged 872 low-star reviews, 2.2% of all reviews and 6.4% of low-star reviews, with stronger clinical-failure language. Within that:
- 57 mention misdiagnosis or wrong diagnosis
- 34 mention wrong or unsafe medication
- 363 mention hospital or urgent escalation
- 190 mention severe outcome or condition terms
Examples:
"Misdiagnosed earlier in the year resulting in complications that required further treatment and discomfort"
Lachlan Pollock,The Alexandra Practice,2 years ago"Misdiagnosed for over TWO years because doctors refused face to face appointments and wouldn't listen."
Nicola Skinkis,St. Andrew's House Surgery,2 years ago
This is a major gap between the review corpus and the national survey. Reviews show a part of patient experience that the survey only reaches indirectly, if at all.
Digital access is now a clear corpus layer
The refreshed digital work makes one more thing clear: the online front door is no longer a side note.
The current digital pass finds 2,586 reviews, 6.4% of the whole corpus, with a recognisable website, app, online-form, or named-platform signal. Most of that is still generic website/form/app language rather than product naming, but it is now big enough to rank practices and compare patterns.
The more specific digital appointment pass finds 1,839 appointment-linked digital reviews across 291 practices:
- 831 mainly positive
- 902 mainly negative
- 106 mixed
That matters because the reviews are not only saying "digital exists". They are saying two very different things:
- when it works, it means same-day access, quick callbacks, and an easier route in
- when it fails, it means another blocked queue, another confusing handoff, or another reason patients feel shut out
What Patients Praise
The positive side of the corpus is just as clear, and it is useful because it shows what good care looks like in ordinary patient language.
Kind, listening, competent care
A refreshed positive pass found 11,880 reviews using language about helpful, caring, kind, listening, reassuring, professional, compassionate, or thorough care. 10,497 of those are high-star reviews.
When patients are happy, they often do not just say "good service". They say someone listened, believed them, explained things, reassured them, or sorted something properly.
Examples:
"Warren was really good he listened to me and believed me when I told him about my back."
Andrea Gregory,Manchester Integrative Medical Practice,4 months ago"Dr Moran was attentive and listened."
Yasmin Warsama,Manchester Integrative Medical Practice,a month ago
Friendly front desk staff still matter a lot
A second positive pass found 5,627 reviews using friendly, welcoming, lovely, or respectful front-desk language. 5,192 of those are high-star reviews.
That is the mirror image of the low-star reception problem. Reception is not a side issue in either direction. It is one of the main ways patients decide whether a practice feels human, usable, and safe.
Good access is noticed when it works
The bigger corpus also makes a useful positive point. Patients absolutely do notice access when it works well.
Examples:
"Managed to get same day appointment at 10:50 and even had blood test same day 11:30."
Sharon Wardle,Pennine Medical Centre,10 months ago"Using online form for appointment easy and obtained same day appointment"
alan ridge,Chorlton Family Practice,8 months ago"Dr. Singh was very helpful with my dermatologist referral"
R.,The Quays Practice,a week ago
That last kind of praise matters. Patients do not only thank warmth. They thank systems and people who actually get something done.
How Patients Write
The reviews do not read like survey responses. They read like people describing what happened to them.
Three features stand out more sharply in the bigger corpus.
They write in chains, not categories
Survey questions split problems into neat boxes: phone contact, website contact, reception helpfulness, preferred clinician, overall experience.
Reviews usually tell a sequence:
- could not get through
- finally got through
- was told nothing was left
- was pushed online
- got no reply
- was spoken to badly
- then had to chase a result, referral, or prescription
That chain is one of the biggest differences between the review corpus and the survey frame.
The language is plain, blunt, and often hard-edged
Patients usually do not soften much. They use direct words like:
- rude
- awful
- appalling
- unhelpful
- disgusting
- useless
That plainness matters. It tells you how people interpret the service, not just what formally happened.
Positive reviews are concrete too
Good reviews are often just as operational as bad ones. They say:
- I was listened to
- they got back to me quickly
- I got seen the same day
- reception were welcoming
- someone sorted the referral or prescription
So the corpus is useful for showing what patients want, not only what they hate.
Some reviews are written for other patients, not just the practice
The refreshed activism/community pass adds another layer that does not show up well in cleaner survey work.
1,625 reviews, 4.0% of the corpus, contain some form of public-warning, regulator-escalation, review-about-review, authority-positioning, or community-framing language.
Most of these are not organised campaigning. They are lone reviewers trying to:
- warn other patients away
- tell people to de-register or complain
- point to the review page as evidence that this is not an isolated problem
- appeal to regulators, MPs, or complaint bodies
That matters because some reviews are written as public signals, not just as private complaints made visible.
Where Reviews Go Beyond The National Patient Survey
The national GP patient survey does ask useful gateway questions. In the current survey set, that includes things like:
- how easy or difficult it is to contact the practice on the phone
- how easy or difficult it is to contact the practice using the website
- how easy or difficult it is to contact the practice using the NHS App
- how helpful the reception and administrative team are
- which online services people have used
- whether there is a preferred healthcare professional and how often patients get to see or speak to them
Those are useful questions. But the review corpus keeps adding things the survey cannot show well.
Reviews show the route, not just the rating
The survey tells you whether contact felt easy. Reviews show which route patients tried, how many times they tried it, where it broke, and what they were told next.
Reviews show emotional cost
The survey records difficulty or dissatisfaction. Reviews show anger, panic, humiliation, exhaustion, fear, and mistrust.
Reviews catch exclusion and drop-off
Survey responses usually come from people who got far enough through the system to answer questions about contact or care. Reviews also include people describing being blocked at the door, bounced between routes, or giving up.
Reviews join the stages together
The survey separates access, support, appointment quality, and overall experience. Reviews often describe them as one continuous failure.
Reviews reach clinical-risk territory
The survey touches listening, time, and confidence. The reviews go further into misdiagnosis, delayed referrals, unsafe medication, hospital escalation, and near-miss language.
Reviews also show what practices say back
The survey does not have a public-reply layer at all. The review corpus does.
The refreshed responses pass found 16,756 public practice responses in the corpus, with a sharp gap between reply rates to praise and criticism. That is useful because it lets the corpus show not only what patients say, but how practices publicly choose to answer, deflect, apologise, or stay silent.
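A reply-rate gap of that kind can be measured by splitting reviews into star bands and comparing the share that carry a public reply. A sketch under the assumption that each review is a (stars, has_reply) pair; the field shapes and sample data are illustrative.

```python
def reply_rate_by_band(reviews):
    """Share of reviews with a public practice reply, low-star vs high-star.

    Each review is a (stars, has_reply) pair; this shape is an assumption,
    not the real corpus schema.
    """
    rates = {}
    for label, band in (("low (1-2 star)", {1, 2}), ("high (4-5 star)", {4, 5})):
        subset = [has_reply for stars, has_reply in reviews if stars in band]
        rates[label] = round(100 * sum(subset) / len(subset), 1) if subset else None
    return rates

# Toy data chosen to show a gap, not the real 16,756-response figures.
sample = [(1, False), (1, False), (2, True), (5, True), (5, True), (4, False)]
print(reply_rate_by_band(sample))
# {'low (1-2 star)': 33.3, 'high (4-5 star)': 66.7}
```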
What This Corpus Helps Us Represent
If the point of using this corpus is to represent patient need more clearly than the survey usually can, the main lessons are now fairly plain.
Access is only the start
Access is still the biggest theme, but patients do not stop there. They also talk about respect, clarity, follow-through, prescriptions, referrals, and whether they trust what they were told.
Reception is not a side issue
In the reviews, reception is often the face of the practice. It is where patients feel helped, blocked, believed, doubted, respected, or dismissed.
Patients want kindness and competence together
The positive reviews are not only about warmth. They are about warmth plus practical help. The negative reviews are not only about delay. They are about delay plus confusion, plus poor treatment, plus no confidence that the next step will happen.
Mixed practices need closer reading
Because nearly every practice has both high and low reviews, the real question is often not "is this place good or bad?" It is:
- who gets through
- who gets stuck
- when does it work
- where does it break
- which failures keep repeating
Digital routes now need reading alongside phone and reception
The newer corpus makes this much clearer than before. Access is no longer just a phone-and-reception story. For many patients it is now phone plus website plus app plus form plus callback logic, all joined together.
That means the digital layer is no longer optional context. It is part of the patient route.
Bottom Line
Across the board, this bigger review corpus still says three main things.
First, patient experience is sharply polarised. Many people leave very happy. Many others leave very angry. Very few sit in the middle.
Second, access is the biggest theme, but not the only one. Staff attitude, weak follow-through, digital front-door experience, and clinical trust all keep returning in the review text.
Third, the reviews show patient need in a form the survey usually cannot: direct, event-based, operational, emotionally clear, and sometimes public-facing in the way patients warn each other or push for outside attention.
That is what makes them useful. They are not tidy. They are full of sequence, blame, gratitude, confusion, detail, and sometimes real fear. But that is exactly why they show things that cleaner instruments smooth away.
3. Older Reviews Versus Recent Reviews
Prompt
the next report, we want to explore is what is common in older reviews (since our corpus has quite a lot of entries approaching 10 years old) that never occur in recent/post-pandemic reviews, and vice-versa what's common now, but never used to be complained about in older reviews.
Current file
older-vs-recent-complaints-report.md
Current source basis
- ad hoc era-split analysis over indexed reviews
- low-star complaint-focused comparison
Current discoveries
- Core complaints like appointments, phones, rude reception, and being sent round in circles did not disappear.
- The stronger change is on the recent side: digital front-door complaints, online triage, web forms, named systems, and callback-driven access become much more visible after the pandemic shift.
- The refreshed comparison uses 1,397 older low-star reviews (2016-2019) against 10,417 recent low-star reviews (2022-2026).
Older Reviews Versus Recent Reviews
This note asks a simple question:
What looks genuinely older in the complaint corpus, and what looks more recent?
I used the rebuilt review index and split low-star reviews into two complaint-heavy windows:
- older: estimated 2016 to 2019
- recent: estimated 2022 to 2026
I left out 2020 and 2021 as transition years.
That gives:
- 1,397 older 1 and 2 star reviews
- 10,417 recent 1 and 2 star reviews
The dates are still estimates from Google's relative timestamps, not exact publication dates. But they are good enough for a pre-pandemic versus post-pandemic language split.
Headline
The strongest result is still continuity plus layering.
Most of the familiar complaints in the older reviews are still here in the recent ones:
- the 8am rush
- not getting through on the phone
- no appointments left
- rude or obstructive reception
What changes more clearly is the route through which patients experience those failures.
The recent reviews add a much thicker layer of complaint language about:
- online forms
- named digital systems like AskMyGP, PATCHS, eConsult, Accurx, and the NHS App
- triage as a gate
- telephone-first or callback-first handling
- difficulty getting face-to-face care as such
So this still does not read like a new problem replacing an old one. It reads like the same old access failures, with extra digital and remote layers added on top.
What Did Not Go Away
The bigger corpus makes this even harder to deny. The core complaints are long-running.
The 8am scramble is still there
- 59 older low-star reviews, 4.2% of the older low-star set
- 552 recent low-star reviews, 5.3% of the recent low-star set
Older example:
"Ring 96 times in a morning at 8am as advised to be told no appointments."
Jessica McClelland, The Bolton Family Practice, 7 years ago
Recent example:
"I’m not allowed apparently I need to phone at exactly 8am if I want even a chance"
joshua covus, Albion Medical Practice, a month ago
That is not a vanished pre-pandemic complaint. It is still one of the basic ways patients describe access now.
"No appointments" remains a core complaint
- 84 older low-star reviews, 6.0%
- 564 recent low-star reviews, 5.4%
Older example:
"You can never get an appointment when you need one"
older low-star review, Culcheth Medical Centre, 7 years ago
Recent example:
"After saying what l thought in a previous comment, l have now registered at another surgery"
Andrew James, Padgate Medical Centre, Edited 3 days ago
The wording shifts around the edges, but the complaint itself is stubbornly familiar.
Rude or obstructive reception is also persistent
- 161 older low-star reviews, 11.5%
- 1,057 recent low-star reviews, 10.1%
Older example:
"Receptionists are rude and it takes many weeks to get an appointment even if it’s an emergency."
older low-star review, Corkland Road Medical Practice, 7 years ago
Recent example:
"The reception staff are very rude and unprofessional."
lisa wiswell, Huntley Mount Medical Centre, a month ago
So the older-versus-recent picture is not a story of rude reception disappearing into slicker systems. It is still part of the same access story.
Older Complaints That Fade Rather Than Vanish
This is where the earlier report mostly still holds up, but the larger corpus makes the wording cleaner.
I could not find many strong older complaint families that truly disappear in the recent era.
What I did find is a set of more physical, on-site, pre-digital complaints that feel relatively more at home in the older reviews, even though they do still show up later.
Walk-in-centre frustration
- 28 older low-star reviews, 2.0%
- 114 recent low-star reviews, 1.1%
Older examples:
"Arrived for walk in centre then told to come back at 1pm then when people arrived at 2pm they said they had no more walk in centre appointments."
Debra Farricker, Hawthorn MC, 7 years ago

"Waited 3 hours for the walk in centre and still didn't get seen"
Mohamed Zifar, Hawthorn MC, 9 years ago
Recent reviews still mention walk-in centres, but more often as somewhere they are sent away to, or bounced back from, rather than as the main on-site queue itself.
Waiting-room and in-building delay complaints
- 28 older low-star reviews, 2.0%
- 116 recent low-star reviews, 1.1%
Older example:
"had an appointment for 10:50am wasn't seen till 11:10am"
older low-star review, Great Lever One, 7 years ago
This kind of complaint still exists, but it feels less central than it used to. Older complaints more often sound like patients are already in or near the building. Recent complaints more often happen before they even get that far.
What Is More Clearly Recent
This is where the bigger corpus sharpens the report most.
Named digital gateways are genuinely recent complaint language
- 0 older low-star reviews
- 148 recent low-star reviews, 1.4%
That is the cleanest recent-only shift in the whole pass.
Examples:
"They do not respond to emails, and AskMyGP is always closed."
Joey Howard-White, Tower Family Healthcare, 2 years ago

"AskMyGP could be better as it fills up really quickly, usually by 8am."
Alan White, Red Bank Group Practice, a month ago
The old complaint was often "I cannot get through." The newer complaint is often "every route is shut at once."
Online-form gatekeeping becomes a proper complaint family
- 1 older low-star review, 0.1%
- 112 recent low-star reviews, 1.1%
Recent examples:
"Online form must be filled out before 10am which isn't possible for some people, and then tells you to call anyway."
recent low-star review, Chorlton Family Practice, 2 weeks ago
"Using covid as an excuse about everything ... Online form is restricted opens at 7am."
Syeda Khanum, Donneybrook Medical Centre, 4 years ago
This is one of the clearest changes in the corpus. Patients now complain not just about delay, but about being pushed into a system that fills up early, gives no answer, or simply loops them back to the phone.
Triage becomes a thing patients have to get through
4 older low-star reviews (0.3%), 89 recent low-star reviews (0.9%)
Recent example:
"Wanted to see a doctor ... fill out the triage form hoping i wound get an appointment. But all I’ve got is some cream sent to boots."
recent low-star review, Manor House Surgery, a day ago
That is a different kind of complaint from older front-desk or walk-in frustration. The patient is now talking about a filtering process before care is even properly in reach.
Telephone consultation and remote-first handling rise sharply
10 older low-star reviews (0.7%), 124 recent low-star reviews (1.2%)
Recent example:
"I waited 40 minutes for a telephone consultation ... No call."
recent low-star review, Market Street Medical Practice, 3 weeks ago
This is another strong change in complaint shape. The service is not only hard to reach. It is more often described as remote, delayed, and callback-driven.
Face-to-face care becomes a complaint topic in its own right
5 older low-star reviews (0.4%), 244 recent low-star reviews (2.3%)
Recent examples:
"Very difficult to make an appointment, expected to be in all day for a phone call, difficult to arrange a face to face appointment"
john clarke, Barlow Medical Centre, 3 years ago
"Get told to call reception for face to face appointment reception revert us back to online form."
Syeda Khanum, Donneybrook Medical Centre, 4 years ago
That does not mean older patients did not want face-to-face care. It means the complaint language changes. In the newer reviews, face-to-face access itself has become something patients feel they have to win.
What The Shift Really Looks Like
The era split is not telling us that the old problems went away and new problems appeared from nowhere.
It looks more like this:
1. The old bottlenecks stay
Patients still complain about:
- phones
- appointments
- being told to try again tomorrow
- rude reception
- going round in circles
2. The newer system adds extra gates
Recent reviews add:
- online form closures
- app-based dead ends
- named digital systems
- triage
- callback and telephone-consultation waiting
- the extra hurdle of getting face-to-face care
3. The point of failure moves further upstream
Older complaints more often sound like:
- I turned up
- I waited
- I was told to come back
- I sat there for hours
Recent complaints more often sound like:
- I filled in the form
- the form was closed
- the app had nothing
- I was waiting for a callback
- I could not get a face-to-face appointment
That is the key change. The struggle now more often happens before the patient reaches the room, the desk, or the clinician.
Bottom Line
The strongest result here is still continuity plus layering.
There are very few older complaint types that truly disappear from the recent reviews. The core access complaints are stubborn and long-running.
What changes in the recent reviews is the route through which patients experience those same failures.
The newer complaint language is much more about:
- digital gatekeeping
- app and form dependency
- triage
- remote-first handling
- the extra step of having to secure face-to-face care
So the newer corpus does not read like a clean replacement story.
It reads like the same old access failures, with a new digital layer added on top.
4. How Patients Talk About Staff And Clinicians
Prompt
we should also do a pretty thorough check of positive reviews to try and identify when specific staff are named, what exactly was written about in particularly glowing terms (or if individuals are called out for failures), like let's write a report about how patients feel about the staff, doctors, etc in practices who actually handle their care, good and bad. We don't want to build a list of good/bad doctors, but it might be useful to look at where named individuals are doing very well compared to the rest of the corpus, but for doing badly, avoid names and stick to issues, complaints, specific mistakes or decisions that patients don't like (i.e. we can name the good doctors, but for bad doctors we should stick to the issues, rather than their names).
Current file
staff-and-clinician-experience-report.md
Current source basis
- ad hoc review mining
- manual extraction of positive named-staff praise
Current discoveries
- Patients still talk about people, not just abstract practice systems.
- The refreshed pass finds very large positive clusters around kind, listening, competent care and friendly front-desk staff, alongside a strong low-star cluster around rude or obstructive staff treatment.
- Repeatedly praised named individuals remain visible on the positive side, while the negative side stays issue-focused rather than naming clinicians.
- Reception remains one of the clearest swing points between high-star and low-star experience.
How Patients Talk About Staff And Clinicians
This note looks at the people who actually carry the patient experience in the review corpus:
- reception and admin staff
- nurses
- doctors and other clinicians
- named individuals who are publicly praised again and again
It is based on the rebuilt indexed corpus of 40,506 Google reviews.
This is still not a league table of good and bad staff.
For the positive side, I do name individuals where the public review corpus shows repeated praise by name. That is useful because it shows what patients value when care feels genuinely good.
For the negative side, I do not build a named list. I stick to the problems patients describe: tone, handling, judgement, missed explanations, and the kinds of decisions that make people feel shut out or badly cared for.
The Big Picture
Across this corpus, patients very often judge the whole practice through one or two human encounters.
That might be:
- the receptionist who either makes the route easier or harder
- the nurse who explains what is happening
- the GP who listens properly, or does not
- the admin person who actually sorts the referral, prescription, or follow-up
So even in a corpus full of access and systems complaints, people still matter a lot.
What Patients Praise Most
Clinicians who listen, explain, and reassure
This is still the strongest positive people-theme in the corpus.
A refreshed pass found 10,497 high-star reviews using language about helpful, caring, kind, listening, reassuring, professional, thorough, or compassionate care.
That is a very large signal.
When patients are happy, they usually do not praise medicine in abstract terms. They praise:
- being listened to
- not being rushed
- clear explanations
- feeling safe
- being taken seriously
Recent examples:
"Dr. Singh was very helpful with my dermatologist referral"
R., The Quays Practice, a week ago
"Dr Moran was attentive and listened."
Yasmin Warsama, Manchester Integrative Medical Practice, a month ago
"Warren was really good he listened to me and believed me"
Andrea Gregory, Manchester Integrative Medical Practice, 4 months ago
Friendly front-desk staff still matter enormously
A separate positive pass found 6,054 high-star reviews using friendly, welcoming, lovely, polite, or respectful front-desk language.
That is the mirror image of one of the biggest complaint themes. Reception is not a side issue. In the reviews, it often decides whether the practice feels humane or hostile.
Recent examples:
"Both visits have been a pleasure, low wait times, experienced and friendly staff at all levels"
Mark Schofield, The Quays Practice, a day ago
"Lesley was lovely. Service was fast and friendly"
Loredana Dalli, The Quays Practice, 2 days ago
Warmth matters most when it comes with practical help
The reviews also show a more specific kind of praise: patients especially value people who actually sort something.
A refreshed pass found 974 high-star reviews with clearer problem-solving language.
Patients praise staff who:
- get the appointment sorted
- chase the issue
- fix a prescription problem
- call back quickly
- explain the next step clearly
Examples:
"the surgery sorted out an emergency prescription on the same day"
Joanne Guy, Woodbank Surgery, 5 days ago
"The reception managed to book me an urgent appointment for the next day."
Lyndsey Mc Dowell, The Range Medical Centre, 5 days ago
This is important. In the positive reviews, patients do not only reward niceness. They reward niceness plus competence.
Named Positive Praise
This section is not a ranking. It is a record of where the enlarged corpus still shows unusually strong, repeated public praise for named individuals in positive reviews.
The cleanest current clusters include:
Dr Riaz, Bredbury Medical Centre
47 high-star review mentions, average rating 5.0.
What patients praise:
- patience
- clear explanations
- careful listening
- fast follow-up
Examples:
"He spoke very patiently and explained everything clearly. He carefully listened to my concerns"
recent review, 3 weeks ago
"they never let us down ... all the staff from receptionist to..."
recent review, 2 months ago
Dr Alam, Norden Branch Surgery
36 high-star review mentions, average rating 5.0.
What patients praise:
- support
- knowledge
- consistency during recent service changes
Example:
"I have received great support especially from Dr Alam who is very supportive and knowledgable."
recent review, 3 weeks ago
Dr Imran, Delamere Medical Practice
32 high-star review mentions, average rating 5.0.
What patients praise:
- empathy
- thoroughness
- follow-up
Example:
"Every time he has been caring and professional listening to me, showing genuine empathy then offering a good solution"
recent review, 5 days ago
Dr Suchit, The Range Medical Centre
32 high-star review mentions, average rating 5.0.
What patients praise:
- listening
- being thorough
- thoughtful explanations
- not rushing
Examples:
"Dr Suchit is knowledgeable, professional and thoughtful."
recent review, Edited a week ago
"Dr Suchit was a very good listener and thorough during my appointment."
recent review, 2 months ago
Other strong repeated doctor praise
The enlarged corpus still shows concentrated positive-name clusters for:
- Dr Johnson, Millgate Healthcare Partnership: 28 mentions, average 4.93
- Dr Arif, The Range Medical Centre: 18 mentions, average 4.94
- Dr Pigney, LADYBARN GROUP PRACTICE: 14 mentions, average 5.0
- Dr Ibrahim, Harwood Medical Centre: 14 mentions, average 4.93
- Dr Lam, Padgate Medical Centre: 13 mentions, average 5.0
- Dr Dodd, Holes Lane Medical Ltd.: 11 mentions, average 5.0
These clusters do not all mean the same thing, but the praise language is strikingly similar: listened, kind, thorough, reassuring, took time, explained clearly.
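The clusters above come from grouping high-star reviews by a mentioned doctor name and averaging the star ratings per name. A rough sketch of that grouping, where the regex, the 4-star threshold, and the record shape are all simplifying assumptions rather than the real extraction rules:

```python
import re
from collections import defaultdict

# Hedged sketch: cluster high-star reviews by "Dr <Surname>" mentions
# and average the ratings per name. Regex and thresholds are assumptions.
DR_PATTERN = re.compile(r"\bDr\.?\s+([A-Z][a-z]+)")

def name_clusters(reviews, min_stars=4):
    clusters = defaultdict(list)
    for r in reviews:
        if r["stars"] < min_stars:   # positive side only: no bad-doctor list
            continue
        for name in set(DR_PATTERN.findall(r["text"])):
            clusters["Dr " + name].append(r["stars"])
    return {name: (len(stars), round(sum(stars) / len(stars), 2))
            for name, stars in clusters.items()}

reviews = [
    {"stars": 5, "text": "Dr Riaz listened carefully and explained everything"},
    {"stars": 5, "text": "Dr Riaz was patient and thorough"},
    {"stars": 1, "text": "Dr Riaz never calls back"},  # low-star: excluded
]
print(name_clusters(reviews))  # → {'Dr Riaz': (2, 5.0)}
```

Filtering to high-star reviews before clustering is what keeps this a record of repeated praise rather than a league table.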
Named nurse and reception praise exists too
The named non-doctor praise is less concentrated, but it is there, and it matters.
Examples:
"A big thank you to Debbie ... She is very sweet, gentle and professional."
recent review, ROCHDALE ROAD MEDICAL CENTRE, a month ago
"Nurse Debbie is one of the most kind and caring people in this practice."
recent review, ROCHDALE ROAD MEDICAL CENTRE, a month ago
"Spoke to leanne on reception this afternoon ... answered my concerns and made me feel so much at ease."
James Molloy, The Poplars Medical Practice, a month ago
"Leanne was very polite, professional and helpful"
Kathryn Ashworth, Heywood Health, a year ago
"the reception team has been incredibly welcoming and helpful, especially Nabila"
HDS Driving School, The Whitswood Practice, 11 months ago
That is useful because it shows patients do notice specific admin and nursing staff when they reduce stress instead of adding to it.
What Patients Criticise Most
Rude, dismissive, or hostile handling
This is still one of the strongest complaint themes in the whole corpus.
A refreshed pass found 3,249 low-star reviews in this bucket.
Patients complain about:
- rude reception
- being spoken over
- impatience
- hostility
- being treated like a nuisance
Recent examples:
"If you like long waits for an appointment ... and unhelpful reception staff. Really poor."
Cottage Lane Surgery, Edited 2 weeks ago
"Reception team is awful. Rude. Unorganised."
Charlestown MD, 3 weeks ago
This is not just a manners complaint. In a lot of reviews, rude handling is part of how patients experience being blocked from care.
Not being listened to, or being brushed off
A refreshed pass found 479 low-star reviews in this bucket.
That is still probably an underestimate, because patients often describe this in many different ways.
What they object to is clear:
- feeling rushed
- not being believed
- concerns being waved away
- poor explanation
Example:
"I was refused and then sent a requirement for a medical review when there was no need"
Knowsley Medical Centre, 2 weeks ago
This is one of the places where the reviews are especially useful. The survey can ask if someone listened. The reviews show what it feels like when patients think they did not.
Clinical judgement and safety concerns
A narrower pass found 269 low-star reviews with more direct clinical-judgement or safety language in the staff-and-clinician context.
Examples:
"Worst GP ... allergic reaction ... First they tried to avoid us and referring us to pharmacist"
Church View Medical Centre, 3 weeks ago
"Misdiagnosed for over TWO years because doctors refused face to face appointments and wouldn't listen."
St. Andrew's House Surgery, 2 years ago
Here the issue is not just bedside manner. It is whether patients trust the judgement, follow-through, and safety of what happened.
Decisions patients describe as rigid, unfair, or careless
This bucket is smaller in the stricter pass, but it still matters because it keeps turning up in the narrative of negative reviews.
Patients object to:
- being told to go online when they cannot
- being told to call back tomorrow
- being left waiting for callbacks or results
- repeat prescriptions becoming a fight
- staff appearing to defend the system rather than solve the problem
These are not only delay complaints. They are judgement complaints.
What Good Care Looks Like In Patient Language
If you strip the report back to what patients seem to want from the people handling their care, the pattern is plain enough.
Patients value staff and clinicians who:
- listen properly
- explain clearly
- do not rush
- are kind without being vague
- solve practical problems
- make people feel safe
- treat them like a person, not an inconvenience
Patients dislike staff and clinicians who:
- sound rude or defensive
- appear impatient
- do not explain what is happening
- seem not to believe them
- make them chase the same thing repeatedly
- leave them stuck between systems
That sounds obvious, but the review corpus makes it much more concrete than a survey score does.
What This Means
Two things stand out more clearly in the enlarged corpus.
People matter as much as process
The wider review work already shows how important access systems are. But inside those systems, patients still experience care through human beings.
A bad appointment route plus a kind receptionist can still leave someone feeling helped.
A working appointment route plus a dismissive clinician can still leave someone furious.
Patients reward kindness and competence together
The strongest positive named praise is not just nice doctor praise.
It is praise for staff who are:
- kind
- listening
- thorough
- reassuring
- effective
And the strongest complaints are not just long-wait complaints.
They are complaints where delay is mixed with disrespect, weak judgement, poor explanation, or failure to carry something through.
Bottom Line
Across this corpus, patients do not talk about staff and clinicians in a vague way.
They are very clear about what good care feels like:
- listened to
- reassured
- helped
- taken seriously
- sorted out
And they are very clear about what bad care feels like:
- rude
- dismissive
- rushed
- unsafe
- unreliable
The positive-name clusters suggest that some individual clinicians and staff leave a very strong good impression in public reviews. The negative side suggests that when patients are upset, the biggest issue is usually not one bad personality in isolation. It is the mix of tone, judgement, and system handling that makes them feel shut out or badly cared for.
5. Clinical Harm Warning Signs In Google Reviews
Prompt
since misdiagnoses are so important, can we do a pretty thorough check for a new report on warning signs, red flags or direct mentions of clear clinical failures and connected outcomes, avoiding admin, day to day process, but looking for specifically clinical harm issues reported via reviews, again not to witch-hunt but to flag practices for checks that the patient survey don't even try to check for (there are literally no questions about clinical care and whether patients feel healthier after their process)
Extension prompt
whilour last major report, we want to thoroughly dig up every sign where a patient clearly either got sicker, postponed or delayed care/attention, were actvely made sicker or directly harmed, not just through clinical care or process issues, but through any other issues that might not initially look like they made people sicker, but this is the more serious end (these are logged accusations essentially) and I want to know across our corpus how serious are these errors for real patient health. While the previous report was about practice workflow, this should be about real patients and real outcomes reported in their reviews, not practice decisions but basically a shitlist of every unquestionable fuckup we can find that siginifcantly harmed a patient. This is probably an extension to the previous report, rather than a new document.
Current file
clinical-harm-warning-signs-report.md
Current source basis
- clinical_harm_signals.py
Current discoveries
- The refreshed run flags 872 low-star reviews with stronger clinical-failure or harm language.
- That is 2.2% of all reviews and 6.4% of all low-star reviews.
- Hospital escalation, severe outcome language, dismissal with deterioration, misdiagnosis, and unsafe medication all recur.
- The report positions this as a part of patient experience that the national patient survey barely reaches.
Clinical Harm Warning Signs In Google Reviews
This report looks for reviews that sound like possible clinical harm, not just bad service.
The aim is still not to prove wrongdoing from reviews alone. It is to flag where the enlarged review corpus contains warning signs that deserve checking in notes, complaints, SEA work, prescribing audit, and local quality review.
This is also still one of the clearest gaps in the national patient survey. Patients are asked about access, confidence, and experience, but not plainly whether they felt misdiagnosed, got the wrong treatment, became more unwell after contact, or ended up in hospital after being dismissed.
What I counted
I took a cautious pass through the rebuilt indexed review corpus and focused on 1 and 2 star reviews only.
I stripped practice-response text where possible, then flagged reviews with one or more of these markers:
- direct mention of misdiagnosis or wrong diagnosis
- direct mention of wrong or unsafe medication
- negligence, dangerous, or unsafe language
- being ignored, dismissed, or not listened to when that was linked to worsening or escalation
- ending up in hospital, A&E, or emergency care in a clinically connected way
- severe outcome terms like sepsis, pneumonia, cancer, meningitis, appendicitis, stroke, heart attack, or "nearly died"
These categories overlap. One review can hit several at once.
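The flagging step described above can be sketched as a set of overlapping marker lists, where a review is tagged with every category it hits. The marker phrases below are illustrative stand-ins, not the real lists in clinical_harm_signals.py:

```python
# Hedged sketch of overlapping-marker flagging for low-star reviews.
# Phrases are illustrative; the real script's lists will differ.
MARKERS = {
    "misdiagnosis": ["misdiagnos", "wrong diagnosis"],
    "medication": ["wrong medication", "wrong dose"],
    "negligence": ["negligen", "dangerous", "unsafe"],
    "escalation": ["a&e", "ended up in hospital", "ambulance"],
    "severe": ["sepsis", "nearly died", "heart attack"],
}

def flag_review(text):
    """Return the set of marker categories a review hits (may be several)."""
    t = text.lower()
    return {cat for cat, phrases in MARKERS.items()
            if any(p in t for p in phrases)}

review = "gave her the wrong medication and she is now in hospital with sepsis"
print(flag_review(review))  # hits both 'medication' and 'severe'

# The headline shares follow directly from the flagged count:
overall_share = round(100 * 872 / 40506, 1)   # reported as 2.2% of all reviews
low_star_share = round(100 * 872 / 13615, 1)  # reported as 6.4% of low-star reviews
print(overall_share, low_star_share)
```

Because one review can hit several categories, the per-category counts below sum to more than 872.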
Headline Findings
In the rebuilt 40,506-review corpus:
- 872 low-star reviews were flagged as clinical-harm warning signals
- that is 2.2% of all reviews
- it is 6.4% of all 13,615 low-star reviews
Category counts inside that flagged set:
- 57 mention misdiagnosis or wrong diagnosis
- 34 mention wrong or unsafe medication
- 236 use negligence, dangerous, or unsafe language
- 159 link dismissal or not being listened to with a bad outcome
- 363 describe hospital or urgent escalation in a clinically connected way
- 190 mention a severe condition or near-miss term
This is not the main story in the corpus. But it is far too much to write off as noise, especially because these are not ordinary complaints about phones, queues, or rude reception. These are the reviews where patients say care itself may have gone badly wrong.
Extension: Reported Harm To Real Patients
The warning-sign pass above is about possible failure in care. The more serious extension question is outcome:
When reviewers say something went badly wrong, how often do they also say a real person got sicker, waited too long, missed needed care, ended up in hospital, or faced a serious health consequence?
Again, these are reported accusations in reviews, not proven findings. But they matter because they describe harm in patient terms rather than service terms.
Outcome-led findings
- 143 flagged reviews describe delayed or postponed care in a way the reviewer links to harm
- 36 explicitly say the patient got worse, deteriorated, or became more unwell
- 34 say treatment, prescribing, or missed treatment actively made the patient more ill
- 366 describe hospital, A&E, ambulance, or emergency escalation in clinically connected situations
- 178 mention a serious condition, near miss, or life-threatening consequence
These buckets overlap heavily. The same review may describe delay, then deterioration, then hospital admission.
The Main Warning-Sign Themes
1. Misdiagnosis and wrong diagnosis
I found 57 flagged reviews with direct misdiagnosis or wrong-diagnosis language.
These reviews are often short and blunt. Patients do not usually write in careful clinical terms. They write things like:
- misdiagnosed me twice
- refused face to face and would not listen
- kept saying it was something minor
- later it turned out to be something serious
Examples:
"Misdiagnosed earlier in the year resulting in complications that required further treatment and discomfort"
Lachlan Pollock, The Alexandra Practice, 2 years ago
"Misdiagnosed for over TWO years because doctors refused face to face appointments and wouldn't listen."
Nicola Skinkis, St. Andrew's House Surgery, 2 years ago
"Be careful with this practice, misdiagnosed my uncle after months of going back and forth."
Ste Minator, RADCLIFFE MEDICAL PRACTICE, 3 years ago
This is one of the clearest survey gaps. Survey tools can ask whether a patient had confidence in the clinician. They do not ask whether the patient later found out the diagnosis was wrong.
2. Wrong medication, wrong dose, or unsafe prescribing
I found 34 flagged reviews with direct wrong-medication or unsafe-medication language.
This is a smaller theme than access, but it is one of the sharpest.
Examples:
"Ended up in hospital and was told by consultant that my esophagus was damaged because of wrong medication."
Logic Errors, HEALEY SURGERY, 6 years ago
"gave her the wrong medication and yet again she is now in hospital with sepsis"
Kerrie, St Andrews Medical Centre, 6 years ago
"constantly giving wrong diagnosis and medication which has lead to multiple ae trips that could have been prevented"
Gym Bruh, The Park Medical Centre, a year ago
This is worth separating from routine prescription-delay complaints. Delays are common admin problems. Wrong medicine, wrong dose, or medicine linked to bodily harm are different.
3. Dismissal, not being listened to, then something bad happening
I found 159 flagged reviews where dismissal language was tied to a harmful outcome or clinically serious consequence.
This is one of the strongest patterns in the corpus. Patients often say the main problem was not only access. It was that once they did get through, they felt waved away, told it was nothing, or pushed elsewhere without real assessment.
Examples:
"My daughter ended up in hospital with sepsis because we weren't being listened to."
Jill Bamber, The Gill Medical Practice, 2 years ago
"Doctors DO NOT listen to you or follow notes from previous doctors."
Heather Hayes, Gorton Medical Centre, a year ago
"Misdiagnosed for over TWO years because doctors refused face to face appointments and wouldn't listen."
Nicola Skinkis, St. Andrew's House Surgery, 2 years ago
This is why the review corpus matters here. The patient is not just saying "I felt brushed off." They are saying "I felt brushed off and then something bad followed."
4. Hospital, A&E, or emergency escalation after GP contact
This is the biggest outcome signal in the whole pass.
I found 363 flagged reviews with hospital or urgent-escalation language, plus 190 with severe condition or near-miss language.
Examples:
"ended up collapsing at home and rushed to hospital"
M3RITz UK, St Andrews Medical Centre, 3 years ago
"ended up in hospital for a week with heart failure"
Gk Saynomore, Ashton Medical Group, 6 days ago
"I ended up in hospital because I just couldn't reach anyone to get antibiotics for a simple infection."
Lija Harper, Wilmslow Road Surgery, 3 years ago
"Took my 3yr old and they missed an infection and ended up in hospital."
Laura Bath, Chorlton Family Practice, 8 years ago
Even allowing for exaggeration in some reviews, this is the part of the corpus that is hardest to dismiss as ordinary frustration. The language is repeatedly about emergency rescue, serious illness, and consequences patients understood as threats to life or long-term health.
Reported Harm To Real Patients
Delayed care linked to harm
The current pass found 143 reviews where the complaint was not just "it took too long", but "it took too long and harm followed".
This is where admin and clinical risk blur together. A missed callback, delayed antibiotic, weeks waiting for medication, or repeated failure to arrange review may begin as a process issue. In the reviews, patients describe that delay as part of the route to becoming more unwell.
Reviewers saying they got worse
The stricter worsening bucket found 36 reviews explicitly using "got worse", "deteriorated", or equivalent language.
That is still a tight count. Many more reviews imply worsening without using the exact words.
Reviewers saying care or treatment made them more ill
I found 34 reviews where patients directly linked treatment, prescribing, or failure to provide the right treatment with becoming more ill.
That is one of the clearest high-risk themes in the corpus because the reviewer is not only unhappy. They are saying the intervention itself, or the failure to intervene properly, caused bodily harm.
Repeated Practice Signals
The fairest way to use this material is not to single out one dramatic quote. It is to look for repeated warning-signal reviews across different patients.
Among the strongest repeated current clusters in the rebuilt index are:
| Practice | Flagged signal reviews | Share of all reviews |
|---|---|---|
| Ashton Medical Group | 16 | 1.6% |
| The Robert Darbishire Practice | 14 | 5.8% |
| Chorlton Family Practice | 12 | 1.3% |
| Florence House Medical Practice | 10 | 8.3% |
| Hawthorn MC | 10 | 3.9% |
| Droylsden Road Surgery Branch | 9 | 12.7% |
| Cheetham Hill Medical Centre | 9 | 2.8% |
| Lime Square Medical Centre | 8 | 6.6% |
| Northenden Group Practice | 8 | 6.2% |
| Salford Primary Care Together - Little Hulton | 7 | 5.7% |
| Rock Healthcare Limited | 7 | 5.3% |
| STONEHILL MEDICAL CENTRE | 7 | 5.0% |
This should not be read as a league table of unsafe practices. It is a short list of where the review corpus is giving repeated enough warning signals that a human check looks justified.
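The table above is a tally of flagged reviews per practice, expressed as a share of that practice's total reviews. A minimal sketch of that grouping, with an illustrative record shape (the real script may also apply a minimum-review threshold before listing a practice):

```python
from collections import Counter

# Hedged sketch: flagged-review count per practice and its share of that
# practice's reviews. Record shape and thresholds are assumptions.
def practice_signal_shares(reviews):
    totals = Counter(r["practice"] for r in reviews)
    flagged = Counter(r["practice"] for r in reviews if r["flagged"])
    return {p: (flagged[p], round(100 * flagged[p] / totals[p], 1))
            for p in flagged}

reviews = (
    [{"practice": "Example Surgery", "flagged": True}] * 2
    + [{"practice": "Example Surgery", "flagged": False}] * 38
)
print(practice_signal_shares(reviews))  # → {'Example Surgery': (2, 5.0)}
```

Reporting both the raw count and the share matters: a small practice with few reviews can show a high percentage from only a handful of flags.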
What Makes These Reviews Different From Ordinary Complaints
Most bad reviews in the dataset are still about access, rude interactions, repeat prescriptions, or call queues.
The warning-sign subset feels different in both tone and content:
- patients talk about getting worse, not just waiting longer
- patients name clinical consequences such as sepsis, appendicitis, infection, heart failure, cancer, heavy bleeding, or hospital admission
- patients often contrast the GP interaction with what happened later in A&E, hospital, private care, or after seeing another clinician
- some reviews explicitly say a serious condition was missed, the wrong medication was given, or the problem was wrongly waved away
In other words, these reviews get much closer to the question: did the care help, or did the patient feel less safe after it?
What To Check Next
If the point is early warning rather than blame, the best next checks are still:
- review a sample of flagged cases practice by practice, especially where there are repeated signals over time
- separate pure access-to-A&E diversion from cases where the reviewer also describes missed symptoms, wrong treatment, or worsening illness
- audit repeated themes around children, infections, chest pain, heavy bleeding, cancer suspicion, and medication safety
- review safety-netting language in triage and remote consultations, especially where patients say they were dismissed or told it was anxiety
- compare these signals with complaint files, SEA records, prescribing incidents, and any hospital feedback already held locally
Bottom Line
The enlarged review corpus still contains a meaningful clinical-harm warning-sign layer.
It is not the dominant story in the data, but it is too large to ignore: 872 low-star reviews, 2.2% of the whole corpus and 6.4% of all low-star reviews. The sharpest signals are not about courtesy or convenience. They are about patients saying the diagnosis was wrong, the medication was wrong, the problem was brushed off, or the real outcome only became clear once they got sicker or reached hospital.
That is exactly the kind of thing a patient survey usually does not even ask.
6. Practice Responses To Reviews
Prompt
and I suppose we need a report about practice responses, distinguishing between repsonses to positiv reviews or responses to negative reviews. Patient blaming is partcualrly prevalent and I want to know the various ways in which patient-blaming langage appears in responses from practices to patients, and if you can figure it out, which practices reply promptly to reviews with genuinely useful responses that are not just "contact the front desk" or like "use the website", which is very common. Who does good responses and who does bad ones, and what are the charcateristics of those groups?
Current file
practice-responses-report.md
Current source basis
- analyze_practice_responses.py
Current discoveries
- The refreshed pass finds 16,756 public practice responses in the corpus.
- Practices are still much more likely to reply to praise than criticism.
- Most responses remain thanks, apology, or boilerplate signposting rather than public evidence of specific action.
- Publicly useful responses are rare, and the report still pulls out patient-blaming and deflecting modes in reply language.
Practice Responses To Reviews
This report looks at how practices reply to Google reviews, and how those replies differ between praise and criticism.
It is based on a refreshed rule-based pass over the rebuilt local index. I split review text from the "Practice response date:" and "Practice response:" markers where present, then looked at:
- whether a response was attached at all
- whether the original review was positive, negative, or mixed
- whether the response was mainly thanks, apology, boilerplate signposting, or something more specific
- whether the response used patient-blaming or deflecting language
- whether the response appeared quick or delayed, using relative review and response dates where that could be compared
This is not NLP and it is not a perfect legal reading of tone. But it is enough to show the main response patterns in the enlarged corpus.
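The split-and-flag pass described above can be sketched roughly like this. It is a minimal illustration, not the actual `analyze_practice_responses.py` logic: the marker pattern and the phrase lists are assumptions standing in for the real rules.

```python
import re
from typing import Set, Tuple

# Assumed marker format: the scrape appends "Practice response date: ..." and
# "Practice response: ..." after the review text. Illustrative only.
RESPONSE_MARKER = re.compile(r"Practice response(?: date)?:", re.IGNORECASE)

# Illustrative phrase lists, not the report's exact wordlists.
THANKS = ("thank you", "thanks for", "kind words")
APOLOGY = ("sorry", "apologise", "apologize")
SIGNPOST = ("contact the surgery", "practice manager", "use the website", "fill in the form")

def split_review(raw: str) -> Tuple[str, str]:
    """Split raw text into (patient review, attached practice response)."""
    parts = RESPONSE_MARKER.split(raw, maxsplit=1)
    review = parts[0].strip()
    response = parts[1].strip() if len(parts) > 1 else ""
    return review, response

def response_modes(response: str) -> Set[str]:
    """Flag the broad response modes: thanks, apology, boilerplate signposting."""
    text = response.lower()
    modes = set()
    if any(p in text for p in THANKS):
        modes.add("thanks")
    if any(p in text for p in APOLOGY):
        modes.add("apology")
    if any(p in text for p in SIGNPOST):
        modes.add("signposting")
    return modes
```

A response can carry several modes at once, which is exactly how the apology-plus-signposting pattern discussed below gets counted.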
Headline
Practice responses are common, but still very uneven.
In the rebuilt 40,506-review corpus:
- 16,756 reviews include a practice response, that is 41.4% of all reviews
- 50.8% of positive reviews got a response
- only 23.8% of negative reviews got a response
- 33.0% of mixed reviews got a response
So practices are still much more likely to answer praise than criticism.
That remains one of the clearest findings in the whole response layer.
What Most Responses Look Like
The response layer is still dominated by thanks and polite formulae, not by detailed public explanation.
Across all responses, the refreshed scan found:
- 13,237 with thanks or praise language
- 2,400 with apology language
- 1,682 with boilerplate signposting like "please contact the surgery", "speak to the practice manager", "use the website", or "fill in the form"
- only 141 with clearer specific-action language
- 66 with stricter patient-blaming or deflecting markers
- 54 with privacy-defence language
That last number matters, but direct blame is still not the main response problem if you define it narrowly. The bigger issue is softer deflection:
- apology plus private contact
- apology plus signposting
- apology plus "use the online route"
- apology plus "high demand"
So the bad response style is usually not openly hostile. It is polite, managerial, and empty.
Positive Reviews Versus Negative Reviews
Positive review responses
Some practices answer virtually every positive review:
| Practice | Positive response rate | Positive reviews responded to |
|---|---|---|
| LADYBARN GROUP PRACTICE | 100.0% | 265 |
| The Sides Medical Practice | 100.0% | 257 |
| The Arch Medical Practice | 100.0% | 114 |
| Cornbrook Medical Practice | 100.0% | 72 |
| Whitley Road Medical Centre | 100.0% | 60 |
| Culcheth Medical Centre | 100.0% | 44 |
| Bredbury Medical Centre | 99.7% | 380 |
| Peterloo Medical Centre | 99.4% | 155 |
The good side of this is obvious: some practices are very present in public.
The weak side is that many positive responses are still very thin:
- thank you
- glad you had a positive experience
- thanks for the stars
- we will pass this on to the team
That is not necessarily bad. It is just not very informative.
Negative review responses
Negative review responses are still much rarer, and much more likely to be defensive or generic.
Practices with especially high negative-response coverage now include:
| Practice | Negative response rate | Negative reviews responded to |
|---|---|---|
| The Arch Medical Practice | 100.0% | 113 |
| Peterloo Medical Centre | 100.0% | 71 |
| Littletown Family Med Pract | 100.0% | 34 |
| Whitley Road Medical Centre | 100.0% | 27 |
| The Sides Medical Practice | 100.0% | 26 |
| LADYBARN GROUP PRACTICE | 98.0% | 97 |
| St Andrews Medical Centre | 96.5% | 82 |
| The Range Medical Centre | 89.4% | 84 |
But high negative-response coverage still does not mean high-quality response.
In practice, the negative replies still mostly split into four types:
- apology plus "contact us privately"
- apology plus "use the website/form/front desk"
- apology plus defence of capacity, policy, or process
- rarer, genuinely useful explanation of what changed
Patient-Blaming Language
Direct patient-blaming still appears in a minority of responses, but softer blame and route-defence appear much more often.
The stricter blame-mode counts in the refreshed scan were:
- 35 capacity-defence responses
- 11 eligibility or policy responses
- 10 wrong-route or process responses
- 8 attendance or lateness responses
- 2 records-based denials
What patient-blaming looks like here
It usually does not read like "this is your fault". It reads more like:
- your problem is the policy
- your problem is the route you used
- you should have used the online form
- we are under pressure like the rest of the NHS
- our records do not support your version
- the fact you were seen the next day means it was safe
That still matters, because it shifts the centre of gravity away from the patient account and back onto rules, systems, or the patient’s own behaviour.
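A rule pass for these blame modes can be approximated with simple phrase markers. The mode names and phrase lists below are illustrative assumptions, not the exact wordlists behind the counts above:

```python
# Illustrative blame-mode markers; the real pass may use different wordlists.
BLAME_MODES = {
    "capacity-defence": ("high demand", "nhs pressures", "under pressure"),
    "wrong-route": ("please use our online", "should have used", "correct channel"),
    "records-denial": ("our records do not support", "our records show"),
    "urgency-reinterpretation": ("was stable", "it was safe"),
}

def blame_modes(response: str) -> list:
    """Return every blame mode whose markers appear in the response text."""
    text = response.lower()
    return [mode for mode, phrases in BLAME_MODES.items()
            if any(p in text for p in phrases)]
```

Because matching is on soft phrases rather than overt insults, this is a strict undercount by design, which fits the finding that softer deflection outweighs open blame.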
Where it shows up most
The strongest negative patient-blaming counts in the current pass were:
- The Robert Darbishire Practice: 5
- West Point Medical Centre: 5
- Cheadle Medical Practice: 3
- Millgate Healthcare Partnership: 3
- Barlow Medical Centre: 2
- New Islington Medical Centre: 2
Example patterns:
The Robert Darbishire Practice:
"The fact that your appointment could be scheduled for the next day indicates your condition was stable and it was safe to do so."
West Point Medical Centre:
"We have received a lot of positive feedback recently about our team ... NHS services can experience longer waiting times because of high demand."
The Brooke Surgery:
"If this situation ever arises again, please use our online service ..."
That last example is not overt blame, but it is a classic soft-deflection move: the complaint is turned back into instructions for the patient.
Boilerplate And Signposting
This is still the most common failure mode in negative replies.
The refreshed scan found 1,682 responses with boilerplate signposting language.
That includes replies such as:
- please contact the surgery
- ask to speak with the practice manager
- use the website
- fill in the feedback form
- follow the complaints process
These replies can sound serious, but they usually do not answer the public complaint in any real way.
Practices where boilerplate is especially strong
On the current response text, these practices stand out for high-volume but generic negative replies:
| Practice | Negative responses | Negative boilerplate replies | Bad negative replies |
|---|---|---|---|
| Dickenson Road Medical Centre | 103 | 84 | 84 |
| Shanti Medical Centre | 82 | 46 | 46 |
| The Arch Medical Practice | 113 | 43 | 43 |
| Salford Primary Care Together - Little Hulton | 51 | 40 | 40 |
| Cheetham Hill Medical Centre | 73 | 28 | 28 |
| Bolton Medical Centre | 66 | 27 | 27 |
| The Bolton Family Practice | 66 | 27 | 27 |
| Droylsden Medical Practice | 27 | 26 | 26 |
What these have in common is not silence. It is response without resolution.
Who Does Better Responses
Truly useful negative responses are still rare, but they do exist.
The current pass found the strongest negative specific-action signals at:
- The Quays Practice: 3
- The Arch Medical Practice: 2
- Cherry Medical Practice: 2
- Cheadle Medical Practice: 2
- Ailsa Craig Medical Centre: 2
- The Chowdhury Practice: 2
- Chorlton Family Practice: 2
That is still small compared with the total response layer.
The useful pattern is simple. Better replies tend to:
- acknowledge the complaint
- say what changed
- name a process or communication change
- avoid simply telling the patient to re-enter the same failed route
The older best-case examples still fit the refreshed corpus:
- Chorlton Family Practice giving public change-language about appointment and telephone systems
- The Sides Medical Practice describing process review rather than pure signposting
- The Arch Medical Practice sometimes giving long, concrete explanations rather than just a management template
Who Does Worse Responses
The weaker group is easier to describe.
Prompt but generic
Some practices reply fast, including to negative reviews, but mostly with managerial templates rather than useful public substance.
Shanti Medical Centre is a good example of this pattern in the enlarged corpus: high coverage, quick turnaround, but heavy repetition of the same "please contact the assistant practice manager" wording.
High-volume but delayed
Some practices answer huge numbers of reviews, but often much later:
| Practice | Approx average response delay |
|---|---|
| Peterloo Medical Centre | 18.4 months |
| The Birches Medical Centre | 14.1 months |
| Conway Road Medical Practice | 12.8 months |
| BARRINGTON MEDICAL CENTRE | 12.7 months |
| Whitley Road Medical Centre | 12.5 months |
| Dickenson Road Medical Centre | 11.1 months |
| The Range Medical Centre | 8.5 months |
This delay estimate is rough because it is based on relative date strings, but it is still enough to show the difference between fast-turnaround responders and later catch-up responders.
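One plausible way to turn those relative date strings into a rough month figure, assuming Google-style wording like "3 months ago" or "a year ago" (the conversion factors are approximations by design):

```python
import re

# Approximate month equivalents for Google-style relative dates.
UNIT_MONTHS = {"day": 1 / 30, "week": 7 / 30, "month": 1.0, "year": 12.0}

def approx_months(relative: str) -> float:
    """'3 months ago' -> 3.0, 'a year ago' -> 12.0; unknown strings -> 0.0."""
    m = re.search(r"(\d+|an?)\s+(day|week|month|year)s?\s+ago", relative.lower())
    if not m:
        return 0.0
    count = 1 if m.group(1) in ("a", "an") else int(m.group(1))
    return count * UNIT_MONTHS[m.group(2)]

def response_delay_months(review_age: str, response_age: str) -> float:
    """Rough delay: the review is older than the response by this many months."""
    return approx_months(review_age) - approx_months(response_age)
```

Because both inputs are bucketed ("a year ago" covers a wide span), only large gaps like the 18-month Peterloo figure are meaningful; small differences are noise.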
Replies to praise much more than criticism
Some practices still show very large positive-versus-negative response gaps:
| Practice | Positive response rate | Negative response rate | Gap |
|---|---|---|---|
| Conway Road Medical Practice | 97.7% | 14.8% | 82.9 points |
| Norden Branch Surgery | 89.4% | 12.5% | 76.9 points |
| Padgate Medical Centre | 79.7% | 8.8% | 70.9 points |
| Denton Medical Practice | 84.1% | 20.4% | 63.7 points |
| Middleton Health Centre | 85.6% | 22.9% | 62.7 points |
| The Park Medical Centre | 67.2% | 10.6% | 56.6 points |
This kind of gap matters because it suggests some practices are using review replies more as reputation management than as a balanced public conversation.
Characteristics Of The Better Group
The better responses tend to have these features:
- they reply to negative reviews as well as positive ones
- they say what changed, not just who to contact
- they name a process, system, or communication change
- they do not tell the patient to simply re-enter the same route that already failed
- they do not lean too heavily on high demand as the main answer
- they sound written by a person rather than dropped in from a template
Characteristics Of The Worse Group
The weaker responses tend to have these features:
- very high use of apology-plus-signposting
- lots of "please contact the surgery" with no public substance
- website, form, or complaints-process redirection
- capacity-defence language like "high demand" or "NHS pressures"
- occasional public correction of the patient account or reinterpretation of urgency
- much stronger engagement with praise than with criticism
Bottom Line
The response layer is active, but still not especially accountable.
Practices are much more likely to answer positive reviews than negative ones. Where they do answer criticism, the common pattern is still not abuse but polite deflection: apology, private contact, website form, complaints process, or explanation of pressure.
Direct patient-blaming exists, but the bigger issue is softer blame and route-defence. The public message often becomes: use the correct channel, understand the pressure we are under, contact us privately, and we will look into it.
Truly useful public responses are still rare. The better ones explain what changed. Most still do not.
7. Online, Website, And Software Platform Experience
Prompt
good work. Another report, I need an in-depth exploration of the online/web experience and the various software platforms either mentioned or inferred from the platform, starting with analysing generic coverage of things like "the website" or "the site" or "online" etc. I know patients mention PATCHs quite a lot, but accurx, econsult, and maybe a few others do turn up, though usually not by name because the practice usually doesn't expose the name. It's hard to make real like-for-like comparisons because of no tags, but try to get a sense for the quality and distribution of issues specific to the website and software, good and bad.
Current files
- online-web-platform-experience-report.md
- later follow-on work:
  - digital-appointment-practice-ranking-report.md
  - digital-platform-allocation-report.md
Current source basis
- analyze_digital_experience.py
- rank_digital_appointment_practices.py
- infer_digital_platforms_by_practice.py
Current discoveries
- Digital access is now a substantial theme: 2,586 reviews, 6.4% of the corpus, carry a recognisable digital/web/app/platform signal.
- Generic website/form/app language remains far more common than explicit product naming.
- The appointment-linked ranking pass finds 1,839 digitally relevant appointment reviews across 291 practices, split 831 positive, 902 negative, and 106 mixed.
- The allocation pass covers 295 digitally signalled practices, names at least one platform at 121, and leaves 174 still generic or unknown from review wording alone.
Online, Website, And Software Platform Experience In The Review Corpus
This report looks at the digital front door in the review corpus: websites, online forms, apps, and named systems such as PATCHS, AskMyGP, eConsult, Accurx, and the NHS App.
The main difficulty is still the same one: patients often do not know or use the software brand name. They say "the website", "the online form", "the app", or "the system". So the right way to read this is still broad first, named tools second.
This pass uses the rebuilt review index, strips practice-response text where possible, and stays focused on what patients themselves wrote.
Headline
The digital layer is now a real part of the patient experience, not a side issue.
In the rebuilt 40,506-review corpus:
- 2,586 reviews, 6.4% of all reviews, mention a website, online route, app, named platform, or a recognisable digital-platform issue
- 1,803 mention the generic online or website layer
- only 382 explicitly name one of the main platforms or apps in this pass (PATCHS, AskMyGP, eConsult, Accurx, NHS App)
That still means most of the digital experience is not being described by product name. Patients mostly talk about a web route they have to use, not the vendor behind it.
Generic Coverage First: "Online", "Website", "The Form"
The generic layer is still much bigger than any named platform.
The broad online / website / online form / online booking / online triage bucket now contains:
- 1,803 reviews total
- 903 negative reviews
- 847 positive reviews
- 53 mixed reviews
So the digital front door is not uniformly bad. It has a real positive side when it works. But negative experiences still outweigh positive ones.
Practices with especially visible digital-review volumes include:
| Practice | Digital reviews | Share of all reviews | Negative | Positive |
|---|---|---|---|---|
| The Brooke Surgery | 118 | 9.4% | 14 | 103 |
| Ashton Medical Group | 110 | 11.2% | 49 | 49 |
| Millgate Healthcare Partnership | 106 | 10.1% | 27 | 77 |
| Chorlton Family Practice | 93 | 10.4% | 20 | 63 |
| Cheadle Medical Practice | 53 | 11.0% | 15 | 35 |
| The Sides Medical Practice | 37 | 13.0% | 3 | 34 |
| Ashville Surgery | 36 | 14.0% | 5 | 29 |
| LADYBARN GROUP PRACTICE | 35 | 9.5% | 6 | 28 |
This is still the main digital pattern in the corpus: the same kind of online route can be described as quick, modern, and convenient in one practice, and as another barrier in another.
Named Platforms: Visible, But Still Under-Named
PATCHS
Explicit PATCHS mentions are still present but not huge:
- 58 reviews total
- 21 negative
- 32 positive
- 5 mixed
The balance is genuinely mixed.
Good PATCHS reviews usually talk about speed and efficiency:
"Having used the PATCHS system several times now, I would like to say how quick, easy and efficient the system has been."
Norden Branch Surgery, 2 months ago
Bad PATCHS reviews usually talk about visibility, access, or exclusion:
"No record of submission using PATCHs."
Chorlton Family Practice, 6 months ago
"Receptionist rude insisting father has to use patchs when he’s no good on a smartphone or computer."
Family Surgery, 3 months ago
So PATCHS still reads like a high-variance system. Where the workflow behind it works, patients praise it. Where the surrounding setup is weak, it becomes another locked gate.
AskMyGP
AskMyGP remains the most visible named patient platform:
- 140 reviews total
- 57 negative
- 80 positive
- 3 mixed
That is still split, but now slightly more positive than negative.
Good AskMyGP reviews usually say:
- same-day response
- quick advice
- no need to fight the phones
- easy to use when backed by real follow-through
Example:
"do ask my gp on line, always get a response on same day"
The Brooke Surgery, Edited 2 years ago
Bad AskMyGP reviews still say:
- it is always closed
- it fills too quickly
- requests get closed off rather than acted on
- phone messages point patients to a route that is not really open
Examples:
"They do not respond to emails, and AskMyGP is always closed."
Tower Family Healthcare, 2 years ago
"My requests on AskmyGP just get closed off with a link to a pharmacy."
Ashville Surgery, a month ago
So AskMyGP still looks highly practice-dependent. It can feel fast and effective when the practice is staffed to use it properly. It feels much worse where it becomes a shut door or a dead end.
eConsult
eConsult is still lightly named directly:
- 36 reviews total
- 17 negative
- 19 positive
That low count should not be read as low use. It probably still means many patients experience it simply as "the online form".
Accurx
Accurx is still barely named as a patient-facing brand:
- 6 reviews total
- 2 negative
- 3 positive
- 1 mixed
That is not absence. It is obscurity. Patients often meet the route through the practice website rather than through the product name.
NHS App
The NHS App remains one of the more visible named systems:
- 142 reviews total
- 60 negative
- 74 positive
- 8 mixed
This is another genuinely split platform.
Positive reviews talk about:
- easy contact
- easy booking
- quick responses
- records and prescription convenience
Example:
"I have always found contacting the practice using the NHS app easy and quick with a fantastic response from the staff."
The Sides Medical Practice, 9 months ago
Negative reviews talk about:
- messaging being disabled
- no appointments available
- being pushed into another route from the app
- confusion about whether the app is actually meant to work
Example:
"It says on their website that you can contact them via the NHS app but ... messaging [is] disabled"
Heaton Moor Medical Group, a year ago
The NHS App often reads less like a full front door than like a relay point between systems.
The Main Digital Issue Types
1. Speed and convenience when the system works
This is the strongest positive theme by far:
- 408 reviews in the speed/convenience bucket
- 316 positive
- 85 negative
This is the best case for digital routes.
Patients praise them when they deliver:
- same-day appointments
- same-day callbacks
- quick responses
- a route that works without the 8am phone fight
Examples:
"Using online form for appointment easy and obtained same day appointment"
Chorlton Family Practice, 8 months ago
"I have always found contacting the practice using the NHS app easy and quick"
The Sides Medical Practice, 9 months ago
The positive digital model in the corpus is still very clear: the request goes in, a human responds quickly, and the patient gets seen.
2. Usability and instruction failure
This is one of the clearest negative themes:
- 112 reviews total
- 72 negative
- 31 positive
- 9 mixed
Patients often do not complain about the idea of digital care. They complain that the route is badly explained, badly linked, or awkward to navigate.
Common wording includes:
- no instructions
- hard to find
- not obvious where to click
- difficult to use
- confusing setup
The recurring complaint here is not mainly clinical. It is design failure.
3. No reply, lost submission, or silent failure
This remains one of the worst themes by feel:
- 54 reviews
- 50 negative
These are the reviews where the patient does the right digital thing and then nothing happens.
Examples:
"No record of submission using PATCHs."
Chorlton Family Practice, 6 months ago
Patients describe:
- no record of submission
- no response after sending
- no callback
- requests disappearing into the system
This is a key trust problem. A bad website is one thing. A form that appears to work and then silently fails is worse.
4. Closed forms, narrow windows, and digital queueing
This is still a smaller but sharp theme:
- 24 reviews
- 22 negative
The digital route often reproduces the old 8am rush instead of replacing it.
Patients describe:
- forms only open for a short window
- systems already full by 8am
- AskMyGP or PATCHS being closed
- online routes acting like just another queue
So one of the clearest failures is not that software exists, but that scarcity has been turned into a software timer.
5. Triage burden and self-diagnosis pressure
This bucket is still fairly small but distinctive:
- 16 reviews
- 11 negative
- 3 positive
- 2 mixed
The wording here is sharp:
- too many questions
- feeling left to self-diagnose
- triage software seeming inappropriate for urgent need
This is where digital triage tips from inconvenience into distrust.
6. Digital exclusion
This is still a smaller explicit bucket:
- 14 reviews
- 12 negative
But it is probably under-stated, because many exclusion complaints are written indirectly rather than with neat keywords.
Examples are still about:
- older patients
- disability
- not being able to use computers or smartphones
- being forced into a route they cannot realistically use
The PATCHS example about an elderly patient being pushed onto a smartphone route is exactly the kind of complaint that makes this visible.
What Patients Infer Even When They Never Name The Platform
Most patients still do not say "this is Accurx" or "this is eConsult". They say:
- the website
- the online form
- the app
- the system
- the link
That means the real comparison in the corpus is still not mainly vendor versus vendor. It is between kinds of patient experience.
Good unnamed digital experience
- request sent quickly
- same-day callback
- same-day appointment
- easier than ringing at 8am
- clear enough to use without help
Bad unnamed digital experience
- no instructions
- forced onto a website after the phone fails
- online route only open in a narrow window
- no response after submitting
- confusing handoff between app, practice site, and platform
- feeling forced to self-triage
So the practical patient comparison is still not PATCHS versus AskMyGP in a tidy vendor sense. It is whether the digital route behaves like a quick bridge to care, or like another wall.
Practices Where The Digital Layer Looks Better
The more clearly positive digital clusters in the rebuilt corpus include:
- The Brooke Surgery
- The Sides Medical Practice
- The Range Medical Centre
- Ashville Surgery
- LADYBARN GROUP PRACTICE
- parts of Chorlton Family Practice
What they have in common in patient reviews:
- lots of same-day or fast-response stories
- online forms described as easy or efficient
- digital routes backed by real human follow-through
- patients sometimes explicitly compare the practice favourably with worse surgeries they have used before
The point is still not the platform alone. It is whether the workflow behind it appears to work.
Practices Where The Digital Layer Looks Worse
The more negative digital clusters now include:
- Florence House Medical Practice
- The Robert Darbishire Practice
- Delamere Medical Practice
- parts of Ashton Medical Group
- practices where the app, website, and phone routes seem to push patients back into each other
What these negative clusters have in common:
- more negative than positive digital mentions
- complaints about the website or form replacing human contact
- confusion about where to go, what link to use, or which system is active
- closed, unavailable, or unresponsive digital routes
- digital systems being experienced as another barrier rather than an easier route in
Bottom Line
The online/web layer in this corpus is real, widespread, and mixed.
The strongest single finding is still that patients usually do not name the software. They talk about a generic digital front door. That front door now shows up in 2,586 reviews and is described in two very different ways.
When it works, patients love it:
- same-day appointments
- quick callbacks
- no need to fight the phone queue
When it fails, they do not talk like software users. They talk like blocked patients:
- the website does not work
- the form is closed
- there is no response
- they are forced into a route they cannot use
- one system sends them to another
- nobody is available except the software
So the real divide here is still not vendor versus vendor. It is whether the digital route is actually connected to care, or whether it is just a new way of being shut out.
Practices With The Strongest And Weakest Digital Appointment Access Signals
This report uses the rebuilt review index and looks only at reviews that talk about the digital route into care in a way that is clearly tied to appointments, requests, callbacks, triage, or being seen by a clinician.
In the rebuilt 40,506-review corpus, this pass finds 1,839 digitally appointment-relevant reviews across 291 practices.
Headline
- 831 reviews read as mainly positive about the digital route into care
- 902 read as mainly negative
- 106 are mixed
So the digital front door is not a fringe issue. It is large enough to rank practices and produce a real shortlist for manual checking.
What I Looked For
A review was counted here if it mentioned a digital route such as:
- website, online, online form, online booking, app, system, online triage
- or named systems such as AskMyGP, PATCHS, eConsult, Accurx, or NHS App
and also linked that route to:
- getting an appointment
- sending a request
- waiting for a callback
- triage
- or actually being seen
Where a review was clearly about digital appointment access but did not use one of the sharper positive or negative phrases, the review rating was used as a fallback signal. So this is broader than a pure keyword tagger, but it is still tied to the digital route into care.
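A simplified version of that tagging logic looks like this. The keyword lists and the stars threshold are illustrative assumptions, not the script's real rules, and naive substring matching like this would need word boundaries in a real pass:

```python
from typing import Optional

# Illustrative keyword lists only; the real pass uses larger, sharper sets.
DIGITAL = ("website", "online", "patchs", "askmygp", "econsult", "accurx",
           "nhs app", "the app", "system")
APPOINTMENT = ("appointment", "request", "callback", "call back", "triage", "booking")
POSITIVE_PHRASES = ("easy", "quick", "same day", "efficient")
NEGATIVE_PHRASES = ("no response", "closed", "failed", "fob", "no record")

def tag_digital_review(text: str, stars: int) -> Optional[str]:
    """'positive' / 'negative' / 'mixed', or None if not digitally appointment-relevant."""
    t = text.lower()
    # Must mention a digital route AND link it to getting seen.
    if not (any(k in t for k in DIGITAL) and any(k in t for k in APPOINTMENT)):
        return None
    pos = any(k in t for k in POSITIVE_PHRASES)
    neg = any(k in t for k in NEGATIVE_PHRASES)
    if pos and neg:
        return "mixed"
    if pos or neg:
        return "positive" if pos else "negative"
    # No sharp phrase either way: fall back to the star rating.
    return "positive" if stars >= 4 else "negative"
```

The rating fallback is what makes this broader than a pure keyword tagger while keeping it tied to the digital route into care.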
What This Ranking Is Good For
This is a candidate list for the next step: checking what each practice actually uses and how the better and worse digital-access setups compare. It is not a final verdict on any one software product.
Named platform mentions in the top 50 practices:
- NHS App: 15
- AskMyGP: 6
- PATCHS: 6
- eConsult: 3
- Accurx: 3
- unknown only: 23
Named platform mentions in the bottom 50 practices:
- NHS App: 16
- AskMyGP: 7
- PATCHS: 4
- eConsult: 4
- Accurx: 1
- unknown only: 21
So the named product alone is still not enough. The workflow around it still matters a lot.
Grounding Examples
Stronger digital-access positives in the corpus look like this:
- "Submitted my medical request online and two hours later saw a GP. Excellent service. Could not be happier"
  Sandris Vilcans, LADYBARN GROUP PRACTICE, 3 months ago
- "The reception staff are absolutely fantastic. It’s one of the best GPs I’ve been to. I get a appointment next day. I do a lot of online bookings. They’re really good digital and on the phone as well."
  Bryan Fashion, The Quays Practice, 10 months ago
- "The online portal made it easy to get an appointment/call from the Team. Who told me to come straight away"
  Hilps, Handforth Health Centre, 2 years ago
Stronger digital-access negatives look like this:
- "Three times in the past year i have tried to get an appointment and failed using the triage system. The reception just fob you off with a link then no appointment"
  Stephen Hughes, Kearsley Medical Centre, 7 months ago
- "Most horrible place ever ... fill in the online form ... the number needed to be changed ... she wouldn't change it"
  Selina Faizi, Dickenson Road Medical Centre, 3 months ago
- "Elderly mum unable to get an appointment not able to use online form and won’t book an appointment over the phone."
  Sam Rothwell, Kearsley Medical Centre, a year ago
Top 50 Practices
These are the strongest practices in this pass for digitally linked appointment access, ranked by the balance of positive vs negative digital-access reviews, with stronger evidence weighted above very thin evidence.
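One standard way to implement "balance of positive vs negative, with thin evidence weighted below strong evidence" is a Wilson lower bound on the positive share. This is an assumption about the general technique, not the ranking script's exact formula:

```python
from math import sqrt

def wilson_lower_bound(pos: int, neg: int, z: float = 1.96) -> float:
    """Lower 95% confidence bound on the true positive share.

    Shrinks toward 0 when evidence is thin, so 23 positive / 2 negative
    outranks 1 positive / 0 negative even though the raw share of the
    latter is 100%.
    """
    n = pos + neg
    if n == 0:
        return 0.0
    p = pos / n
    z2 = z * z
    centre = p + z2 / (2 * n)
    spread = z * sqrt(p * (1 - p) / n + z2 / (4 * n * n))
    return (centre - spread) / (1 + z2 / n)
```

Ranking by a score like this explains why a practice with 23 positive and 2 negative digital-access reviews can sit above single-review practices that show a raw 100.0% share.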
| Rank | Practice | Code | Positive | Negative | Mixed | All relevant | Positive share | Named platform markers |
|---|---|---|---|---|---|---|---|---|
| 1 | LADYBARN GROUP PRACTICE | P84017 | 23 | 2 | 0 | 25 | 92.0% | NHS App 1 |
| 2 | The Quays Practice | D5B9D | 7 | 0 | 0 | 7 | 100.0% | NHS App 1 |
| 3 | Handforth Health Centre | N81070 | 5 | 0 | 0 | 5 | 100.0% | mostly generic website/online wording |
| 4 | Manor House Surgery | C81081 | 5 | 0 | 1 | 6 | 100.0% | mostly generic website/online wording |
| 5 | The Sides Medical Practice | P87016 | 28 | 4 | 0 | 32 | 87.5% | NHS App 2 |
| 6 | The Range Medical Centre | P84039 | 16 | 2 | 0 | 18 | 88.9% | NHS App 3 |
| 7 | The Brooke Surgery | P89002 | 52 | 9 | 0 | 61 | 85.2% | AskMyGP 24 |
| 8 | The Poplars Medical Practice | P87002 | 9 | 1 | 0 | 10 | 90.0% | mostly generic website/online wording |
| 9 | Bowland Medical Practice | P84024 | 4 | 0 | 0 | 4 | 100.0% | NHS App 1 |
| 10 | Cornbrook Medical Practice | P84669 | 4 | 0 | 0 | 4 | 100.0% | mostly generic website/online wording |
| 11 | The Alexandra Practice | P84650 | 13 | 2 | 0 | 15 | 86.7% | AskMyGP 1 |
| 12 | Conway Road Medical Practice | P91035 | 3 | 0 | 1 | 4 | 100.0% | AskMyGP 1 |
| 13 | Marple Medical Practice | P88021 | 3 | 0 | 0 | 3 | 100.0% | mostly generic website/online wording |
| 14 | Withington Medical Practice | P84665 | 3 | 0 | 0 | 3 | 100.0% | mostly generic website/online wording |
| 15 | Ashville Surgery | P84038 | 17 | 4 | 1 | 22 | 81.0% | AskMyGP 15 |
| 16 | Millgate Healthcare Partnership | P89015002 | 56 | 15 | 1 | 72 | 78.9% | NHS App 2 |
| 17 | The Borchardt Medical Centre | P84010 | 9 | 2 | 2 | 13 | 81.8% | NHS App 2 |
| 18 | Millgate Healthcare Partnership | P89015 | 55 | 16 | 1 | 72 | 77.5% | NHS App 2 |
| 19 | Ancoats Urban Village Medical Practice | P84673 | 15 | 4 | 0 | 19 | 78.9% | mostly generic website/online wording |
| 20 | Cheadle Medical Practice | P88020 | 27 | 8 | 4 | 39 | 77.1% | NHS App 2, Accurx 1 |
| 21 | Harwood Medical Centre | P82016 | 5 | 1 | 0 | 6 | 83.3% | eConsult 1 |
| 22 | Hazeldene Medical Centre | P84067 | 5 | 1 | 0 | 6 | 83.3% | mostly generic website/online wording |
| 23 | Lambgates Health Centre | C81106 | 5 | 1 | 1 | 7 | 83.3% | Accurx 1 |
| 24 | Woodlands Medical Practice | P85010 | 5 | 1 | 0 | 6 | 83.3% | eConsult 1, NHS App 1 |
| 25 | City Health Centre | Y02849 | 2 | 0 | 0 | 2 | 100.0% | mostly generic website/online wording |
| 26 | Lockside Medical Centre | P89005 | 2 | 0 | 0 | 2 | 100.0% | mostly generic website/online wording |
| 27 | Townside Surgery | P83005 | 2 | 0 | 0 | 2 | 100.0% | AskMyGP 1 |
| 28 | New Islington Medical Centre | P84064 | 16 | 5 | 0 | 21 | 76.2% | NHS App 1 |
| 29 | Chorlton Family Practice | P84068 | 38 | 13 | 13 | 64 | 74.5% | PATCHS 6, NHS App 1 |
| 30 | Norden Branch Surgery | P86006001 | 12 | 4 | 0 | 16 | 75.0% | PATCHS 4, NHS App 1 |
| 31 | Jalal Practice | P85601 | 4 | 1 | 1 | 6 | 80.0% | Accurx 1 |
| 32 | Holes Lane Medical Ltd. | N81007 | 7 | 3 | 0 | 10 | 70.0% | eConsult 5, PATCHS 2 |
| 33 | Family Surgery | P88005 | 3 | 1 | 1 | 5 | 75.0% | PATCHS 4 |
| 34 | Heywood Health | P86016 | 3 | 1 | 0 | 4 | 75.0% | NHS App 1, PATCHS 1 |
| 35 | Park View Group Practice | P88018 | 3 | 1 | 0 | 4 | 75.0% | NHS App 1 |
| 36 | The Reddish Family Practices | P88005001 | 3 | 1 | 1 | 5 | 75.0% | PATCHS 4 |
| 37 | WASHWAY ROAD MEDICAL CENTRE | P91014 | 3 | 1 | 0 | 4 | 75.0% | AskMyGP 3 |
| 38 | Archwood Medical Practice | P88625 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 39 | Ardwick Medical Practice | P84037 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 40 | Ashcroft Surgery | P84053 | 1 | 0 | 1 | 2 | 100.0% | mostly generic website/online wording |
| 41 | Astley General Practice | P92637 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 42 | BARRINGTON MEDICAL CENTRE | P91603 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 43 | Bolton Community Practice CIC - Ladybridge Surgery | Y03079001 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 44 | Bosden Moor Surgery | P88026005 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 45 | Brinnington Surgery | P88043 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 46 | Brunswick Medical Practice | P84611 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 47 | Caritas General Practice Partnership | P88013 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 48 | Cottage Lane Surgery | C81615 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 49 | Crompton View Surgery | P82607 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
| 50 | David Medical Centre | P84066 | 1 | 0 | 0 | 1 | 100.0% | mostly generic website/online wording |
Bottom 50 Practices
These are the weakest practices in this pass for digitally linked appointment access, again ranked by the balance of positive vs negative digital-access reviews with evidence weighting.
| Rank | Practice | Code | Positive | Negative | Mixed | All relevant | Positive share | Named platform markers |
|---|---|---|---|---|---|---|---|---|
| 1 | Dickenson Road Medical Centre | P84026 | 0 | 10 | 0 | 10 | 0.0% | mostly generic website/online wording |
| 2 | Kearsley Medical Centre | P82007 | 0 | 9 | 0 | 9 | 0.0% | eConsult 1 |
| 3 | Bolton Medical Centre | Y02790 | 0 | 7 | 1 | 8 | 0.0% | NHS App 2 |
| 4 | The Bolton Family Practice | P82013 | 0 | 7 | 1 | 8 | 0.0% | NHS App 2 |
| 5 | Guide Bridge Medical Practice | Y02713 | 0 | 6 | 0 | 6 | 0.0% | NHS App 1 |
| 6 | Stockport Medical Group (Delamere Practice) | P88632001 | 0 | 6 | 0 | 6 | 0.0% | mostly generic website/online wording |
| 7 | The Leigh Family Practice | Y02322001 | 0 | 6 | 0 | 6 | 0.0% | NHS App 1 |
| 8 | Culcheth Medical Centre | N81059 | 0 | 5 | 0 | 5 | 0.0% | eConsult 1, NHS App 1 |
| 9 | Davyhulme Medical Centre | P91009 | 0 | 5 | 0 | 5 | 0.0% | AskMyGP 1 |
| 10 | Hattersley Group Practice | P89013 | 0 | 5 | 0 | 5 | 0.0% | PATCHS 2 |
| 11 | Littletown Family Med Pract | P85605 | 0 | 5 | 0 | 5 | 0.0% | mostly generic website/online wording |
| 12 | Longfield Medical Practice | P83623 | 0 | 5 | 0 | 5 | 0.0% | PATCHS 1 |
| 13 | The Dunstan Partnership | P82001 | 0 | 5 | 1 | 6 | 0.0% | NHS App 1 |
| 14 | Valentine Medical Centre | P84019 | 0 | 5 | 0 | 5 | 0.0% | mostly generic website/online wording |
| 15 | Ailsa Craig Medical Centre | P84009 | 1 | 9 | 1 | 11 | 10.0% | mostly generic website/online wording |
| 16 | Greenbank Medical Practice | P85021 | 0 | 4 | 0 | 4 | 0.0% | mostly generic website/online wording |
| 17 | HEALEY SURGERY | P86013 | 0 | 4 | 0 | 4 | 0.0% | mostly generic website/online wording |
| 18 | Hawthorn MC | Y02890 | 0 | 4 | 0 | 4 | 0.0% | NHS App 1 |
| 19 | John Street Medical Practice | Y02827 | 0 | 4 | 0 | 4 | 0.0% | mostly generic website/online wording |
| 20 | Monarch Medical Centre | P83010 | 0 | 4 | 0 | 4 | 0.0% | AskMyGP 1 |
| 21 | Peel GPs | P83021 | 0 | 4 | 0 | 4 | 0.0% | NHS App 1 |
| 22 | Rock Healthcare Limited | Y02755 | 0 | 4 | 3 | 7 | 0.0% | AskMyGP 1 |
| 23 | Simpson Medical Practice | Y02520 | 0 | 4 | 0 | 4 | 0.0% | NHS App 1 |
| 24 | The Whitswood Practice | P84635 | 0 | 4 | 0 | 4 | 0.0% | mostly generic website/online wording |
| 25 | West Gorton Medical Practice | P84052 | 0 | 4 | 0 | 4 | 0.0% | mostly generic website/online wording |
| 26 | Wilmslow Road Surgery | P84626 | 0 | 4 | 0 | 4 | 0.0% | mostly generic website/online wording |
| 27 | Woodside Medical Centre | P86012 | 0 | 4 | 0 | 4 | 0.0% | NHS App 1 |
| 28 | New Bank Health | Y02960 | 2 | 11 | 2 | 15 | 15.4% | NHS App 1, PATCHS 1 |
| 29 | Lime Square Medical Centre | P84059 | 1 | 7 | 0 | 8 | 12.5% | mostly generic website/online wording |
| 30 | Alkrington Junction Practice | P86010001 | 0 | 3 | 0 | 3 | 0.0% | NHS App 1 |
| 31 | Ashworth Street Surgery | P86006 | 0 | 3 | 1 | 4 | 0.0% | mostly generic website/online wording |
| 32 | Conran Medical Centre | P84040 | 0 | 3 | 0 | 3 | 0.0% | mostly generic website/online wording |
| 33 | Droylsden Medical Practice | Y02663 | 0 | 3 | 1 | 4 | 0.0% | PATCHS 1 |
| 34 | Eastlands Medical Centre | P84051 | 0 | 3 | 0 | 3 | 0.0% | NHS App 1 |
| 35 | Gorton Medical Centre | P84028 | 0 | 3 | 1 | 4 | 0.0% | mostly generic website/online wording |
| 36 | Kingsway Medical Practice | P84022 | 0 | 3 | 0 | 3 | 0.0% | mostly generic website/online wording |
| 37 | Pikes Lane 1 | P82002 | 0 | 3 | 0 | 3 | 0.0% | eConsult 1 |
| 38 | Salford Primary Care Together | Y00445 | 0 | 3 | 0 | 3 | 0.0% | mostly generic website/online wording |
| 39 | Salford Primary Care Together - Eccles Gateway | Y00445001 | 0 | 3 | 0 | 3 | 0.0% | mostly generic website/online wording |
| 40 | Salford Primary Care Together - Little Hulton | Y00445002 | 0 | 3 | 1 | 4 | 0.0% | mostly generic website/online wording |
| 41 | West End Medical Centre | P89030 | 0 | 3 | 0 | 3 | 0.0% | mostly generic website/online wording |
| 42 | Heaton Norris Medical Practice | P88011 | 1 | 6 | 0 | 7 | 14.3% | mostly generic website/online wording |
| 43 | WEST TIMPERLEY MEDICAL CENTRE | P91016 | 1 | 6 | 0 | 7 | 14.3% | AskMyGP 2 |
| 44 | Boothstown Medical Centre | P92605 | 2 | 9 | 0 | 11 | 18.2% | AskMyGP 5 |
| 45 | Limelight Health and Wellbeing Hub | P91020 | 3 | 12 | 1 | 16 | 20.0% | AskMyGP 7, Accurx 1 |
| 46 | The Lakeside Surgery | N81108 | 2 | 8 | 2 | 12 | 20.0% | eConsult 6 |
| 47 | Albion Medical Practice | P89003 | 1 | 5 | 0 | 6 | 16.7% | NHS App 3 |
| 48 | Cornishway Group Practice | P84043 | 1 | 5 | 0 | 6 | 16.7% | NHS App 1 |
| 49 | Tower Family Healthcare | P83012 | 1 | 5 | 0 | 6 | 16.7% | AskMyGP 1 |
| 50 | Alexandra Group Med Pract | P85015 | 0 | 2 | 0 | 2 | 0.0% | NHS App 1 |
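Both tables rank by the balance of positive vs negative digital-access reviews "with evidence weighting". The exact weighting is not spelled out in this note, so the sketch below uses a Wilson score lower bound as an illustrative stand-in, not the report's actual formula:

```python
from math import sqrt

def weighted_positive_share(positive: int, negative: int, z: float = 1.96) -> float:
    """Wilson score lower bound on the positive share.

    Illustrative stand-in for the report's unspecified evidence
    weighting: small samples get pulled toward zero, so a 2-review
    perfect record ranks below a 50-review mostly-positive one.
    """
    n = positive + negative
    if n == 0:
        return 0.0
    p = positive / n
    centre = p + z * z / (2 * n)
    spread = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - spread) / (1 + z * z / n)
```

With a bound like this, the single-review 100.0% practices near the foot of the top table land below practices with larger piles of mostly positive digital-access reviews, which matches the ordering shown above.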
Reading This List Carefully
- A practice can rank well here and still have some bad digital reviews. LADYBARN GROUP PRACTICE, The Sides Medical Practice, and The Brooke Surgery all still have some negative digital-access reviews in the corpus.
- A practice can rank badly on only a modest number of digitally relevant reviews. That still matters for the manual follow-up step, but it is weaker evidence than a bigger pile of consistently bad digital-access reviews.
- Generic wording still dominates. In many reviews the patient does not name the software, so the next step is still to check the actual appointment/access stack practice by practice.
- This ranking is specifically about the digital route into appointments, requests, callbacks, and being seen. It is not a full ranking of overall practice quality.
Related Allocation Note
The platform-allocation follow-on now sits in its own note: digital-platform-allocation-report.md.
That separate report covers the wider named-platform question across the digitally relevant practice set, while this note stays focused on the top and bottom appointment-access experience ranking.
Named Digital Platform Allocation Across The Digitally Relevant Practice Set
This note takes the widened digital-appointment review set and asks a simpler follow-up question: for each digitally relevant practice, do the reviews ever explicitly name the platform being used?
The aim is not to prove the full appointment stack from reviews alone. It is to see how far the corpus lets us allocate practices to named systems such as AskMyGP, PATCHS, eConsult, Accurx, or NHS App, and how much still stays generic: just "the website", "the online form", "the app", or "the system".
Coverage
- 295 practices have at least one digitally signalled review
- 121 of those can be allocated to at least one named platform from explicit review wording
- 174 remain unknown only
- 23 show more than one named platform
Put simply: this review corpus lets us allocate about 41.0% of the digitally signalled practices to at least one named system, while about 59.0% still stay unnamed.
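The allocation step itself is just a name scan per practice. A minimal sketch follows; the spelling variants are assumptions, not the report's actual queries:

```python
import re

# Platform names as patients tend to write them; the variant list
# here is an assumption, not the report's real term set.
PLATFORM_PATTERNS = {
    "AskMyGP": re.compile(r"ask\s*my\s*gp", re.IGNORECASE),
    "PATCHS": re.compile(r"\bpatchs?\b", re.IGNORECASE),
    "eConsult": re.compile(r"\be[\s-]?consult\b", re.IGNORECASE),
    "Accurx": re.compile(r"\baccurx\b", re.IGNORECASE),
    "NHS App": re.compile(r"\bnhs\s*app\b", re.IGNORECASE),
}

def allocate(reviews: list[str]) -> str:
    """Return an allocation bucket label for one practice's reviews."""
    named = sorted({name for name, pat in PLATFORM_PATTERNS.items()
                    for text in reviews if pat.search(text)})
    return " + ".join(named) if named else "unknown only"
```

A practice whose reviews mention both PATCHS and the NHS App would land in an `NHS App + PATCHS` bucket, while reviews that only say "the website" fall through to `unknown only`.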
Distribution Of Known Versus Unknown
| Allocation bucket | Practices |
|---|---|
| unknown only | 174 |
| NHS App | 56 |
| AskMyGP | 22 |
| PATCHS | 10 |
| eConsult | 8 |
| AskMyGP + NHS App | 7 |
| NHS App + PATCHS | 7 |
| eConsult + NHS App | 5 |
| Accurx | 2 |
| eConsult + PATCHS | 2 |
| Accurx + AskMyGP | 1 |
| Accurx + NHS App | 1 |
The biggest single bucket by far is still unknown only. After that, the most common named allocations are:
- NHS App only: 56 practices
- AskMyGP only: 22 practices
- PATCHS only: 10 practices
- eConsult only: 8 practices
- Accurx only: 2 practices
What The Mixed Cases Look Like
Most multiple-platform cases are still small in number, but they matter because they are likely to be system changes, overlapping routes, or reviews naming both the practice front door and the NHS App.
Recurring combinations:
- AskMyGP + NHS App: 7 practices
- NHS App + PATCHS: 7 practices
- eConsult + NHS App: 5 practices
- eConsult + PATCHS: 2 practices
- Accurx + AskMyGP: 1 practice
- Accurx + NHS App: 1 practice
Examples of multi-platform practices in the reviews include:
- Chorlton Family Practice: PATCHS and NHS App
- Culcheth Medical Centre: eConsult and NHS App
- Heywood Health: PATCHS and NHS App
- Holes Lane Medical Ltd.: eConsult and PATCHS
- Limelight Health and Wellbeing Hub: Accurx and AskMyGP
- New Bank Health: PATCHS and NHS App
Satisfaction By System
There are two useful ways to read the system-level numbers:
- Any-use: every practice where that named system appears at least once in the reviews, even if the practice also shows another named system
- Single-only: only practices where the reviews point to that one named system and no other named system
The single-only view is cleaner if you want a rougher software comparison without as much contamination from system changes or mixed routes.
| System | Practices any-use | Practices single-only | Positive reviews any-use | Negative reviews any-use | Weighted positive share any-use | Positive reviews single-only | Negative reviews single-only | Weighted positive share single-only |
|---|---|---|---|---|---|---|---|---|
| AskMyGP | 30 | 22 | 126 | 119 | 51.4% | 110 | 80 | 57.9% |
| PATCHS | 19 | 10 | 93 | 87 | 51.7% | 17 | 32 | 34.7% |
| eConsult | 15 | 8 | 44 | 66 | 40.0% | 20 | 40 | 33.3% |
| Accurx | 4 | 2 | 39 | 22 | 63.9% | 9 | 2 | 81.8% |
| NHS App | 76 | 56 | 425 | 353 | 54.6% | 311 | 253 | 55.1% |
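The two views in the table can be built with one pass over the practice-level counts. A minimal sketch, assuming each practice record carries its named systems plus positive and negative review counts (the field names are illustrative):

```python
from collections import defaultdict

def system_views(practices: dict[str, dict]) -> dict[str, dict]:
    """Aggregate per-system review counts two ways.

    `practices` maps practice code -> {"systems": [named systems],
    "pos": int, "neg": int}; field names are assumptions.
    """
    out = defaultdict(lambda: {"any_pos": 0, "any_neg": 0,
                               "solo_pos": 0, "solo_neg": 0})
    for info in practices.values():
        for system in info["systems"]:
            row = out[system]
            row["any_pos"] += info["pos"]   # any-use: system appears at all
            row["any_neg"] += info["neg"]
            if len(info["systems"]) == 1:   # single-only: no other named system
                row["solo_pos"] += info["pos"]
                row["solo_neg"] += info["neg"]
    return dict(out)
```

Mixed-platform practices only contribute to the any-use columns, which is why the single-only slice is the cleaner software comparison.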
What Those System Numbers Suggest
- NHS App is still the most commonly allocatable named route in this dataset, but it appears in both the stronger and weaker practice groups, so it is not a clean quality marker on its own.
- AskMyGP looks roughly balanced overall and somewhat better in the cleaner single-only slice than in the mixed any-use slice.
- PATCHS looks roughly balanced in the any-use view but weaker in the single-only slice.
- eConsult looks weaker than the others in both any-use and single-only review balance.
- Accurx looks better in the tiny single-only slice, but that is based on just 2 single-only practices and should not be over-read.
How Much Of The Ranked Set Is Still Unknown
Even inside the ranked practices, unknowns remain a huge share.
- top 50 practices with no named platform in reviews: 23
- bottom 50 practices with no named platform in reviews: 21
So the next manual step is still necessary. Reviews get us a long way, but they do not solve the whole allocation problem.
Bottom Line
The reviews are good enough to allocate a substantial minority of the digitally relevant practices to named systems, but not most of them. The biggest bucket is still unnamed website/form/app language.
That means the review corpus can already support a first-pass software comparison, but only with caution:
- use single-only practices when you want the cleanest software read
- keep any-use practices when you want more coverage and more real-world messiness
- treat the unknown only group as a large unresolved block that still needs direct checking practice by practice
This is enough to start building a real quality or satisfaction picture by platform, but not enough to stop doing manual allocation work.
8. GTD-Managed Practices Review Report For PPG Discussion
Prompt
for all of our MD reports in reviews-search/output, I'd like you to write a new report looking at the same issues for only GTD managed practices, using only examples from GTD practice reviews, and in its context. Basically Rather than many reports on the entire corpus, I want a single report looking at the same issues from scratch for only GTD practices (it's about 13 practices iirc) in depth with named/~dated real reviews included. This version will be taken to the practice PPG.
Current file
gtd-managed-practices-ppg-report.md
Current source basis
- consolidated GTD-only reruns of the earlier themes
Current discoveries
- The GTD slice remains much harsher than the wider corpus: 627 of 830 reviews, 75.5%, are 1 or 2 star.
- Access, staff tone, digital front-door problems, weak follow-up, exclusion language, urgent redirection, and weak public response behaviour all show up strongly in GTD-managed practice reviews.
- The heaviest complaint load sits around practices such as New Bank Health, Guide Bridge Medical Practice, Charlestown MD, Gordon Street Medical Centre, Simpson Medical Practice, and Droylsden Medical Practice.
- The report remains a single GTD-only synthesis rather than many separate GTD-only notes.
GTD-Managed Practices Review Report For PPG Discussion
This report is a fresh read of the Google review corpus for GTD-managed practices only.
It does not summarise the wider reports. It starts again from the GTD slice alone and asks the same broad questions:
- what patients mainly complain about
- what they say about staff and care
- what happens with digital routes
- where the strongest warning signs of harm appear
- how people talk about trying to leave or being stuck
- how practices respond in public
- what seems to have changed over time
All examples in this report come from GTD-managed practice reviews only.
The GTD Slice
The GTD-managed review set in this rebuilt index still contains 830 reviews across 13 practices:
- Ashton Gp Service
- Charlestown MD
- City Health Centre
- Droylsden Medical Practice
- Gordon Street Medical Centre
- Guide Bridge Medical Practice
- Hattersley Group Practice
- Lindley House Health Centre
- Millbrook Medical Practice
- Mossley Medical Practice
- New Bank Health
- Simpson Medical Practice
- The Smithy Surgery
The overall rating shape is harsh:
- 602 reviews are 1 star
- 25 are 2 star
- 18 are 3 star
- 33 are 4 star
- 152 are 5 star
That means 627 reviews, 75.5%, are 1 or 2 star.
This is not a balanced-looking slice. It is a strongly complaint-heavy one, sitting inside the rebuilt 40,506-review corpus but staying tightly focused on GTD-managed practices only.
Headline
The main GTD story is not one single bad thing. It is a repeated patient feeling that the front door is hard to get through, hard to trust, and often pushes people somewhere else.
Access dominates the GTD review set, but it does not stop at access. The complaints spill into:
- rude or blocking reception
- missed follow-up and prescription drift
- online systems that feel like another barrier
- feeling dismissed when unwell
- being sent to walk-in centres, hospitals, or A&E
- very weak public responses when patients complain
The heaviest public complaint load falls on New Bank Health, Guide Bridge Medical Practice, Charlestown MD, Gordon Street Medical Centre, Simpson Medical Practice, and Droylsden Medical Practice, but the slice is not completely flat. There are a few clearer positive pockets, especially in City Health Centre, The Smithy Surgery, and parts of Mossley Medical Practice. There are also some positive reviews in more complaint-heavy practices where patients praise a specific doctor, nurse, or receptionist.
So the picture is not "every GTD practice is identical". It is more that the negative weight is very strong, and it falls on a familiar set of problems.
1. Access Is Still The Core Problem
Using a simple fulltext pass over GTD reviews, I found:
- 366 GTD reviews mentioning appointment problems
- 289 mentioning phone access
- 278 mentioning reception
- 137 mentioning follow-up, referrals, results, or prescriptions
- 71 mentioning online or website routes
If I narrow those to 1 and 2 star GTD reviews only:
- 304 low-star GTD reviews mention appointment problems
- 259 mention phone problems
- 218 mention reception
- 120 mention follow-up or referrals
- 60 mention online or website routes
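The counts above come from a simple fulltext pass. A minimal sketch of that kind of pass follows; the term lists are illustrative, not the report's actual queries:

```python
# Illustrative theme markers; the report's real query terms are not
# listed, so these term lists are assumptions.
THEMES = {
    "appointments": ("appointment", "slot", "booking"),
    "phone": ("phone", "call", "ring", "line"),
    "reception": ("reception", "receptionist"),
    "follow_up": ("follow up", "referral", "results", "prescription"),
    "online": ("online", "website", "app ", "e-consult"),
}

def theme_counts(reviews: list[dict]) -> dict[str, int]:
    """Count reviews mentioning each theme; one review can hit several."""
    counts = {theme: 0 for theme in THEMES}
    for review in reviews:
        text = review["text"].lower()
        for theme, terms in THEMES.items():
            if any(term in text for term in terms):
                counts[theme] += 1
    return counts
```

The low-star variant is the same pass over a filtered list, for example `theme_counts([r for r in reviews if r["stars"] <= 2])`.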
This is the main GTD pattern. Patients are not usually writing about one isolated bad moment. They are writing about a route into care that feels blocked at several points at once.
Examples:
- "Booking an appointment is an impossible task." (Chloe Bradshaw, Gordon Street Medical Centre, 2 days ago)
- "Every question that you have go online. Avoid" (John Apetri, New Bank Health, 4 years ago)
- "I have never been able to get an appointment in less than a week and usually have to use the walk in centre." (Butterfly Effect Transformation, Guide Bridge Medical Practice, 8 years ago)
- "Worst experience. Weak staff." (John Apetri, New Bank Health, 4 years ago)
The GTD access problem is not just high demand in the abstract. In the reviews it is described as a chain:
- the phone is hard to get through on
- appointments are gone
- reception pushes people elsewhere
- the online route is confusing or shut
- follow-up does not land cleanly afterwards
2. Reception And Staff Tone Matter A Lot
The GTD slice is full of complaints about being spoken to badly, brushed off, or made to feel like a nuisance.
I found 188 GTD reviews mentioning staff-attitude language, 177 of them in 1 and 2 star reviews.
This is not just a manners issue in the reviews. Patients often describe tone as part of being denied care.
Examples:
- "Reception have an awful attitude problem." (dale garbutt, Gordon Street Medical Centre, a year ago)
- "the admin/reception team here are something else" (Hafsa Bakari, New Bank Health, 2 years ago)
- "The receptionist is useless and unhelpful." (Siem, Simpson Medical Practice, a year ago)
- "The receptionists are the worst I have ever experienced." (Kirsty M, Hattersley Group Practice, a year ago)
That said, the GTD reviews also show that patients notice individual staff who make things easier.
Examples of positive staff experience:
- "Thea is an amazing doctor who follows up appointments and makes sure she’s treating problems correctly" (Belinda Bayley, Ashton Gp Service, 3 years ago)
- "Very accessible, I was able to get an appointment quickly. Staff and doctors were friendly and very helpful." (Shehab Samaha, City Health Centre, 2 months ago)
- "Kathryn on reception is always very helpful when I phone" (Angels, The Smithy Surgery, Edited 2 years ago)
So one thing the PPG should keep in mind is that patients are not writing as though every member of staff is the problem. Quite often they separate "the system" from the one or two people who still help them through it.
3. Follow-Up, Prescriptions, Results, And Referrals Drift
This is one of the clearest secondary themes after access.
I found 137 GTD reviews mentioning follow-up, results, referrals, or prescriptions, including 120 low-star reviews.
What patients describe is drift:
- prescriptions not landing
- results not coming back
- referrals being delayed or doubted
- no callback after a promised contact
Examples:
- "Sent repeat prescription request via patches and got wrong medication prescribed 3 times in a row(!)" (Ewelina Hartung, Gordon Street Medical Centre, 4 months ago)
- "There have been multiple mistakes in my notes and results recently, and no one has responded to the patches I raised." (Kierron Byron, Gordon Street Medical Centre, 6 months ago)
- "it took them 3 months to get my results" (A E, Charlestown MD, 2 weeks ago)
- "lost a important letter from hospital and if i'd not chased it up a really important referral to another hospital wouldn't have occurred" (Stephen Brewster, Hattersley Group Practice, a year ago)
In the GTD slice, this follow-through problem is important because it often appears after the patient has already fought through the first access barrier.
4. Digital Routes: More Generic Website Complaints Than Named Platforms
The GTD digital picture is surprisingly narrow by name.
After stripping practice-response text, I found:
- 49 GTD reviews mentioning a generic online or website route
- 8 mentioning PATCHS
- 3 mentioning the NHS app
- no meaningful patient-side GTD mentions of AskMyGP, Accurx, or eConsult
That means the GTD digital story is mostly not about branded platforms. Patients mainly say:
- the website
- online
- the app
- the system
- the online form
This matters because the lived experience is not "I dislike vendor X". It is "the only route left to me is a web route that does not work well enough."
The GTD-specific digital ranking also lines up with that reading. In the wider digital appointment pass, Guide Bridge Medical Practice, Hattersley Group Practice, New Bank Health, Droylsden Medical Practice, and Simpson Medical Practice all sit on the weaker side of the digital-access table rather than the stronger side.
Examples:
- "they keep referring go online book online that's the procedure you have to follow" (mediawise TM, New Bank Health, 4 years ago)
- "The Patchs system you have to use to make requests is awful" (Mark Gambon, Hattersley Group Practice, 3 months ago)
- "The online app always says no appointments" (Simon Thomas, Hattersley Group Practice, 8 months ago)
- "The AI was asking me follow up questions ... and it's 'amazing' advice was to go to A+E" (Theodore Morrissey, Ashton Gp Service, 10 months ago)
The GTD-specific digital issue buckets are not huge in raw count, but they are sharp:
- 5 GTD reviews about usability or instruction failure
- 3 about triage burden or self-diagnosis
- 2 about no reply or lost request
- 1 about forms being closed or filled up
- 1 about explicit digital exclusion
There are also a few positive digital reviews in GTD practices, mostly where online contact leads quickly to a same-day result.
Examples:
- "My appointments are getting way easier when I request online! Quick response when you do online." (Naga Manickam, Charlestown MD, Edited 3 years ago)
- "The patches system is very good and easy to use" (Fiona Simpson, Simpson Medical Practice, 4 months ago)
But in the GTD slice, positive digital stories are clearly outweighed by the feeling that online is replacing access rather than improving it.
5. Exclusion: Patients Try To Leave, Or Find They Cannot
The GTD slice also shows what happens after patients lose trust.
Examples:
- "Register elsewhere." (Siem, Simpson Medical Practice, a year ago)
- "I’ve changed doctors as this place is nothing short of a joke." (Genuine Bloodshed, Guide Bridge Medical Practice, 2 years ago)
- "I was deducted from the practise due to my postcode being out of their catchment area without my knowledge" (Hafsa Bakari, New Bank Health, 2 years ago)
- "there is no option but to register elsewhere" (HG, The Smithy Surgery, Edited 4 months ago)
The main point here is not just that some patients leave. It is that some feel trapped until they can leave, or are moved around by catchment and postcode rules rather than by successful care.
6. Clinical Warning Signs Are Present In The GTD Slice
Using the same harm-signal scan as before but only on GTD low-star reviews, I found 48 flagged GTD reviews with a self-reported clinical warning sign.
Within those GTD harm-signal reviews:
- 22 mention hospital or urgent escalation
- 14 combine dismissal or not being listened to with a bad outcome
- 10 link delay or postponed care to harm
- 5 mention misdiagnosis or wrong diagnosis
- 3 mention wrong or unsafe medication
- 3 mention a serious condition or near miss
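Buckets like "dismissal combined with a bad outcome" need two marker families to co-occur in a single review. A minimal sketch, with illustrative term lists rather than the scan's real ones:

```python
# Both term lists are assumptions for illustration only.
DISMISSAL = ("not listened", "dismissed", "fobbed off", "didn't believe")
BAD_OUTCOME = ("a&e", "hospital", "emergency", "worse", "life support")

def flags_dismissal_with_outcome(text: str) -> bool:
    """True when a review pairs dismissal language with a bad outcome."""
    lowered = text.lower()
    return (any(term in lowered for term in DISMISSAL)
            and any(term in lowered for term in BAD_OUTCOME))
```

Requiring the co-occurrence keeps the bucket from counting every hospital mention as a harm signal.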
The GTD practices with the most flagged harm-signal reviews are:
| Practice | Harm-signal reviews |
|---|---|
| Ashton Gp Service | 6 |
| Guide Bridge Medical Practice | 6 |
| New Bank Health | 6 |
| Charlestown MD | 5 |
| Droylsden Medical Practice | 5 |
| Gordon Street Medical Centre | 5 |
| Lindley House Health Centre | 5 |
Examples:
- "I had lost 50% of my blood to an internal bleed" (Genuine Bloodshed, Guide Bridge Medical Practice, 2 years ago)
- "got wrong medication prescribed 3 times in a row" (Ewelina Hartung, Gordon Street Medical Centre, 4 months ago)
- "Please don't go to this GP if you care about yourself." (Sana Mahdavi, New Bank Health, 4 years ago, after saying wrong medication made her skin problem worse)
- "this practise nearly cost me my life ... then ended up on life support" (JACOB TAYLOR, Hattersley Group Practice, 7 years ago)
- "Never answer phone are rude and have gave wrong medication to my children numerous times" (Jack Cunningham, Charlestown MD, 6 years ago)
This is not the biggest GTD theme by volume, but it is too frequent to treat as a handful of outliers.
7. Public Practice Responses In The GTD Slice Are Weak
The GTD response layer is much thinner and much more standardised than the wider corpus.
Across GTD reviews:
- 164 of 830 reviews have a practice response
- that is only 19.8% of GTD reviews
- only 42 positive GTD reviews have responses
- 116 negative GTD reviews have responses
But the more important part is what those responses actually say.
In the GTD slice I found:
- 115 responses with boilerplate signposting
- 109 bad negative responses
- 1 response with a clear sign of specific action
- 0 clearly useful negative responses in the stricter sense used in the wider response report
In plain terms: GTD public responses are mostly apology-plus-redirection.
Typical wording:
"please either contact the practice directly and ask to speak to the practice manager or contact our patient services team via gtd.feedback@nhs.net"
This wording appears again and again, especially in Gordon Street Medical Centre responses.
Examples:
- "Your comments have been passed on to the practice manager." (GTD response to Ewelina Hartung, Gordon Street Medical Centre, 4 months ago)
- "If you would like to discuss your feedback further, please either contact the practice directly and ask to speak to the practice manager or contact our patient services team" (GTD response to Meg Wood, Gordon Street Medical Centre, 4 months ago)
The GTD response pattern matters because it does not publicly tell patients or PPG members what changed. It mainly pushes the patient back into another route.
By practice, GTD response coverage is uneven:
| Practice | Response rate | Negative response rate |
|---|---|---|
| The Smithy Surgery | 61.9% | 75.0% |
| Ashton Gp Service | 42.0% | 36.8% |
| Droylsden Medical Practice | 41.9% | 47.4% |
| Gordon Street Medical Centre | 25.6% | 29.0% |
| Guide Bridge Medical Practice | 21.9% | 22.5% |
| New Bank Health | 3.6% | 1.8% |
| Charlestown MD | 0.0% | 0.0% |
| Millbrook Medical Practice | 0.0% | 0.0% |
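The coverage figures in the table are straightforward per-practice counting. A minimal sketch, assuming each review record carries a star rating and a response flag (both field names are illustrative):

```python
def response_rates(reviews: list[dict]) -> tuple[float, float]:
    """Overall and negative-only response coverage for one practice.

    Each review dict is assumed to carry "stars" and "has_response";
    both field names are illustrative, not the index's real schema.
    """
    total = len(reviews)
    responded = sum(1 for r in reviews if r["has_response"])
    negative = [r for r in reviews if r["stars"] <= 2]
    neg_responded = sum(1 for r in negative if r["has_response"])
    overall = responded / total if total else 0.0
    neg_rate = neg_responded / len(negative) if negative else 0.0
    return overall, neg_rate
```

Running this per practice and sorting gives the uneven coverage picture above, including the all-zero practices.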
So the GTD response problem is not only what is said. In some places it is also silence.
8. What Has Changed Over Time In GTD Reviews
The GTD slice shows the same broad shift as the whole corpus, but in a more concentrated form.
Among GTD 1 and 2 star reviews:
- 91 sit in the older 2016-2019 window
- 443 sit in the recent 2022-2026 window
The main shift is not that older complaints disappear. It is that digital and remote barriers are layered on top of the older problems.
In the GTD low-star slice:
- digital-language complaints rise from 1 older review to 42 recent reviews
- face-to-face complaint language rises from 0 to 9
- phone complaint language rises from 9 to 30
- walk-in-centre language is present in both periods, from 10 older to 31 recent
Older GTD reviews sound more like this:
- "Never got an appointment!!!! Always regarded to Walk-in centre or A&E" (Veroca Hlavaca, Simpson Medical Practice, 8 years ago)
- "It’s very difficult to get an appointment. I’ve ... been to the walk in centre more times, than I’ve seen a doctor at this surgery." (Jo w, Droylsden Medical Practice, 8 years ago)
Recent GTD reviews sound more like this:
- "go online book online that's the procedure you have to follow" (mediawise TM, New Bank Health, 4 years ago)
- "The online app always says no appointments" (Simon Thomas, Hattersley Group Practice, 8 months ago)
- "actually getting an appointment face to face is near impossible" (Nikki Rostron, Simpson Medical Practice, 3 years ago)
So the GTD story over time is not improvement. It is the older access bottlenecks staying in place while the newer digital and triage layers sit on top of them.
9. Not All GTD Practices Look The Same
The complaint-heavy picture is strong, but there are clear exceptions and lighter patches.
Best-rated GTD practices in this corpus are:
| Practice | Reviews | Average rating |
|---|---|---|
| City Health Centre | 10 | 4.90 |
| The Smithy Surgery | 21 | 4.05 |
| Mossley Medical Practice | 13 | 3.69 |
Examples from the more positive side:
- "Most of my queries are addressed on the same day" (Rosanna Respecta, City Health Centre, 2 months ago)
- "I was able to get everything sorted on the day." (Shehab Samaha, City Health Centre, 2 months ago)
- "I moved to this surgery last year, I wish I had done it years ago" (Kathleen Ford, The Smithy Surgery, 4 years ago)
- "Speedy appointments, great communication and generally very personable." (Jessica Mcglynn, Mossley Medical Practice, 2 years ago)
There are also partial positives inside otherwise harsh practices:
- "Thea is an amazing doctor" (Belinda Bayley, Ashton Gp Service, 3 years ago)
- "Doctors are great!" (Angela, Gordon Street Medical Centre, Edited 11 months ago, before describing the rest of the system as failing)
This matters for PPG discussion because patients are often distinguishing between individual staff effort and the wider system around them.
10. Practice-Level Weight Of Complaint
By low-star volume, the heaviest GTD complaint load in this corpus falls on:
| Practice | Low-star reviews | High-star reviews | Total |
|---|---|---|---|
| New Bank Health | 109 | 24 | 138 |
| Guide Bridge Medical Practice | 80 | 15 | 96 |
| Charlestown MD | 74 | 22 | 97 |
| Gordon Street Medical Centre | 69 | 9 | 78 |
| Simpson Medical Practice | 67 | 13 | 81 |
| Droylsden Medical Practice | 57 | 16 | 74 |
| Ashton Gp Service | 57 | 11 | 69 |
That does not prove these are the worst practices in every real-world sense. But it does show where the strongest public dissatisfaction sits in the GTD review layer.
Bottom Line
The GTD-managed slice of the review corpus is dominated by one repeated patient experience:
patients struggle to get through the front door, feel blocked or worn down by the route in, and often say the system pushes them elsewhere rather than resolving the problem.
The strongest GTD themes are:
- appointments that feel impossible to get
- phone systems that lead nowhere
- reception behaviour described as obstructive or disrespectful
- weak follow-up on prescriptions, results, and referrals
- online routes that often feel like another barrier rather than an easier way in
- a smaller but serious layer of reviews describing harm, deterioration, or urgent escalation
- very weak public response quality from practices, with heavy use of boilerplate and almost no public evidence of change
The GTD slice is not wholly negative. Some practices, and some individual staff inside harder-pressed practices, are clearly valued by patients. But the balance of the reviews is not close. The complaint load is heavy, repeated, and unusually concentrated around access, follow-through, trust, and being pushed out of the practice route rather than helped through it.
For PPG discussion, the practical question is probably not "is there a problem?" The reviews already answer that.
The practical questions are:
- where is the front door breaking most often
- where are patients being redirected instead of treated
- where are prescriptions, results, and referrals drifting
- where is digital access helping, and where is it just replacing one blocked queue with another
- what public evidence of learning or change can practices actually show back to patients
9. Activism, Community Response, And Public-Warning Reviews
Prompt
a fringe issue that is worth checking in the broader corpus and writing up a new specific short report about is activism/community-response/busy-bodies presence in the dataset, like on the whole patients are reasonably angry (and often clearly angry but trying to be reasonable) but I'm looking for reviewers that might be from or about people trying to change the systems, and how they're doing, community integration or support and anywhere people seem to be trying to communicate about practices in this context, not stindividual health-related issues, but basically I want to know if our corpus has any obvious 'weekend warriors' or 'chatgpt activists' (I know there is at least 1 in the dataset, me, so I'm curious if you can turn up more, or even if you miss that one, btu I think you'll spot it)
Current file
activism-community-response-report.md
Current source basis
analyze_activism_presence.py
Current discoveries
- Activism-style reviewing remains a real but fringe layer rather than a dominant one.
- The refreshed scan finds 1,625 flagged reviews, 4.0% of the corpus.
- The biggest bucket is still public-warning language aimed at other patients, followed by regulator escalation, review-about-review writing, authority-positioning, and community framing.
- The report still reads this as scattered public warning and escalation rather than sustained local organising.
Activism, Community Response, And Public-Warning Reviews
This is still a fringe layer in the corpus, but it is real.
Using a text scan for regulator escalation, public-warning language, review-about-review language, professional self-positioning, and community framing, I now find 1,625 reviews out of 40,506 with at least one of those markers: 4.0% of the whole corpus.
That broad number still includes a lot of simple "avoid this place" warnings. It does not mean there is a large organised movement sitting inside the data. It means a visible minority of reviewers are trying to do something more public than describing one bad appointment.
What This Looks Like
Most of these reviews are still not organised campaigning in any sustained sense. They are usually lone reviewers trying to do one or more of five things:
- create a public warning for other patients
- turn a bad experience into a formal complaint trail
- recruit outside attention from CQC, PALS, the ombudsman, NHS England or an MP
- speak as more than one isolated patient, using phrases like "local community", "patients deserve", or "not only me"
- answer the existing review culture around a practice, either by backing up the pile-on or pushing back against it
So yes, there are activist-style reviews here, but they are scattered and episodic rather than a large organised movement.
Main Patterns
1. Public-warning reviews are the biggest fringe pattern
The largest bucket is now 851 reviews with direct warning or mobilisation language. Most are negative and aimed at other patients rather than the practice itself.
Typical wording:
- "Please stay away from this surgery if you have health concern" (Amjad Sharif, Corkland Road Medical Practice, 2 years ago)
- "Poor doctors, I would advice all to deregister" (moee 687, Corkland Road Medical Practice, 3 years ago)
- "I strongly advise de-registering" (Erandi Samaraweera, Alkrington Junction Practice, 2 months ago)
This is the most common form of activism-adjacent writing in the corpus: not group organising, but public consumer warning.
2. Explicit regulator escalation exists and is easy to spot
There are now 329 reviews with clear regulator or formal-escalation markers. Within that:
- the biggest visible pattern is still CQC
- the wider set also includes ombudsman, NHS England, MP, complaint, and similar escalation language
These are some of the clearest examples of people trying to move beyond ordinary reviewing:
- "SUBMIT YOUR REVIEW TO THE CQC, ITS BEEN 5 YEARS SINCE BEEHIVE WAS INSPECTED, WE NEED TO GET THEM SHUT DOWN!" (Michael Mckechnie, Beehive Surgery, edited 2 years ago)
- "I'll be writing a complaint to the practice manager, the CQC, the NHS, and my MP." (Matthew Ritson, Unsworth Medical Centre, 3 years ago)
- "I notified IBC and ICO and will follow up to NHS England and the Ombudsman so hopefully they don't do that to anyone else." (Hristo Rankov, Woodbank Surgery, edited 2 months ago)
This is the strongest evidence that some reviewers are trying to build external pressure, not just leave a one-off rant.
3. Some reviews are written about the review culture itself
There are now 200 reviews that refer to "other reviews", "these reviews", "negative reviews", or otherwise treat the Google page as a public record in its own right.
This splits both ways.
Solidarity with previous complainants:
- "As mentioned in numerous other reviews the receptionist is extremely rude" (S B, New Bank Health, 5 years ago)
- "look at other reviews of this place you will see its not only me who thinks this." (s j, Pikes Lane 1, edited 7 years ago)
Counter-public or defence:
- "Just to provide a dissenting voice to the negative reviews" (Saw Naw, Charlestown MD, edited 4 years ago)
- "I feel the need to write this review due to the many unnecessary unfair reviews." (Paul, Little Lever Health Centre 2, 6 months ago)
This matters because it shows some practices develop a visible review reputation, and later reviewers start writing into that argument rather than just reporting one appointment.
4. Defence reviews and "the NHS is under pressure" language are real, but still small
This is a narrower layer than the broader review-about-review bucket above.
In a tighter follow-up scan, I found only 8 clearly explicit defence reviews that directly argue with earlier reviewers in terms like:
- "Totally disagree with the negative reviews"
- "Just to provide a dissenting voice"
- "Disregard the negative reviews"
- "people are quicker to post negative experiences than positive"
Examples:
- "Just to provide a dissenting voice to the negative reviews" (Saw Naw, Charlestown MD, edited 4 years ago)
- "Totally disagree with the negative reviews." (J O'C, Five Oaks Family Practice, 5 years ago)
- "I do not understand the negative reviews for this practice ... if you don't like it join bupa!!!!" (Christopher Quinn, 1/Monton Medical Practice, 7 years ago)
That is tiny in corpus terms. It is visible, but it is not a major current inside the dataset.
There is, however, a slightly broader system-defence layer around phrases like "under pressure", "trying their best", "not their fault", or "credit to the NHS". A tighter scan for that wording found 93 reviews, mostly positive.
Typical wording:
- "I can't imagine how much pressure they are all under at the moment" (s l, Charlestown MD, 4 years ago)
- "Yes it can be difficult ... GP shortages, NHS cuts. It's not their fault." (Zareen Manjra, Pikes Lane 1, 6 years ago)
- "Whilst I am upset with the NHS system as a whole ... the people within this doctor's surgery truly seem to be doing their best" (A D, LADYBARN GROUP PRACTICE, 2 years ago)
So the bigger pattern is not really reviewer tribalism. It is more often people reframing a local problem as part of wider NHS pressure and trying to shift some blame away from practice staff.
5. Overt culture-war bleed is present, but very thin
This was the harder thing to spot, and after testing it more directly my reading is still that it is small.
A broad politics-or-decline keyword sweep can pull just over a hundred reviews with words like government, Tories, underfunding, taxpayers, foreigners, or political agenda. But most of those are not culture-war rallies in the social-media sense. They are usually one of three things:
- blaming government underfunding for a local service problem
- saying staff are doing their best despite national decline
- occasional hostile remarks about migrants, foreigners, or politics
Examples:
- "If the tories actually funded the NHS ... there may be better reviews." (AB, Cheetham Hill Medical Centre, 3 years ago)
- "In my opinion, she has a political agenda to try and make the NHS appear in as poor a light as is possible" (Nigel Hall, Tower Family Healthcare - Greenmount, 10 months ago)
- "Unfortunately it is hard to get an appointment here due to so many foreigners." (J, Valentine Medical Centre, 8 months ago)
The main thing to say here is what does not appear much.
In direct checks for harder-edged culture-war or extremist language, I found:
- 0 hits for "marxist"
- 0 hits for "communist"
- 0 hits for "left wing"
- 0 hits for "right wing"
- 0 hits for "culture war"
- 0 hits for "deep state"
Even "woke" turned out to be useless as a signal here, because the hits were things like "woke up".
So yes, some politics leaks into the reviews. But it does not look like the review corpus has been seriously colonised by the wider online culture war. The bleed-through is real, but thin.
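The "woke" false positive above is a word-context problem rather than a word-boundary one, since a boundary-anchored pattern still matches "woke up". One way to drop the verb phrase is a negative lookahead; this is an illustrative sketch, not the scan's actual code:

```python
import re

# Match "woke" only when it is not immediately followed by "up",
# dropping the ordinary verb phrase that dominated the raw hits.
WOKE_SLANG = re.compile(r"\bwoke\b(?!\s+up\b)", re.IGNORECASE)

WOKE_SLANG.search("I woke up at 8am to phone the surgery")  # no match
WOKE_SLANG.search("this surgery has gone woke")             # match
```

Even with this filter the term would stay near zero in this corpus, which is the point of the section: the signal is absent, not hidden.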
6. NHS politics does show up, but mostly as blame or defence framing
There is a visible but still fairly small layer where reviewers stop talking only about one practice and start talking about the NHS as a system.
In a tighter follow-up scan, I found about 97 reviews with this kind of NHS politics framing. Most of them are not policy arguments in any deep sense. They are usually one of two things:
- defending practice staff by blaming NHS cuts, underfunding, GP shortages, or wider government failure
- attacking the practice while still framing it as part of a broken national system
Examples:
- "Yes you have to wait for appointments ... GP shortages, NHS cuts. It's not their fault." (Zareen Manjra, Pikes Lane 1, 6 years ago)
- "Whilst I am upset with the NHS system as a whole, primarily due to the underfunding and continued privatization by the Tories..." (A D, LADYBARN GROUP PRACTICE, 2 years ago)
- "The sooner the NHS is privatised the better off we will all be." (chris lowton, Culcheth Medical Centre, 2 years ago)
So this layer is real, but it is still usually about assigning blame or defending local staff, not about arguing through NHS reform in a sustained way.
7. Private healthcare discourse is more practical than ideological
This was one of the clearer follow-ups.
Using a tighter scan for go private, went private, private doctor, private health, private referral, private clinic, and Bupa, I found about 105 reviews with a meaningful private-healthcare signal. These skew strongly negative.
The dominant pattern is not admiration for the private sector. It is people saying they were pushed toward it:
- because the GP route failed
- because a referral stalled
- because the wait was too long
- because they felt they had no other option left
Examples:
- "I had to spend all my savings on a private doctor just for a prescription" (Jean murray, Holes Lane Medical Ltd., a month ago)
- "Had to go private to get any assistance." (Chloe Wilkins, Stockport Medical Group, 2 years ago)
- "One visit ... and then told to go private" (andy stephenson, Kearsley Medical Centre, 4 years ago)
- "I have now switched to BUPA because of how concerned I was by the incompetence of this medical centre." (Matt Goddard, St Johns Medical Centre, 2 years ago)
There are some positive or neutral private-health mentions, but they are the minority. Most of this is exit language, not endorsement.
8. Explicit market or privatisation talk is rare
If the question is whether the review corpus contains a lot of explicit discussion about competition, tendering, market logic, or privatisation, the answer is not much.
I found only 11 explicit privatisation mentions and about 28 wider market-or-contract style hits, many of which are noisy rather than genuine policy discussion.
The explicit examples split both ways:
- anti-privatisation:
  - "Stop moaning ... and tell your MP to do something about the stealth privatisation of the NHS" (Harjem 2007, Barlow Medical Centre, 3 years ago)
  - "the surgery seems to back the ongoing privatisation of the NHS" (Rachel Armstrong, Bodey Medical Centre, 8 years ago)
- pro-privatisation or market-exit frustration:
  - "The sooner the NHS is privatised the better off we will all be." (chris lowton, Culcheth Medical Centre, 2 years ago)
The main thing to say is that the corpus does not contain much thoughtful market-design discourse. Patients rarely talk like health-policy analysts. What they talk about is lived consequence:
- being forced private
- waiting too long
- blaming underfunding
- or arguing that local staff should not carry all the blame
So the reviews do touch the politics of delivery, but mostly through experience and frustration rather than through clear ideological positioning.
9. Insider or authority-positioned reviews are rare but real
Only 54 reviews clearly position the writer as a doctor, healthcare professional, GP receptionist elsewhere, or similar.
Examples:
- "I am writing this review as both a medical doctor and a parent" (Erandi Samaraweera, Alkrington Junction Practice, 2 months ago)
- "I work as a GP receptionist elsewhere and I've never seen such incompetence." (Declan Daly, Cheetham Hill Medical Centre, a year ago)
- "As a healthcare professional myself I have first hand witness the distress and anxiety this incompetent surgery team cause my mother." (HelloClarice, Rock Healthcare Limited, 4 years ago)
This is the closest thing in the corpus to a recognisable "formal advocate" voice. It is unusual, but it does recur.
10. Community framing exists, but it is small and split
The broader community-or-collective framing bucket now contains 294 reviews. That is much bigger than the tiny older local community phrase count, but it is still a minority layer.
This broader bucket includes language about:
- the local community
- other patients
- vulnerable people
- children, elderly relatives, or whole families
- people being warned to register elsewhere or avoid the practice
The tone is mixed rather than one-sided.
Positive versions:
- "A great service to the local community" (T Lau, Fairfax Group Practice, 9 years ago)
- "Fantastic centre for the local community with everything under one roof." (Lisa Mack, Limelight Health and Wellbeing Hub, a year ago)
Negative versions:
- "Very poor service for the local community, no appointments when needed." (danny knight, The Whitswood Practice, 9 years ago)
- "Its clear that the health and well being of the local community is not at all the priority here." (Stephen Palmer, Kearsley Medical Centre, 5 years ago)
There are still small signs of people talking about community communication outside Google itself:
- "many moor complaints on the app nextdoor this app is a community app all local people." (David Tranter, Tower Family Healthcare, 3 years ago)
So the corpus does contain some community-facing language, but not much evidence of durable, structured local organising.
Where This Shows Up Most
By raw count, the practices with the most activism-adjacent or public-warning reviews in this refreshed scan are:
- The Robert Darbishire Practice: 23 flagged reviews
- HEALEY SURGERY: 21
- New Bank Health: 20
- Hawthorn MC: 20
- Cheetham Hill Medical Centre: 19
- Beehive Surgery: 17
- Limelight Health and Wellbeing Hub: 17
- Lees Medical Practice: 15
This does not mean all of these have organised campaigns around them. Usually it means repeated public-warning behaviour, repeated references to formal complaints, or reviewers talking to and about each other through the review page.
Bottom Line
The corpus does have a visible fringe of reviewers who are trying to do more than describe their own care. They warn other patients, invoke regulators, compare notes with earlier reviewers, and sometimes write from an insider or campaign-style position.
But this is still a minority layer. The typical Google review in this dataset is still a personal account, not a piece of organised activism. The more activist or public-pressure style is present, recognisable, and worth watching, but it is thinly spread and mostly shows up as public warning plus escalation, not as long-running community organisation.
The same goes for culture-war spillover. There are some reviews arguing with other reviewers, some trying to defend staff by pointing to NHS pressure, and a few overtly political or hostile remarks. But the corpus does not look dominated by that register. It looks like a real-world patient review set with a small amount of online-political bleed, not a review space taken over by it.
The same is true of NHS politics and private-healthcare talk. Those themes are present, but they are usually grounded in immediate patient experience rather than abstract ideology. The private-healthcare strand mostly reads as forced exit or fallback. The NHS-politics strand mostly reads as blame, defence, or system frustration. Explicit market-competition thinking is there only in trace amounts.
Method Note
This was a text-pattern scan over the rebuilt indexed review corpus. It looked for:
- regulator and complaint escalation terms
- public-warning or de-registration language
- references to other reviews and the review page as a public record
- self-positioning as a doctor, healthcare worker, or similar authority
- community-facing framing such as "local community" or broader patient-interest language
For the added culture-war and defence-review section, I also did a narrower follow-up scan for:
- explicit review-against-review language such as "negative reviews", "dissenting voice", or "restore the balance"
- system-defence phrases such as "under pressure", "trying their best", or "not their fault"
- a much tighter set of overt political or hostile terms to test whether broader online culture-war language was materially present here
- tighter NHS-politics framing such as "underfunding", "NHS cuts", "GP shortages", "government", and "Tories"
- tighter private-healthcare language such as "go private", "private doctor", "private health", "private referral", "private clinic", and "Bupa"
It is a useful way to surface this fringe pattern, but it is intentionally rough. The broad count is best read as activism-adjacent presence, not a strict count of formal activists.
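The scan can be pictured as a small marker-category matcher run over each review. The category names and patterns below are illustrative stand-ins, not the actual contents of analyze_activism_presence.py:

```python
import re

# Illustrative marker sets; the real scan may use different terms and buckets.
MARKERS = {
    "regulator_escalation": [r"\bcqc\b", r"\bombudsman\b", r"\bnhs england\b", r"\bpals\b"],
    "public_warning": [r"\bstay away\b", r"\bavoid this\b", r"\bde-?register"],
    "review_about_review": [r"\bother reviews\b", r"\bnegative reviews\b"],
    "authority_positioning": [r"\bas a (doctor|nurse|healthcare professional)\b"],
    "community_framing": [r"\blocal community\b", r"\bnot only me\b"],
}
COMPILED = {cat: [re.compile(p, re.IGNORECASE) for p in pats]
            for cat, pats in MARKERS.items()}

def flag_review(text: str) -> set:
    """Return the set of marker categories that fire on one review."""
    return {cat for cat, pats in COMPILED.items()
            if any(p.search(text) for p in pats)}
```

A review counts toward the broad flagged total if at least one category fires; the per-bucket counts come from tallying each category separately.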
Catchment Terrain Reports
These two England-only markdown reports sit outside the review-text corpus reports. They use the England polygon catchment cache plus the published practice metrics to summarise random-practice quality odds and catchment-size distribution.
England Random Good Practice Chance
Generated: 2026-03-27 05:10 UTC
This report is England only.
Reason: the healthcare-terrain hard catchment source currently exists only for the England GP catchment cache, so this report narrows the earlier broad UK-style question to the England practice pool.
Headline
If you model the question as picking an England GP practice at random with no prior research, the best simple answer in this dataset is 59.6% if "good" means GP Patient Survey overall-good >= 75%.
The patient-weighted version of the same question is 55.1%.
Metric Notes
- Primary metric used for the headline: survey_overall_good_percent >= 75
- England practices in combined published dataset: 6,224
- Survey coverage in England rows: 6,172 / 6,224 = 99.2%
- Google score coverage in England rows: 6,135 / 6,224 = 98.6%
Alternative Reads
- Survey-defined good, random practice: 59.6%
- Survey-defined good, patient-weighted: 55.1%
- Google >= 3.75 (the direct 75% -> 3.75 stars mapping used in the survey/Google gap view), random practice: 27.4%
- Google >= 4.0 only, random practice: 21.0%
- Google >= 4.0 with at least 10 reviews, random practice: 19.1%
- Survey-defined good among England practices with survey data present: 60.1%
Plain-English Read
Using the survey-based definition, England looks roughly like a 6-in-10 random-practice chance of landing on a good practice, or about 55% if you weight by patient counts instead of by practice count.
The stark contrast is Google: even if you soften the Google cutoff to the direct survey-equivalent threshold of 3.75 stars, the random-practice chance is only 27.4%. At the stricter 4.0-star cutoff it drops to 21.0%.
So the important directional point is not subtle: in England it looks fairly common to be structurally near (in the catchment-system sense) a practice with good patient-survey results, but much rarer to be near one that looks good on Google ratings.
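Both headline odds come from the same yes/no test applied with different weights. A minimal sketch with made-up rows (the field names are illustrative, not the dataset's actual schema):

```python
# Illustrative per-practice rows; real rows come from the combined published dataset.
practices = [
    {"survey_overall_good_percent": 82.0, "patients": 4000},
    {"survey_overall_good_percent": 71.0, "patients": 12000},
    {"survey_overall_good_percent": 78.5, "patients": 6500},
]

GOOD = 75.0  # headline threshold: survey overall-good >= 75%

good = [p for p in practices if p["survey_overall_good_percent"] >= GOOD]

# Random-practice chance: every practice counts once.
by_practice = len(good) / len(practices)

# Patient-weighted chance: every practice counts by its registered list size.
by_patient = (sum(p["patients"] for p in good)
              / sum(p["patients"] for p in practices))
```

The patient-weighted number sits below the per-practice number whenever larger lists are disproportionately attached to below-threshold practices, which is the direction the England figures show.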
England Catchment Area Buckets
Generated: 2026-03-27 05:10 UTC
This report is England only and uses the hard polygon catchment cache under datasets/catchments/.cache/gp-catchments-england/by_practice/.
Area Method
- Each practice area is the sum of all polygon / multipolygon feature parts in its England catchment cache file.
- Area is calculated directly from lon/lat rings using a spherical polygon-area approximation with the Web Mercator Earth radius used elsewhere in the healthcare-terrain tooling.
- Invalid or non-ODS cache filenames are excluded from the ranked pool.
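The spherical area step can be sketched as a short line integral over each ring. This is a hedged reconstruction of the general technique (a Chamberlain-Duquette style approximation), not the healthcare-terrain tooling's exact code:

```python
import math

R = 6378137.0  # Web Mercator Earth radius in metres, as noted above

def ring_area_km2(ring):
    """Approximate spherical area of one closed (lon, lat) ring in km²."""
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]  # close the ring if the source left it open
    total = 0.0
    for (lon1, lat1), (lon2, lat2) in zip(ring, ring[1:]):
        # Line-integral term: d(lon) * (sin(lat1) + sin(lat2))
        total += math.radians(lon2 - lon1) * (
            math.sin(math.radians(lat1)) + math.sin(math.radians(lat2)))
    return abs(total) * R * R / 2.0 / 1e6  # m² -> km²

# Sanity check: a 1° x 1° cell touching the equator is roughly 12,390 km².
equator_cell = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

A practice's total is then the sum of this over all its polygon / multipolygon parts (with holes subtracted, if the cache stores them).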
Distribution Summary
- Valid England catchments ranked: 7,650
- Minimum area: 0.001 km²
- Median area: 22.86 km²
- 90th percentile area: 158.6 km²
- Maximum area: 9,994.2 km²
Bucket Design
Buckets use human-readable round-number area bands instead of equal-count splits. The aim is to show the real shape of the England catchment spread in ranges that are easy to think about, even if that means the counts are front-weighted and the long tail stays visible.
Full member codes are exported separately to england-catchment-area-bucket-members.tsv as TSV cells, one row per bucket.
| Bucket | Members | Share | Range |
|---|---|---|---|
| 1 | 92 | 1.2% | <= 1 km² |
| 2 | 199 | 2.6% | > 1 to <= 2 km² |
| 3 | 737 | 9.6% | > 2 to <= 5 km² |
| 4 | 1,037 | 13.6% | > 5 to <= 10 km² |
| 5 | 1,487 | 19.4% | > 10 to <= 20 km² |
| 6 | 1,806 | 23.6% | > 20 to <= 50 km² |
| 7 | 985 | 12.9% | > 50 to <= 100 km² |
| 8 | 779 | 10.2% | > 100 to <= 200 km² |
| 9 | 487 | 6.4% | > 200 to <= 500 km² |
| 10 | 34 | 0.4% | > 500 to <= 1,000 km² |
| 11 | 7 | 0.1% | > 1,000 km² |
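Mapping an area to one of these bands is a one-line edge lookup. A sketch using the upper edges from the table above:

```python
import bisect

# Upper edges of the round-number bands, in km²; anything beyond the last
# edge falls into the open-ended > 1,000 km² bucket.
EDGES = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]

def bucket(area_km2: float) -> int:
    """Return the 1-based bucket number for a catchment area."""
    # bisect_left keeps exact edge values (e.g. 1.0) in the lower band,
    # matching the <= upper-bound convention of the table.
    return bisect.bisect_left(EDGES, area_km2) + 1
```

For example, New Bank's 2.107 km² lands in bucket 3 (> 2 to <= 5 km²).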
New Bank Health
- Practice code: Y02960
- Catchment area: 2.107 km²
- Bucket: > 2 to <= 5 km²
- Global England rank by smallest catchment area: 314 / 7650
- Global England area percentile, smaller-first: 4.1
- Put plainly: 95.9% of England practices have larger catchments than New Bank
- Published Manchester extended England rank by smallest catchment area: 14 / 358
- Published Manchester extended England percentile, smaller-first: 3.9
- GTD England rank by smallest catchment area: 2 / 13
- GTD England percentile, smaller-first: 15.4
Scope note: All catchment areas in this report come from the one England catchment cache. The only scope changes here are whether New Bank is compared with all England catchments, the published Manchester-extended England pool, or just the GTD England subset.
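The smaller-first rank and percentile lines can be reproduced by sorting the chosen pool of areas and indexing into it. Tie handling here is simplistic and illustrative, and may differ from the real tooling:

```python
def smaller_first_rank(area, all_areas):
    """Rank one catchment in a pool, smallest area first.

    Returns (1-based rank, percentile rounded to one decimal place).
    Ties resolve to the first match, which is an assumption of this sketch.
    """
    ordered = sorted(all_areas)
    rank = ordered.index(area) + 1
    return rank, round(100.0 * rank / len(ordered), 1)
```

Swapping the pool (all England, the published Manchester-extended pool, or the GTD subset) is the only thing that changes the rank and percentile, which is exactly the scope point made above.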
Score Patterns By Catchment Size
- England practices with catchment area plus survey score: 6,051
- England practices with catchment area plus Google score: 5,974
- Overall survey vs catchment-area correlation is weak: Pearson 0.135, Spearman 0.138
- Overall Google vs catchment-area correlation is also weak: Pearson 0.148, Spearman 0.125
- Inside the more normal <= 100 km² range, the relationship is close to flat: survey Pearson 0.041, Google Pearson 0.034
The visible lift is mostly in the large-catchment tail rather than across ordinary urban and suburban sizes. That makes this look more like a rurality or population-sparsity effect than a simple rule that bigger catchments directly produce better scores.
| Bucket | Practices with area | Survey mean | Survey >= 75% | Google mean | Google >= 4.0 |
|---|---|---|---|---|---|
| <= 1 km² | 69 | 77.2 | 59.7% | 3.15 | 22.1% |
| > 1 to <= 2 km² | 149 | 75.9 | 56.4% | 3.20 | 19.3% |
| > 2 to <= 5 km² | 570 | 76.4 | 59.4% | 3.19 | 21.7% |
| > 5 to <= 10 km² | 805 | 75.2 | 55.2% | 3.13 | 19.0% |
| > 10 to <= 20 km² | 1,158 | 75.7 | 56.0% | 3.09 | 17.0% |
| > 20 to <= 50 km² | 1,423 | 76.3 | 57.8% | 3.12 | 17.9% |
| > 50 to <= 100 km² | 795 | 77.1 | 62.0% | 3.21 | 20.3% |
| > 100 to <= 200 km² | 655 | 80.8 | 72.4% | 3.49 | 31.2% |
| > 200 to <= 500 km² | 414 | 80.9 | 70.9% | 3.55 | 35.0% |
| > 500 to <= 1,000 km² | 20 | 82.9 | 75.0% | 3.56 | 31.6% |
| > 1,000 km² | 4 | 80.2 | 75.0% | 3.48 | 25.0% |
GTD England Practices
| Practice | Code | Area | Bucket | England rank | England percentile |
|---|---|---|---|---|---|
| Charlestown MD | Y02325 | 1.986 km² | > 1 to <= 2 km² | 290 / 7650 | 3.8 |
| New Bank Health | Y02960 | 2.107 km² | > 2 to <= 5 km² | 314 / 7650 | 4.1 |
| Simpson Medical Practice | Y02520 | 3.722 km² | > 2 to <= 5 km² | 705 / 7650 | 9.2 |
| Droylsden Medical Practice | Y02663 | 6.385 km² | > 5 to <= 10 km² | 1326 / 7650 | 17.3 |
| Gordon Street Medical Centre | P89011 | 6.497 km² | > 5 to <= 10 km² | 1353 / 7650 | 17.7 |
| The Smithy Surgery | P89602 | 7.897 km² | > 5 to <= 10 km² | 1629 / 7650 | 21.3 |
| Mossley Medical Practice | P89612 | 10.42 km² | > 10 to <= 20 km² | 2148 / 7650 | 28.1 |
| Hattersley Group Practice | P89013 | 11.65 km² | > 10 to <= 20 km² | 2381 / 7650 | 31.1 |
| Ashton Gp Service | Y02586 | 12.24 km² | > 10 to <= 20 km² | 2450 / 7650 | 32.0 |
| Guide Bridge Medical Practice | Y02713 | 12.83 km² | > 10 to <= 20 km² | 2534 / 7650 | 33.1 |
| City Health Centre | Y02849 | 23.67 km² | > 20 to <= 50 km² | 3890 / 7650 | 50.8 |
| Millbrook Medical Practice | Y02936 | 27.83 km² | > 20 to <= 50 km² | 4237 / 7650 | 55.4 |
| Lindley House Health Centre | Y02875 | 71.12 km² | > 50 to <= 100 km² | 5885 / 7650 | 76.9 |
Bucket Summaries
Bucket 1: <= 1 km²
- Members: 92
- Share of England catchments: 1.2%
- Smallest member area: 0.001 km²
- Largest member area: 1.000 km²
Bucket 2: > 1 to <= 2 km²
- Members: 199
- Share of England catchments: 2.6%
- Smallest member area: 1.005 km²
- Largest member area: 1.994 km²
Bucket 3: > 2 to <= 5 km²
- Members: 737
- Share of England catchments: 9.6%
- Smallest member area: 2.006 km²
- Largest member area: 4.999 km²
Bucket 4: > 5 to <= 10 km²
- Members: 1,037
- Share of England catchments: 13.6%
- Smallest member area: 5.006 km²
- Largest member area: 9.997 km²
Bucket 5: > 10 to <= 20 km²
- Members: 1,487
- Share of England catchments: 19.4%
- Smallest member area: 10.00 km²
- Largest member area: 19.99 km²
Bucket 6: > 20 to <= 50 km²
- Members: 1,806
- Share of England catchments: 23.6%
- Smallest member area: 20.03 km²
- Largest member area: 49.95 km²
Bucket 7: > 50 to <= 100 km²
- Members: 985
- Share of England catchments: 12.9%
- Smallest member area: 50.05 km²
- Largest member area: 99.98 km²
Bucket 8: > 100 to <= 200 km²
- Members: 779
- Share of England catchments: 10.2%
- Smallest member area: 100.0 km²
- Largest member area: 199.9 km²
Bucket 9: > 200 to <= 500 km²
- Members: 487
- Share of England catchments: 6.4%
- Smallest member area: 200.1 km²
- Largest member area: 499.1 km²
Bucket 10: > 500 to <= 1,000 km²
- Members: 34
- Share of England catchments: 0.4%
- Smallest member area: 501.3 km²
- Largest member area: 931.2 km²
Bucket 11: > 1,000 km²
- Members: 7
- Share of England catchments: 0.1%
- Smallest member area: 1,062.8 km²
- Largest member area: 9,994.2 km²
Refresh Notes
- These reports now sit on the rebuilt 40,506-review full-text index.
- The most script-driven reports in the current set are: clinical-harm-warning-signs-report.md, practice-responses-report.md, online-web-platform-experience-report.md, digital-appointment-practice-ranking-report.md, digital-platform-allocation-report.md, and activism-community-response-report.md.
- The more synthesis-heavy reports remain: access-issues-report.md, reviews-corpus-overview-report.md, older-vs-recent-complaints-report.md, staff-and-clinician-experience-report.md, and gtd-managed-practices-ppg-report.md.