# What The Review Corpus Shows

This is a fresh read across the whole indexed Google review corpus, not just the access slice.

The corpus now contains `40,506` reviews across `404` practices. It is a local fulltext read of the review text, not a survey and not an NLP model. That matters because Google reviews are messy, self-selecting, and emotionally uneven. But that is also why they are useful. They show where care breaks badly enough, or works well enough, that people feel pushed to say so in public.

## The Big Shape

The first thing the larger corpus shows is still the same thing the smaller one showed: the middle is tiny.

- `12,513` reviews, `30.9%`, are `1` star
- `1,102` reviews, `2.7%`, are `2` stars
- `910` reviews, `2.2%`, are `3` stars
- `2,405` reviews, `5.9%`, are `4` stars
- `23,576` reviews, `58.2%`, are `5` stars
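The shares above follow directly from the raw counts quoted in this report. As a quick arithmetic check (a sketch, not the report's actual tooling):

```python
# Star counts from the corpus figures quoted above.
counts = {1: 12_513, 2: 1_102, 3: 910, 4: 2_405, 5: 23_576}

total = sum(counts.values())
assert total == 40_506  # matches the corpus size quoted above

# Reproduce the percentages, rounded to one decimal place.
shares = {stars: round(100 * n / total, 1) for stars, n in counts.items()}
# shares == {1: 30.9, 2: 2.7, 3: 2.2, 4: 5.9, 5: 58.2}
```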

So patients still mostly do not write like survey respondents. They write when they feel sharply let down, or when they feel someone really helped them.

That split is visible at practice level too:

- `388` of `404` practices have both low-star and high-star reviews
- only `5` have low-star reviews without any high-star reviews
- only `11` have high-star reviews without any low-star reviews
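The three counts above sum to `404`, so every practice falls into one of those buckets. The underlying partition is a simple presence check per practice. A minimal sketch, assuming reviews arrive as `(practice, stars)` pairs (a hypothetical input shape, not the report's actual data model):

```python
from collections import defaultdict

def mix_profile(reviews):
    """Classify each practice by whether it has low-star (1-2) and
    high-star (4-5) reviews. `reviews` is an iterable of
    (practice_name, stars) pairs -- a hypothetical input shape."""
    flags = defaultdict(lambda: [False, False])  # [has_low, has_high]
    for practice, stars in reviews:
        f = flags[practice]  # touch the entry so every practice is counted
        if stars <= 2:
            f[0] = True
        elif stars >= 4:
            f[1] = True
    profile = {"both": 0, "low_only": 0, "high_only": 0, "neither": 0}
    for has_low, has_high in flags.values():
        if has_low and has_high:
            profile["both"] += 1
        elif has_low:
            profile["low_only"] += 1
        elif has_high:
            profile["high_only"] += 1
        else:
            profile["neither"] += 1  # only 3-star reviews
    return profile
```

On this corpus the split came out as `388` both, `5` low-only, and `11` high-only, which also means no practice has only 3-star reviews.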

That means the live picture is rarely just "good practice" or "bad practice". Most places look mixed and uneven from the patient side.

The GTD-managed slice is still much harsher than the wider field. In that subset, `627` of `830` reviews, `75.5%`, are `1` or `2` stars, while only `185`, `22.3%`, are `4` or `5` stars.

## What Patients Mostly Complain About

Access is still the biggest single story, but the wider corpus makes it clearer that it is only the front end of a longer complaint.

The refreshed access report found:

- `18,321` reviews, `45.2%` of all reviews, mentioning a main access route or access-linked follow-through issue
- `7,033` reviews, `17.4%`, using stronger complaint-shaped access language
- `5,080` low-star reviews, `12.5%` of all reviews and `37.3%` of all low-star reviews, sitting inside that stricter access basket
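The last figure uses two different denominators: all `40,506` reviews, and the low-star base, which from the star table above is `12,513 + 1,102 = 13,615` reviews. A quick worked check:

```python
total_reviews = 40_506
low_star_base = 12_513 + 1_102   # 1-star + 2-star counts from the table above
strict_access = 5_080            # low-star reviews in the stricter access basket

assert low_star_base == 13_615
assert round(100 * strict_access / total_reviews, 1) == 12.5
assert round(100 * strict_access / low_star_base, 1) == 37.3
```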

But once you step back from access on its own, three other negative themes keep rising to the surface.

### Staff attitude and respect

Bluntly worded complaints about staff tone and treatment remain one of the biggest non-access themes.

A refreshed plain-language pass found `3,647` reviews using staff-attitude language of this kind, including `3,295` low-star reviews.
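A plain-language pass of this kind is, at its core, substring matching over lowercased review text. A minimal sketch, with an illustrative keyword list (the real pass's term list is not shown in this report):

```python
# Illustrative keywords only; an assumption, not the pass's actual term list.
STAFF_ATTITUDE_TERMS = (
    "rude", "condescending", "dismissive", "unhelpful", "patronising",
)

def flag_staff_attitude(text: str) -> bool:
    """True if the review text contains any staff-attitude term."""
    lowered = text.lower()
    return any(term in lowered for term in STAFF_ATTITUDE_TERMS)

# A pass over the corpus would then just filter and count, e.g.:
# flagged = [r for r in reviews if flag_staff_attitude(r["text"])]
```

A real pass would also need to guard against false positives such as negated phrases ("never rude"), which plain substring matching cannot distinguish.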

Patients are not only saying staff were rude. They are saying the rudeness matters because it comes at the point where they are already dependent on the service.

Examples:

> "Incredibly rude and unhelpful."  
> Sarah Malone, `Lostock Medical Centre`, `a year ago`

> "Rude, unhelpful, ignorant and condescending receptionists"  
> Richard Seddon, `Dalefield Surgery`, `5 months ago`

This is one of the clearest places where the reviews go beyond a survey tick-box. "Not helpful" in survey language becomes very concrete here.

### Follow-through, admin reliability, and whether anything actually happens

The wider corpus also makes follow-through look more central than it first did.

A refreshed pass found `2,720` reviews mentioning referrals, results, prescriptions, callbacks, chasing, or no response, including `1,703` low-star reviews.

These reviews are often less dramatic in tone than the pure access complaints, but they can be just as damaging. The patient gets through one barrier and then the trail goes cold.

Examples:

> "Doctor forgot to do a referral ... These failures has meant that I 'lost' 5 months"  
> Eileen Garland, `Chorlton Family Practice`, `2 years ago`

> "I have been trying to get a referral since February ... delay my referral for a few months."  
> R M, `Olive Family Practice`, `2 years ago`

The reviews make this feel like one joined-up problem: not knowing whether the practice will actually carry something through.

### Clinical trust and safety

This is smaller than access or staff tone, but it is the point where the stakes become harder to dismiss as mere customer-service dissatisfaction.

The refreshed clinical-harm scan flagged `872` low-star reviews, `2.2%` of all reviews and `6.4%` of low-star reviews, with stronger clinical-failure language. Within that:

- `57` mention misdiagnosis or wrong diagnosis
- `34` mention wrong or unsafe medication
- `363` mention hospital or urgent escalation
- `190` mention severe outcome or condition terms

Examples:

> "Misdiagnosed earlier in the year resulting in complications that required further treatment and discomfort"  
> Lachlan Pollock, `The Alexandra Practice`, `2 years ago`

> "Misdiagnosed for over TWO years because doctors refused face to face appointments and wouldn't listen."  
> Nicola Skinkis, `St. Andrew's House Surgery`, `2 years ago`

This is a major gap between the review corpus and the national survey. Reviews show a part of patient experience that the survey only reaches indirectly, if at all.

### Digital access is now a clear corpus layer

The refreshed digital work makes one more thing clear: the online front door is no longer a side note.

The current digital pass finds `2,586` reviews, `6.4%` of the whole corpus, with a recognisable website, app, online-form, or named-platform signal. Most of that is still generic website/form/app language rather than product naming, but it is now big enough to rank practices and compare patterns.

The more specific digital appointment pass finds `1,839` appointment-linked digital reviews across `291` practices:

- `831` mainly positive
- `902` mainly negative
- `106` mixed

That matters because the reviews are not only saying "digital exists". They are saying two very different things:

- when it works, it means same-day access, quick callbacks, and an easier route in
- when it fails, it means another blocked queue, another confusing handoff, or another reason patients feel shut out

## What Patients Praise

The positive side of the corpus is just as clear, and it is useful because it shows what good care looks like in ordinary patient language.

### Kind, listening, competent care

A refreshed positive pass found `11,880` reviews using language about helpful, caring, kind, listening, reassuring, professional, compassionate, or thorough care. `10,497` of those are high-star reviews.

When patients are happy, they often do not just say "good service". They say someone listened, believed them, explained things, reassured them, or sorted something properly.

Examples:

> "Warren was really good he listened to me and believed me when I told him about my back."  
> Andrea Gregory, `Manchester Integrative Medical Practice`, `4 months ago`

> "Dr Moran was attentive and listened."  
> Yasmin Warsama, `Manchester Integrative Medical Practice`, `a month ago`

### Friendly front desk staff still matter a lot

A second positive pass found `5,627` reviews using friendly, welcoming, lovely, or respectful front-desk language. `5,192` of those are high-star reviews.

That is the mirror image of the low-star reception problem. Reception is not a side issue in either direction. It is one of the main ways patients decide whether a practice feels human, usable, and safe.

### Good access is noticed when it works

The bigger corpus also makes a useful positive point. Patients absolutely do notice access when it works well.

Examples:

> "Managed to get same day appointment at 10:50 and even had blood test same day 11:30."  
> Sharon Wardle, `Pennine Medical Centre`, `10 months ago`

> "Using online form for appointment easy and obtained same day appointment"  
> alan ridge, `Chorlton Family Practice`, `8 months ago`

> "Dr. Singh was very helpful with my dermatologist referral"  
> R., `The Quays Practice`, `a week ago`

That last kind of praise matters. Patients do not only thank warmth. They thank systems and people who actually get something done.

## How Patients Write

The reviews do not read like survey responses. They read like people describing what happened to them.

Three features stand out more sharply in the bigger corpus.

### They write in chains, not categories

Survey questions split problems into neat boxes: phone contact, website contact, reception helpfulness, preferred clinician, overall experience.

Reviews usually tell a sequence:

1. could not get through
2. finally got through
3. was told nothing was left
4. was pushed online
5. got no reply
6. was spoken to badly
7. then had to chase a result, referral, or prescription

That chain is one of the biggest differences between the review corpus and the survey frame.

### The language is plain, blunt, and often hard-edged

Patients usually do not soften much. They use direct words like:

- rude
- awful
- appalling
- unhelpful
- disgusting
- useless

That plainness matters. It tells you how people interpret the service, not just what formally happened.

### Positive reviews are concrete too

Good reviews are often just as operational as bad ones. They say:

- I was listened to
- they got back to me quickly
- I got seen the same day
- reception were welcoming
- someone sorted the referral or prescription

So the corpus is useful for showing what patients want, not only what they hate.

### Some reviews are written for other patients, not just the practice

The refreshed activism/community pass adds another layer that does not show up well in cleaner survey work.

`1,625` reviews, `4.0%` of the corpus, contain some form of public-warning, regulator-escalation, review-about-review, authority-positioning, or community-framing language.

Most of these are not organised campaigning. They are lone reviewers trying to:

- warn other patients away
- tell people to de-register or complain
- point to the review page as evidence that this is not an isolated problem
- appeal to regulators, MPs, or complaint bodies

That matters because some reviews are written as public signals, not just as private complaints made visible.

## Where Reviews Go Beyond The National Patient Survey

The national GP patient survey does ask useful gateway questions. In the current survey set, that includes things like:

- how easy or difficult it is to contact the practice on the phone
- how easy or difficult it is to contact the practice using the website
- how easy or difficult it is to contact the practice using the NHS App
- how helpful the reception and administrative team are
- which online services people have used
- whether there is a preferred healthcare professional and how often patients get to see or speak to them

Those are useful questions. But the review corpus keeps adding things the survey cannot show well.

### Reviews show the route, not just the rating

The survey tells you whether contact felt easy. Reviews show which route patients tried, how many times they tried it, where it broke, and what they were told next.

### Reviews show emotional cost

The survey records difficulty or dissatisfaction. Reviews show anger, panic, humiliation, exhaustion, fear, and mistrust.

### Reviews catch exclusion and drop-off

Survey responses usually come from people who got far enough through the system to answer questions about contact or care. Reviews also include people describing being blocked at the door, bounced between routes, or giving up.

### Reviews join the stages together

The survey separates access, support, appointment quality, and overall experience. Reviews often describe them as one continuous failure.

### Reviews reach clinical-risk territory

The survey touches listening, time, and confidence. The reviews go further into misdiagnosis, delayed referrals, unsafe medication, hospital escalation, and near-miss language.

### Reviews also show what practices say back

The survey does not have a public-reply layer at all. The review corpus does.

The refreshed responses pass found `16,756` public practice responses in the corpus, with a sharp gap between reply rates to praise and criticism. That is useful because it lets the corpus show not only what patients say, but how practices publicly choose to answer, deflect, apologise, or stay silent.
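The gap between reply rates to praise and criticism is a grouped-rate comparison. A minimal sketch of that computation, assuming each review record carries a star rating and a has-public-reply flag (hypothetical field names):

```python
def reply_rates(reviews):
    """Percentage of reviews with a public practice reply, split into
    low-star (1-2) and high-star (4-5) bands. `reviews` is an iterable
    of dicts with hypothetical keys "stars" and "has_reply"."""
    bands = {"low": [0, 0], "high": [0, 0]}  # [replied, total] per band
    for r in reviews:
        if r["stars"] <= 2:
            band = "low"
        elif r["stars"] >= 4:
            band = "high"
        else:
            continue  # 3-star reviews sit outside both bands
        bands[band][1] += 1
        bands[band][0] += int(r["has_reply"])
    return {b: round(100 * replied / total, 1) if total else None
            for b, (replied, total) in bands.items()}
```

Comparing the two rates side by side is what exposes whether a practice answers praise more readily than criticism.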

## What This Corpus Helps Us Represent

If the point of using this corpus is to represent patient need more clearly than the survey usually can, the main lessons are now fairly plain.

### Access is only the start

Access is still the biggest theme, but patients do not stop there. They also talk about respect, clarity, follow-through, prescriptions, referrals, and whether they trust what they were told.

### Reception is not a side issue

In the reviews, reception is often the face of the practice. It is where patients feel helped, blocked, believed, doubted, respected, or dismissed.

### Patients want kindness and competence together

The positive reviews are not only about warmth. They are about warmth plus practical help. The negative reviews are not only about delay. They are about delay plus confusion, plus poor treatment, plus no confidence that the next step will happen.

### Mixed practices need closer reading

Because nearly every practice has both high and low reviews, the real question is often not "is this place good or bad?" It is:

- who gets through
- who gets stuck
- when does it work
- where does it break
- which failures keep repeating

### Digital routes now need reading alongside phone and reception

The newer corpus makes this much clearer than before. Access is no longer just a phone-and-reception story. For many patients it is now phone plus website plus app plus form plus callback logic, all joined together.

That means the digital layer is no longer optional context. It is part of the patient route.

## Bottom Line

Across the board, this bigger review corpus still says three main things.

First, patient experience is sharply polarised. Many people leave very happy. Many others leave very angry. Very few sit in the middle.

Second, access is the biggest theme, but not the only one. Staff attitude, weak follow-through, digital front-door experience, and clinical trust all keep returning in the review text.

Third, the reviews show patient need in a form the survey usually cannot: direct, event-based, operational, emotionally clear, and sometimes public-facing in the way patients warn each other or push for outside attention.

That is what makes them useful. They are not tidy. They are full of sequence, blame, gratitude, confusion, detail, and sometimes real fear. But that is exactly why they show things that cleaner instruments smooth away.
