amplify-cli: Elasticsearch / searchable updates broken for "old" Amplify installations
@SwaySway I believe PR #3391 which was discussed in #3359 broke all our Elasticsearch indexes.
Before Amplify v4.16.1 (or a few versions earlier), a record would land in ES looking something like this:
```json
{
  .... // more
  "_index": "member",
  "_type": "doc",
  "_id": "8f5ea23e-40e3-4bad-7cd9-35e36b4cda26",
  "_score": 32.273067,
  .... // more
}
```
After upgrading to 4.16.1, it lands like this:
```json
{
  .... // more
  "_index": "member",
  "_type": "doc",
  "_id": "id=8f5ea23e-40e3-4bad-7cd9-35e36b4cda26", <-------- note "id=" prepended to the value
  "_score": 32.273067,
  .... // more
}
```
`_id` is now prefixed with the key field name, where it used to be just the value.
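For illustration only (a sketch of the observed behavior, not the actual Amplify streaming-lambda code; the composite-key separator is an assumption), the change amounts to something like:

```python
def legacy_doc_id(keys: dict) -> str:
    # Pre-4.16.x behavior: the raw key value(s) become the ES document _id.
    return "|".join(str(v) for v in keys.values())

def new_doc_id(keys: dict) -> str:
    # Observed post-4.16.x behavior: "name=value" pairs, so the same
    # DynamoDB item now maps to a *different* _id than before.
    return "|".join(f"{k}={v}" for k, v in keys.items())

keys = {"id": "8f5ea23e-40e3-4bad-7cd9-35e36b4cda26"}
print(legacy_doc_id(keys))  # 8f5ea23e-40e3-4bad-7cd9-35e36b4cda26
print(new_doc_id(keys))     # id=8f5ea23e-40e3-4bad-7cd9-35e36b4cda26
```

Because the two ids differ, an update to an existing item is indexed as a brand-new document instead of overwriting the old one.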
The result is very damaging and completely broke our entire application: ES now creates a new document for any update to an existing item in the index, which leaves "duplicate" items all over our UI listings, one for the old, outdated record and one for the new one.
How can this be committed and shipped in Amplify without a HUGE warning AND a documented migration method? Do you think we are all just playing around with non-production applications?
After updating to the latest CLI last night, I sat here for many hours today pulling my hair out trying to figure out what was happening. Using Kibana, I was finally able to take a closer look at the index and spot this "simple" change, which has huge effects on every existing user of @searchable.
Do you have any suggestions for me @SwaySway? Deleting the entire index and then coding a migration job for ALL DynamoDB tables that use @searchable, so that every single item gets updated and re-triggered and thereby re-created in Elasticsearch, seems like a lot of work.
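The re-trigger approach described above could be sketched roughly like this (hypothetical helper, assuming a streams-backed @searchable table with a single `id` hash key; the `_reindexedAt` attribute name is made up for the touch-update):

```python
def reindex_all(table, key_name="id"):
    """Touch every item so the DynamoDB stream fires a MODIFY record
    and the @searchable ES lambda re-writes each document under the
    new _id scheme. Sketch only; returns the number of items touched."""
    touched = 0
    scan_kwargs = {"ProjectionExpression": key_name}
    while True:
        resp = table.scan(**scan_kwargs)
        for item in resp.get("Items", []):
            # A trivial attribute update still produces a stream record.
            table.update_item(
                Key={key_name: item[key_name]},
                UpdateExpression="SET #ts = :now",
                ExpressionAttributeNames={"#ts": "_reindexedAt"},
                ExpressionAttributeValues={":now": "2020-03-20T00:00:00Z"},
            )
            touched += 1
        last = resp.get("LastEvaluatedKey")
        if not last:
            return touched
        scan_kwargs["ExclusiveStartKey"] = last
```

With boto3 this would be driven by something like `reindex_all(boto3.resource("dynamodb").Table("Member-<appid>-<env>"))`, and every touch is a billed write, which is exactly the extra AWS cost complained about below.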
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 18 (7 by maintainers)
Commits related to this issue
- fix(graphql-elasticsearch-transformer): fix es lambda on duplicate docs fixed document to remove duplicates based on changes introduced supporting @key re #3602 #3705 — committed to SwaySway/amplify-cli by SwaySway 4 years ago
- fix(graphql-elasticsearch-transformer): fix duplicate records in es lambda (#3712) * fix(graphql-elasticsearch-transformer): fix es lambda on duplicate docs fixed document to remove duplicates bas... — committed to aws-amplify/amplify-cli by SwaySway 4 years ago
Thanks for your feedback @SwaySway, and sorry for being direct, but you guys need to get a ******* grip and start respecting your ~~users~~ customers, or Amplify will be abandoned by anyone who needs anything more than a playground/mockup app. I have done web development for 22 years now and I have never experienced more platform instability around new releases than I have on Amplify; you largely operate like the early 2000s in terms of quality, transparency, and documentation.
This is still wrong:
- You have no policy on releasing breaking changes. You fix one broken feature, and in the process break every single app using one of the tentpole features of the platform, the @searchable directive. Changing the key under which ES stores the data should not surprise anyone with even a tiny bit of insight: this is a BREAKING CHANGE and should be treated as such. Spending one more day to publish a warning and add the four lines of code needed for backward compatibility should not be something you'd even consider skipping just to get a bugfix out.
- You still have no reasonable changelog. It may have improved slightly, because it's no longer just repeating the same changes across new versions, but it still doesn't outline changes in a decent fashion. The Elasticsearch changes are not mentioned in any of the most recent versions; if they had been, I may have reviewed that part, and if not I, then someone else would have spotted that this is a breaking change (though that should have been caught by your own team, that is why you review code, right?).
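To make the "few lines for backward compatibility" point concrete, here is one hypothetical shape such a shim could take in the streaming lambda (my sketch, not Amplify's code; the id formats and the elasticsearch-py-style client calls are assumptions): index under the new id, then delete any document left under the legacy id so updates stop duplicating.

```python
def index_with_back_compat(es, index_name, keys, doc):
    """Hypothetical compat shim: write the document under the new
    'name=value' id, then clean up the document stored under the
    pre-4.16 raw-value id, if one exists."""
    new_id = "|".join(f"{k}={v}" for k, v in keys.items())
    legacy_id = "|".join(str(v) for v in keys.values())
    es.index(index=index_name, id=new_id, body=doc)
    if legacy_id != new_id:
        # ignore=[404] so a missing legacy document is not an error
        es.delete(index=index_name, id=legacy_id, ignore=[404])
```

Whether deletion or re-using the legacy id is the right call is debatable, but either would have avoided silent duplicates for existing indexes.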
There seems to be no project management on Amplify.
Issues in this repository are at an all-time high and keep increasing. It seems there just aren't enough resources to keep up, and that is worrying for companies that rely on the stability of this platform.
I will now end up spending two long dev days to first find the cause of, and then correct, a regression that should never have happened. On top of the development cost and the resulting slips on other deadlines in our app, I will pay for the AWS resources used to run these updates. I'm glad I don't have millions of items in DynamoDB; if I did, this would be very costly to correct, and the options outlined may not be feasible.