realm-swift: EXC_BAD_ACCESS KERN_INVALID_ADDRESS realm::util::EncryptedFileMapping::read_barrier crash!!!
Goals
No source code or configuration changes were made. After updating to Realm 5.0.2, the crash occurs many times. Please help!
Expected Results
Actual Results
Crashed: com.apple.root.default-qos
0 Realm 0x10414eef8 realm::util::EncryptedFileMapping::read_barrier(void const*, unsigned long, unsigned long (*)(char const*)) + 32
1 Realm 0x103e4eb78 realm::util::do_encryption_read_barrier(void const*, unsigned long, unsigned long (*)(char const*), realm::util::EncryptedFileMapping*) + 8932
2 Realm 0x1040f0490 long long realm::ConstObj::get<long long>(realm::ColKey::Idx) const + 236
3 App 0x102ed7e80 syncError #1 (…done:) in CloudManager.handleError(…) + 74 (CloudManager.swift:74)
4 App 0x102ed919c doHandleError #1 () in CloudManager.handleError(…) + 150 (CloudManager.swift:150)
5 App 0x102fce850 thunk for @callee_guaranteed () -> (@error @owned Error) + 4301809744 (<compiler-generated>:4301809744)
6 App 0x102ee4820 partial apply for thunk for @callee_guaranteed () -> (@error @owned Error) + 4300851232 (<compiler-generated>:4300851232)
7 App 0x102ee6514 partial apply for thunk for @callee_guaranteed () -> (@error @owned Error) + 4300858644
8 libswiftObjectiveC.dylib 0x1f35f7bbc autoreleasepool(invoking:) + 56
9 App 0x102ee4f3c partial apply for closure #1 in CloudManager.handleError(…) + 4300853052 (<compiler-generated>:4300853052)
10 App 0x1031573ec thunk for @escaping @callee_guaranteed () -> () + 4303418348 (<compiler-generated>:4303418348)
11 libdispatch.dylib 0x1bcbc69a8 _dispatch_call_block_and_release + 24
12 libdispatch.dylib 0x1bcbc7524 _dispatch_client_callout + 16
13 libdispatch.dylib 0x1bcb6fc60 _dispatch_queue_override_invoke + 952
14 libdispatch.dylib 0x1bcb7c438 _dispatch_root_queue_drain + 376
15 libdispatch.dylib 0x1bcb7cbf8 _dispatch_worker_thread2 + 124
16 libsystem_pthread.dylib 0x1bcc18b38 _pthread_wqthread + 212
17 libsystem_pthread.dylib 0x1bcc1b740 start_wqthread + 8
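Frames 0–1 show the fault happening inside Realm's encrypted-file read barrier, and frame 2 (`ConstObj::get<long long>`) is the read of a persisted integer property. This implies the app opens its Realm with encryption enabled; the report does not show the configuration, but a typical encrypted-Realm setup would look roughly like this (illustrative sketch only, not taken from the report):

```swift
import Foundation
import Security
import RealmSwift

// Assumption: the app uses an encrypted Realm, which is why the crash
// surfaces inside EncryptedFileMapping::read_barrier. Encryption is
// enabled by passing a 64-byte key to the configuration:
var key = Data(count: 64)
_ = key.withUnsafeMutableBytes { bytes in
    SecRandomCopyBytes(kSecRandomDefault, 64, bytes.baseAddress!)
}
let config = Realm.Configuration(encryptionKey: key)
let realm = try! Realm(configuration: config)
```

With encryption on, every property read goes through the read barrier, so a dangling accessor that would otherwise read stale memory tends to crash here instead.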
Steps to Reproduce
The crash occurs randomly, and only in 5.0.x versions.
Code Sample
func syncError(c: CloudError, o: Object?, done: (() -> Void)?) {
    var name: String = ""
    var pred: NSPredicate?
    var params: [String: Any]?
    var update: Date?
    var check: Bool = true
    if c.dataType == 0 { // ===> CRASH!!!
        name = "Entry"
        if let entry = o as? Entry {
            pred = NSPredicate(format: "sid == %@", entry.sid)
            params = entry.toCloudParams()
            update = entry.update
        }
    } else if c.dataType == 1 {
        name = "Day"
        if let day = o as? Day {
            pred = NSPredicate(format: "sid == %@", day.sid)
            params = day.toCloudParams()
            update = day.update
        }
    } else if c.dataType == 2 {
        name = "ColorLabel"
        if let colorLabel = o as? ColorLabel {
            pred = NSPredicate(format: "sid == %@", colorLabel.sid)
            params = colorLabel.toCloudParams()
            update = colorLabel.update
        }
        check = false
    }
    let dataKey = c.dataKey
    if let _pred = pred {
        CloudQuery.fetch(self.cloud.privateDB, recordType: name, predicate: _pred, forCheck: check, done: { (ret, err) in
            if err != nil {
                completeError(err, dataKey: dataKey)
                done?()
            } else {
                if let _ret = ret as? [CKRecord], !_ret.isEmpty, let first = _ret.first {
                    if let left = update, let right = first["update"] as? Date, left > right {
                        CloudQuery.update(self.cloud.privateDB, record: first, keyedValues: params, done: { (ret, err) in
                            completeError(err, dataKey: dataKey)
                            done?()
                        })
                    } else {
                        completeError(nil, dataKey: dataKey)
                        done?()
                    }
                } else {
                    CloudQuery.insert(self.cloud.privateDB, recordType: name, keyedValues: params, done: { (ret, err) in
                        completeError(err, dataKey: dataKey)
                        done?()
                    })
                }
            }
        })
    } else {
        completeError(nil, dataKey: dataKey)
        done?()
    }
}

func doHandleError() {
    var deletes = Set<String>()
    var todos = [(CloudError, Object?)]()
    if let ces = Fetch.cloudError([:]), !ces.isEmpty {
        for c in ces {
            if c.syncOp == 100 || c.tryCount >= self.maxCloudTryCount {
                deletes.insert(c.dataKey)
            } else {
                var o: Object?
                if c.dataType == 0 { // entry
                    o = Fetch.entry(["sid": c.dataKey], all: true)?.first
                } else if c.dataType == 1 { // day
                    o = Fetch.day(["sid": c.dataKey], all: true)?.first
                }
                todos.append((c, o))
            }
        }
    }
    for t in todos {
        let group = DispatchGroup()
        group.enter()
        syncError(c: t.0, o: t.1) {
            group.leave()
        }
        group.wait()
    }
    for d in deletes {
        if let c = Fetch.cloudError(["dataKey": d])?.first {
            Delete.cloudError(c)
        }
    }
}

let group = DispatchGroup()
let queue = DispatchQueue.global()
queue.async(group: group) {
    autoreleasepool {
        doHandleError()
    }
}
group.notify(queue: queue) {
    done?()
}
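One observation on the sample above (an assumption on my part, not confirmed in the report): `doHandleError()` runs on a global concurrent queue, but the `CloudError` and `Object` instances it fetches are live Realm objects, and `syncError` then reads `c.dataType` inside CloudKit completion handlers that may run on yet another thread. Live Realm objects are thread-confined; a sketch of avoiding the cross-thread access by passing only plain primary keys between threads, reusing the `Fetch` helper names from the sample:

```swift
// Sketch only: Fetch and CloudError are the helpers from the sample above.
// Live Realm objects must not cross threads, but plain Strings can.
DispatchQueue.global().async {
    autoreleasepool {
        // Fetch on this thread, then keep only value-type keys.
        let keys: [String] = Fetch.cloudError([:])?.map { $0.dataKey } ?? []
        for key in keys {
            // Re-fetch on the same thread before every use, so the
            // object accessor is always backed by this thread's Realm.
            guard let c = Fetch.cloudError(["dataKey": key])?.first else { continue }
            _ = c.dataType  // safe: c is confined to this thread
        }
    }
}
```

Alternatively, a `ThreadSafeReference` (or a frozen object in Realm 5.x) can carry an object across the queue boundary; reading a property of an object fetched on another thread is exactly the pattern that can produce `EXC_BAD_ACCESS` in the read barrier.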
Version of Realm and Tooling
Realm framework version: 5.0.2
Realm Object Server version: ?
Xcode version: 11.5
iOS/OSX version: 13.4
Dependency manager + version: ?
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 3
- Comments: 18 (2 by maintainers)
It seems to match this issue in Core: https://github.com/realm/realm-core/issues/3766
Looks like the same issue here: https://github.com/realm/realm-cocoa/issues/6571
In our project we had this crash appear randomly on app launch.
When I had a look at our helpers, I saw that we were not using notifications as we should: we obtained `Results` on a background thread, then passed them wrapped in a `ThreadSafeReference` to the main thread and subscribed there, so that we would receive notifications on the main thread, even though the notification blocks themselves already deliver `Results`. Moreover, as I started refactoring, I noticed the new queue-confined notifications. I looked up our subscription code and saw that the original `Realm` from the `Results` was saved there and also passed to the queue along with the reference to the `Results`.
So I thought: maybe that caused the crash. I did not pass the original `Realm` to the main thread; maybe after some time it ended up in the autorelease pool and the `Results` I was passing got corrupted somehow, so when I dereferenced them they were no longer backed by anything and I got the crash accessing a wrong address. It was only a theory, so I decided to refactor the helpers to use the results delivered along with the changes. The refactoring helped with this crash, but introduced a new one: https://github.com/realm/realm-cocoa/issues/6559.
Then, after some digging and more refactoring, I think I got rid of that one too (see the comments in that issue).
Maybe this info will help with fixing these issues.
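The refactoring described above corresponds to Realm's queue-confined notifications, available since realm-cocoa 5.0: subscribing with `observe(on:)` delivers the collection on a serial background queue, so no `ThreadSafeReference` hand-off is needed. A minimal sketch, with `MyObject` and the queue label as illustrative placeholders:

```swift
import RealmSwift

// Sketch: the queue must be serial; the Results delivered to the
// block are valid on that queue, so they can be read there directly.
let workQueue = DispatchQueue(label: "realm.work")
var token: NotificationToken?

workQueue.async {
    let realm = try! Realm()
    let results = realm.objects(MyObject.self)
    // observe(on:) confines notification delivery to workQueue.
    token = results.observe(on: workQueue) { change in
        switch change {
        case .initial(let objects):
            print("initial count:", objects.count)
        case .update(let objects, _, _, _):
            print("updated count:", objects.count)
        case .error(let error):
            print("notification failed:", error)
        }
    }
}
```

Keeping the `Realm`, the `Results`, and the token all confined to one serial queue avoids the dangling-accessor situation the theory above describes.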