PgBulkInsert: JpaMapping quoting exception
I can't seem to get the JPA mapping working.
I have an entity like this:
```kotlin
import java.sql.Timestamp
import java.time.Instant
import java.util.UUID
import javax.persistence.Column
import javax.persistence.Entity
import javax.persistence.EnumType
import javax.persistence.Enumerated
import javax.persistence.Id
import org.hibernate.annotations.CreationTimestamp

@Entity
data class Ticket(
    @Id
    val ticketId: String = UUID.randomUUID().toString(),
    val jobId: String = UUID.randomUUID().toString(),
    val entityId: String? = "",
    val stepNo: Long = 0,
    val opType: String,
    @Enumerated(EnumType.STRING)
    val status: TicketStatus = TicketStatus.NEW,
    val createdBy: String? = "",
    val resetBy: String? = "",
    val scheduledStartTime: Timestamp = Timestamp.from(Instant.now()),
    val scheduledEndTime: Timestamp? = null,
    val startTime: Timestamp? = null,
    val endTime: Timestamp? = null,
    val retryable: Boolean = true,
    val retryCount: Int = 0,
    val cleanupStep: Int = 0,
    val assignedTo: String? = "",
    val externalRefId: String? = "",
    @field:Column(length = 1000)
    val ticketDetail: String? = "",
    @CreationTimestamp
    val createdTime: Timestamp = Timestamp.from(Instant.now()),
    val updatedBy: String? = "",
    val dryRun: Boolean = false,
    val cancelRequested: Boolean = false,
    val abortCode: String? = null,
    val errorCode: String? = null
)
```
and when I do:
```kotlin
return PostgreSqlUtils.tryGetPGConnection(jdbcTemplate.dataSource?.connection).map {
    logger.info("Using postgres copy for insertion")
    val mapping = JpaMapping(Ticket::class.java)
    val bulkInsert = PgBulkInsert(mapping)
    bulkInsert.saveAll(it, tickets)
    true
}.orElse(false)
```
I get the following exception:
```
String index out of range: 0
java.lang.StringIndexOutOfBoundsException: String index out of range: 0
    at java.lang.String.charAt(String.java:658)
    at de.bytefish.pgbulkinsert.util.PostgreSqlUtils.requiresQuoting(PostgreSqlUtils.java:69)
    at de.bytefish.pgbulkinsert.util.PostgreSqlUtils.quoteIdentifier(PostgreSqlUtils.java:50)
    at de.bytefish.pgbulkinsert.mapping.AbstractMapping.lambda$getCopyCommand$9(AbstractMapping.java:363)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
    at de.bytefish.pgbulkinsert.mapping.AbstractMapping.getCopyCommand(AbstractMapping.java:364)
    at de.bytefish.pgbulkinsert.PgBulkInsert.saveAll(PgBulkInsert.java:39)
    at de.bytefish.pgbulkinsert.PgBulkInsert.saveAll(PgBulkInsert.java:46)
```
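From the stack trace, requiresQuoting inspects the first character of an identifier, so JpaMapping has apparently derived an empty table or column name somewhere. A minimal sketch of what I believe is the failing call, under the assumption that an empty identifier is being passed through:

```kotlin
import de.bytefish.pgbulkinsert.util.PostgreSqlUtils

fun main() {
    // quoteIdentifier decides whether an identifier needs quoting by looking
    // at its first character; an empty identifier fails with the same
    // StringIndexOutOfBoundsException as in the trace above.
    println(PostgreSqlUtils.quoteIdentifier(""))
}
```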
About this issue
- State: closed
- Created 4 years ago
- Comments: 17 (10 by maintainers)
Commits related to this issue
- Issue #91 Start implementation by adding a ValueConverter — committed to PgBulkInsert/PgBulkInsert by bytefish 4 years ago
- Issue #91 Add Example Unit Test Remove ToBoolean / ToInt / ToFloat. Add Example Unit Test for Abstract Mapping. — committed to PgBulkInsert/PgBulkInsert by bytefish 4 years ago
- Issue #91 Increase to 7.0.0 Removes the JpaMapping. Adds a FAQ Section in README regarding java.sql.Timestamp conversion. Bumps version to 7.0.0 and updates the README accordingly. — committed to PgBulkInsert/PgBulkInsert by bytefish 4 years ago
Yes, and there is one of the problems already. I really regret adding a JpaMapping. The probability is way too high that the default assumptions (int in Java -> int4 in Postgres) fail to hold. This means you would have to pollute your mappings with a million explicit PgDataType annotations, leaving users like you wondering why use it at all then. Just the same with the DateTime: what should it be mapped to by default, TimestampTz or Timestamp?
You are totally right that most of this could be solved by reading the actual database schema using Hibernate, but I don't think I can implement it; there are also limits to my skills. This makes me really think: I'll drop the JpaMapping<> and make a simplified version as a recipe in the README, just because I cannot get it easily working as a drop-in.
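For reference, such a recipe might look roughly like the following. This is a hedged sketch, not the maintainer's actual README recipe: the public.ticket schema/table name and the snake_cased column names are assumptions that must match the real DDL, and only a few of the Ticket properties are shown.

```kotlin
import de.bytefish.pgbulkinsert.mapping.AbstractMapping

// Explicit column-by-column mapping for Ticket, assuming a public.ticket
// table; the remaining columns would follow the same pattern.
class TicketMapping : AbstractMapping<Ticket>("public", "ticket") {
    init {
        mapText("ticket_id") { it.ticketId }
        mapText("job_id") { it.jobId }
        mapLong("step_no") { it.stepNo }
        mapText("op_type") { it.opType }
        mapText("status") { it.status.name }        // enum persisted as text
        mapBoolean("retryable") { it.retryable }
        mapInteger("retry_count") { it.retryCount } // must match the column's Postgres integer type
        // The timestamp mapping works with java.time values, so the
        // java.sql.Timestamp properties are converted explicitly.
        mapTimeStamp("scheduled_start_time") { it.scheduledStartTime.toLocalDateTime() }
    }
}
```

The snippet from the question would then construct PgBulkInsert(TicketMapping()) instead of PgBulkInsert(JpaMapping(Ticket::class.java)).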
Here is a dump of the schema from pg
As for the "Not Enough Data Left" error, it mostly has to do with a Postgres type mismatch; say you are using an Integer in Java and want to map it to something like an int2 in Postgres. I assume your retryCount isn't a full-blown int4 in Java.
That's why I added the @PostgresDataType(...) annotation, so you could map explicitly which data type the property maps to. So could you share the DDL for creating the SQL table?
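Purely as an illustration, annotating the property might look something like this. The import paths and the DataType constant below are assumptions on my part; check the library source for the real usage.

```kotlin
import de.bytefish.pgbulkinsert.jpa.annotations.PostgresDataType // assumed package
import de.bytefish.pgbulkinsert.pgsql.constants.DataType         // assumed package

data class RetryInfo(
    // Hypothetical: pin the property to int2 so the COPY stream matches a
    // smallint column instead of the default int -> int4 assumption.
    @field:PostgresDataType(DataType.Int2)
    val retryCount: Int = 0
)
```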