Gentics Mesh Version, operating system, or hardware.
v0.10.1
Problem
During the node migration I get a lot of "java.lang.OutOfMemoryError: GC overhead limit exceeded" exceptions in my logs. The migration that led to these errors was for a microschema used in ~27,000 nodes.
12:49:05.589 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-10] [JobImpl.java:233] - Processing job {251075d328df49039075d328df590336}
12:51:20.685 [Nostalgic Ferrothorn] ERROR [vert.x-worker-thread-2] [OrientDBDatabase.java:769] - Error handling transaction
com.gentics.mesh.core.rest.error.NotModifiedException: null
12:52:00.881 [Nostalgic Ferrothorn] ERROR [vert.x-worker-thread-5] [OrientDBDatabase.java:769] - Error handling transaction
com.gentics.mesh.core.rest.error.NotModifiedException: null
12:55:12.994 [Nostalgic Ferrothorn] ERROR [vert.x-worker-thread-6] [OrientDBDatabase.java:769] - Error handling transaction
com.gentics.mesh.core.rest.error.NotModifiedException: null
13:09:55.453 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][0]: Lucene Merge Thread #94]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][0] failed to merge
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.util.packed.DirectMonotonicWriter.<init>(DirectMonotonicWriter.java:56) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.util.packed.DirectMonotonicWriter.getInstance(DirectMonotonicWriter.java:136) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addBinaryField(Lucene54DocValuesConsumer.java:448) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addTermsDict(Lucene54DocValuesConsumer.java:478) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addSortedField(Lucene54DocValuesConsumer.java:613) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addSortedSetField(Lucene54DocValuesConsumer.java:653) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedSetField(PerFieldDocValuesFormat.java:131) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedSetField(DocValuesConsumer.java:736) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:219) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:150) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4086) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3666) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626) ~[mesh-server-0.10.1.jar:na]
13:09:55.453 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][2]: Lucene Merge Thread #97]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][2] failed to merge
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.util.packed.PackedLongValues$Builder.<init>(PackedLongValues.java:192) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.util.packed.DeltaPackedLongValues$Builder.<init>(DeltaPackedLongValues.java:59) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.util.packed.MonotonicLongValues$Builder.<init>(MonotonicLongValues.java:62) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.util.packed.PackedLongValues.monotonicBuilder(PackedLongValues.java:68) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.util.packed.PackedLongValues.monotonicBuilder(PackedLongValues.java:73) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.MultiDocValues$OrdinalMap.<init>(MultiDocValues.java:520) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.MultiDocValues$OrdinalMap.build(MultiDocValues.java:490) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.DocValuesConsumer.mergeSortedSetField(DocValuesConsumer.java:733) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:219) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:150) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:105) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4086) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3666) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626) ~[mesh-server-0.10.1.jar:na]
13:10:40.908 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][3]: Lucene Merge Thread #96]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][3] failed to merge
java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot complete merge
at org.apache.lucene.index.IndexWriter.commitMerge(IndexWriter.java:3472) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4234) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3666) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626) ~[mesh-server-0.10.1.jar:na]
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.packed.DirectMonotonicWriter.<init>(DirectMonotonicWriter.java:56) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.util.packed.DirectMonotonicWriter.getInstance(DirectMonotonicWriter.java:136) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addBinaryField(Lucene54DocValuesConsumer.java:448) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addTermsDict(Lucene54DocValuesConsumer.java:478) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addSortedField(Lucene54DocValuesConsumer.java:613) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.addSortedSetField(Lucene54DocValuesConsumer.java:653) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedSetField(PerFieldDocValuesFormat.java:131) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.SortedSetDocValuesWriter.flush(SortedSetDocValuesWriter.java:164) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DefaultIndexingChain.writeDocValues(DefaultIndexingChain.java:163) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:99) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:422) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:503) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:615) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:424) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:286) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:261) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:251) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:137) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:154) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:669) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:665) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.admin.indices.refresh.TransportShardRefreshAction.shardOperationOnPrimary(TransportShardRefreshAction.java:65) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378) ~[mesh-server-0.10.1.jar:na]
13:13:29.933 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][4]: Lucene Merge Thread #98]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][4] failed to merge
java.lang.OutOfMemoryError: GC overhead limit exceeded
13:13:31.068 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][0]: Lucene Merge Thread #94]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][0] failed to merge
java.lang.OutOfMemoryError: GC overhead limit exceeded
13:13:33.336 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][3]: Lucene Merge Thread #95]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-published][3] failed to merge
java.lang.OutOfMemoryError: GC overhead limit exceeded
13:16:53.723 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][2]: Lucene Merge Thread #98]] [Log4jESLogger.java:145] - [Nostalgic Ferrothorn] [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][2] failed to merge
java.lang.OutOfMemoryError: GC overhead limit exceeded
Afterwards the log is spammed with hundreds of entries like this:
13:20:48.397 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][listener][T#2]] [ElasticSearchProvider.java:403] - Adding object {9fba280551114c0bba28055111ec0bbe-de:node} to index failed. Duration 737907[ms]
org.elasticsearch.index.engine.IndexFailedEngineException: Index failed for [node#9fba280551114c0bba28055111ec0bbe-de]
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:459) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:605) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.Engine$Index.execute(Engine.java:836) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:66) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [mesh-server-0.10.1.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.DocumentsWriter.ensureOpen(DocumentsWriter.java:197) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:407) ~[mesh-server-0.10.1.jar:na]
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1318) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:536) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:454) ~[mesh-server-0.10.1.jar:na]
... 15 common frames omitted
13:20:48.398 [Nostalgic Ferrothorn] ERROR [elasticsearch[Nostalgic Ferrothorn][listener][T#2]] [ElasticSearchProvider.java:403] - Adding object {43452796f565431f852796f565031feb-de:node} to index failed. Duration 715272[ms]
org.elasticsearch.action.UnavailableShardsException: [node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][1] primary shard is not active Timeout: [1m], request: [index {[node-bebcd7977fdd4c6cbcd7977fdd6c6c96-d3c924e5384e41968924e5384e3196d7-ebd6ad37a6554c5a96ad37a6556c5ab7-draft][node][43452796f565431f852796f565031feb-de], source[{"uuid":"43452796f565431f852796f565031feb","editor":{"uuid":"88fd5cfe0b9741b7bd5cfe0b9781b7fa"},"edited":"2017-09-12T10:50:38Z","creator":{"uuid":"88fd5cfe0b9741b7bd5cfe0b9781b7fa"},"created":"2017-09-05T09:22:42Z","project":{"name":"kofl","uuid":"bebcd7977fdd4c6cbcd7977fdd6c6c96"},"tags":{"name":[],"uuid":[]},"tagFamilies":{},"parentNode":{"uuid":"e0f22c8ef5004a94b22c8ef500ba9414"},"language":"de","schema":{"name":"address","uuid":"2f568d4ad37f4a89968d4ad37fca89e2","version":"1.0"},"fields":{"regionName":"Dornbach","localityPart":"Wien,Hernals","parkingDuration":"3 h","houseNumberTo":{"microschema":{"name":"housenumber","uuid":"dea3da3248424a83a3da3248426a8383"},"fields-housenumber":{"number":0}},"parkingFrom":"Mo.-Fr. (werkt.) v. 9-19 Uhr","municipalityName":"Wien","wgs84_y":48.234,"cadastralCommunityNumber":"1401","wgs84_x":16.303,"streetNameSuffix2":{"microschema":{"name":"address_suffix","uuid":"1d05753dd0914f6485753dd0916f64e2"},"fields-address_suffix":{"number":0}},"houseNumberFrom":{"microschema":{"name":"housenumber","uuid":"dea3da3248424a83a3da3248426a8383"},"fields-housenumber":{"number":219,"letter":"b"}},"streetNameSuffix1":{"microschema":{"name":"address_suffix","uuid":"1d05753dd0914f6485753dd0916f64e2"},"fields-address_suffix":{"number":0}},"streetName":"Czartoryskigasse","epsg1994_y":430261,"pacs":[101390536],"epsg1994_x":620527,"postcodeNumber":1170,"cadastralCommunityName":"Dornbach","postcodeName":"Wien","localityId":17239},"displayField":{"key":"streetName","value":"Czartoryskigasse"}}]}]
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:614) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:474) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:576) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:236) ~[mesh-server-0.10.1.jar:na]
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:816) ~[mesh-server-0.10.1.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
After stopping Mesh and restarting it with -Xmx2048m instead of -Xmx512m, the migration completed in a few seconds. (Before, it had run for 20 minutes before the first OutOfMemoryError appeared.)
13:33:30.437 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [JobImpl.java:233] - Processing job {251075d328df49039075d328df590336}
13:33:41.415 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [MigrationStatusHandlerImpl.java:204] - Migration completed without errors.
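For reference, the restart above only changed the JVM heap flag. Assuming Mesh is launched directly with `java -jar` (the jar name is taken from the `mesh-server-0.10.1.jar` markers in the stack traces; the actual launch command depends on the deployment), the invocation would look roughly like:

```shell
# Raise the maximum heap from 512 MB to 2 GB before retrying the migration.
# Jar name assumed from the stack traces; adjust to the actual deployment.
java -Xmx2048m -jar mesh-server-0.10.1.jar
```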
During the microschema migration I also don't see any progress in the logs, so you don't really know whether Mesh is still doing something.
During the schema migration I do see the batches; it would be nice to have this for the microschema migration too:
13:33:46.613 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [JobImpl.java:233] - Processing job {73cffb528ed24d3c8ffb528ed25d3c32}
13:33:46.613 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [JobImpl.java:276] - Handling node migration request for schema {2f568d4ad37f4a89968d4ad37fca89e2} from version {ebd6ad37a6554c5a96ad37a6556c5ab7} to version {bf14bc22044d414a94bc22044d614a05} for release {d3c924e5384e41968924e5384e3196d7} in project {bebcd7977fdd4c6cbcd7977fdd6c6c96}
13:33:47.927 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [NodeMigrationHandler.java:116] - Migrated containers: 0
13:33:50.599 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [NodeMigrationHandler.java:116] - Migrated containers: 50
13:33:53.374 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [NodeMigrationHandler.java:116] - Migrated containers: 100
13:33:55.855 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [NodeMigrationHandler.java:116] - Migrated containers: 150
13:33:58.108 [Nostalgic Ferrothorn] INFO [vert.x-worker-thread-3] [NodeMigrationHandler.java:116] - Migrated containers: 200
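The batch lines above can be filtered out of the log output to follow a running migration. A minimal sketch, piping in a sample excerpt since the real log location depends on the setup; in practice something like `tail -f mesh.log | grep 'Migrated containers'` (file name hypothetical) would follow the progress live:

```shell
# Count the batch progress lines in a sample log excerpt.
printf '%s\n' \
  '13:33:50.599 INFO ... Migrated containers: 50' \
  '13:33:53.374 INFO ... Migrated containers: 100' \
  | grep -c 'Migrated containers'
# → 2
```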
A backup of the current data directory can be downloaded here: https://filebox.apa-it.at/index.php/s/uDtBBtNuxLdxqrX