134631fdee
* prepare to add $delete-expunge operation that will create a Spring Batch job
* Add operation
* Wire up JPA provider. Begin with failing test.
* Copy/paste bulk import job as a starting point. FIXME with proposed design
* delete expunge job parameter validation with test
* implemented reader, stubbed processor and writer
* wip for master merge
* started implementing reader
* started implementing reader
* working with stubs
* happy path batch delete expunge is done
* Provider done but test not passing. Guessing batch infrastructure not running in that test.
* IT test works now
* add reader test
* Converted delete _expunge=true to use new batch job
* DeleteExpungeDaoTest passes
* Fix test
* Change batch size to integer
* rename search count to batch size
* Make delete expunge partition aware
* updated docs
* pre-review cleanup
* change log
* add partition id to SystemRequestDetails
* Make RequestPartitionId serializable
* Change delete expunge provider to use partition id instead of tenant name
* fix tests
* test pointcut gets called
* assert on pointcut calls
* Add resource type to STORAGE_PARTITION_SELECTED pointcut
* bump hapi-fhir version; move expunge provider parameters from JpaConstants to ProviderConstants
* bump hapi-fhir version
* copyrights
* restore DeleteExpungeService for MDM
* restore DeleteExpungeService for MDM
* fix test
* public constants
* convert instant to date
* Moved expunge constants to ProviderConstants
* final review
* disabling InMemoryResourceMatcherR5Test.testNowNextMinute() to see if I can get a clean test run
* fix tests
* fix tests
* fix tests
* fix tests
* review feedback
* review feedback
* review feedback
* review feedback
* review feedback
* review feedback
* improve logging
* bump version
* version bump
* recovering from failed merge
* unzip RequestListJson per Gary's suggestion. I didn't want to do it at first, but as usual Gary was right.
* fix serialization
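For context, a minimal client-side sketch of invoking the `$delete-expunge` operation described in the commit messages above, using HAPI FHIR's generic client. The parameter names `url` and `batchSize`, the resource-type class name, and the server base URL are assumptions inferred from the commit log, not confirmed by this page.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.IntegerType;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.StringType;

public class DeleteExpungeClientSketch {
    public static void main(String[] args) {
        // Assumed server base URL, for illustration only.
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        // Parameter names "url" and "batchSize" are assumptions inferred from the
        // commit messages ("rename search count to batch size", ProviderConstants move).
        Parameters inParams = new Parameters();
        inParams.addParameter().setName("url").setValue(new StringType("Patient?active=false"));
        inParams.addParameter().setName("batchSize").setValue(new IntegerType(500));

        // Invoke the server-level $delete-expunge operation; per the commit log, the
        // server is expected to submit a Spring Batch job to do the actual work.
        Parameters outParams = client
            .operation()
            .onServer()
            .named("$delete-expunge")
            .withParameters(inParams)
            .execute();

        System.out.println(ctx.newJsonParser().setPrettyPrint(true)
            .encodeResourceToString(outParams));
    }
}
```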
Name |
---|
src |
.gitignore |
derby_maintenance.txt |
pom.xml |