In most production scenarios, the source and target tables are very large (say, tens of millions of rows), while only a small number of rows actually differ. However, PgCompare first has to copy the data from both the source and the target into PG (splitting the original columns into PK and non-PK columns) in order to find that small set of differences. Is PgCompare suitable for such cases, and what is the best practice here? Would it be reasonable for PgCompare to switch to an in-memory comparison in the future?
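For context, a common way to avoid shipping full rows is to hash the non-PK columns per primary key and compare only the hashes, so only differing keys need further inspection. The sketch below is not PgCompare's actual implementation; it is a minimal illustration of the hash-based diff idea, using in-memory SQLite tables as stand-ins for the source and target databases (all table and column names are hypothetical):

```python
import hashlib
import sqlite3

def row_hashes(conn, table, pk_col, cols):
    """Map each primary key to an MD5 hash of its non-PK columns."""
    cur = conn.execute(f"SELECT {pk_col}, {', '.join(cols)} FROM {table}")
    return {
        row[0]: hashlib.md5("|".join(map(str, row[1:])).encode()).hexdigest()
        for row in cur
    }

def diff_tables(src, tgt, table, pk_col, cols):
    """Return PKs whose rows differ, or exist on only one side."""
    s = row_hashes(src, table, pk_col, cols)
    t = row_hashes(tgt, table, pk_col, cols)
    return sorted(k for k in s.keys() | t.keys() if s.get(k) != t.get(k))

# Demo: two in-memory tables standing in for source and target.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for conn in (src, tgt):
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")
src.executemany("INSERT INTO items VALUES (?, ?, ?)",
                [(1, "a", 10), (2, "b", 20), (3, "c", 30)])
tgt.executemany("INSERT INTO items VALUES (?, ?, ?)",
                [(1, "a", 10), (2, "b", 99), (4, "d", 40)])

print(diff_tables(src, tgt, "items", "id", ["name", "qty"]))  # → [2, 3, 4]
```

In a real setup, the hashing would ideally happen inside each database (e.g. with an in-database hash function) so that only PK-plus-hash pairs, not full rows, travel over the network.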