The most recent content from our members.
My company just went through an acquisition/carve-out, and we need to import data from the current Amplitude instance into a fresh one. The challenge I'm running into is the sheer volume of data: the JSON exports are far larger than the 20 MB upload limit, and each month is hundreds of MB. We need to backfill a…
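One common workaround is to split the export into pieces that fit under the limit. Below is a minimal sketch assuming the export is newline-delimited JSON (one event per line, Amplitude's export format) and a hypothetical 19 MB threshold to leave headroom under the 20 MB cap; the file and directory names are placeholders.

```python
import os

CHUNK_LIMIT = 19 * 1024 * 1024  # stay safely under the 20 MB upload cap

def split_ndjson(path: str, out_dir: str) -> None:
    """Split a newline-delimited JSON file into chunks under CHUNK_LIMIT bytes."""
    os.makedirs(out_dir, exist_ok=True)
    part, size, out = 0, 0, None
    with open(path, "rb") as src:
        for line in src:
            # Start a new chunk file when the current one would overflow.
            if out is None or size + len(line) > CHUNK_LIMIT:
                if out:
                    out.close()
                part += 1
                size = 0
                out = open(os.path.join(out_dir, f"part_{part:04d}.json"), "wb")
            out.write(line)
            size += len(line)
    if out:
        out.close()

split_ndjson("export_2023_08.json", "chunks")
```

Splitting on line boundaries keeps every event record intact, so each chunk remains valid newline-delimited JSON on its own.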
Hi all, I set up a BigQuery connection to ingest custom event data into our Amplitude project. I can see the ingestion running correctly, and the event counts on the BQ Connection show the expected number of events (19k for 8/29/2023). However, when building a segmentation chart for the same event, the count for event…
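When chasing a discrepancy like this, it can help to pin down the raw count on the BigQuery side first. A minimal verification sketch, assuming a hypothetical table `my_project.analytics.custom_events` with a `TIMESTAMP` column `event_time` and a `STRING` column `event_type`; swap in your actual schema:

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT COUNT(*) AS n
FROM `my_project.analytics.custom_events`
WHERE DATE(event_time) = '2023-08-29'
  AND event_type = 'my_custom_event'
"""

# Run the query and read back the single count row.
row = next(iter(client.query(sql).result()))
print(f"Events in BigQuery for 2023-08-29: {row.n}")
```

If this number matches the BQ Connection's ingested count but not the chart, the gap is likely on the Amplitude side (e.g., deduplication or filtering) rather than in the source data.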
Hi, is there a way to seed my Amplitude account using only CSV data? Right now the setup wizard is trying to make me connect an API source, but I just want to import some event data from a CSV file and then analyze it with Amplitude. I'm not even able to access the dashboard at all because the setup wizard wants an API…
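One route is to convert the CSV rows to events yourself and send them to Amplitude's Batch Event Upload endpoint. A hedged sketch, assuming a CSV with columns `user_id`, `event_type`, and `timestamp` (epoch milliseconds); the column names and file name are placeholders:

```python
import csv
import requests

API_KEY = "YOUR_API_KEY"  # Amplitude project API key
BATCH_URL = "https://api2.amplitude.com/batch"

def send_csv(path: str, batch_size: int = 1000) -> None:
    """Read events from a CSV file and upload them in batches."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for i in range(0, len(rows), batch_size):
        events = [
            {
                "user_id": r["user_id"],
                "event_type": r["event_type"],
                "time": int(r["timestamp"]),
            }
            for r in rows[i : i + batch_size]
        ]
        resp = requests.post(BATCH_URL, json={"api_key": API_KEY, "events": events})
        resp.raise_for_status()  # surface any rejected batch immediately

send_csv("events.csv")
```

Batching keeps each request well under upload limits, and sending data this way counts as an API source, which may also satisfy the setup wizard.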
I have added all the BigQuery tables as sources in Amplitude. I can see events being ingested from BigQuery under the ingested events, but I can't see them in the `Events` section. There is no ingestion error, and even where there is one, 7 out of 10 events are still ingested, yet no events are found on the…
I am new to Amplitude. I have configured a GCS source for event data import. It imported the first time I configured it, but when I later push additional files with the same extension and format to the GCS bucket, it does not read them.
I imported a cohort from a CSV file with x k rows of user IDs (email addresses). Amplitude is now showing significantly fewer users (cut by a couple thousand) in "population over time". Why doesn't this match the original row count in the CSV file? Where is the exclusion of user IDs happening? Thanks.
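A gap like this often comes from duplicate rows or IDs that don't match any known user. A quick diagnostic sketch, assuming the CSV has a header row and one email-style user ID per row in its first column (the file name is a placeholder), that reports how many IDs survive trimming, lowercasing, and de-duplication:

```python
import csv

def count_unique_ids(path: str) -> None:
    """Compare raw row count against normalized, de-duplicated ID count."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip header row
        raw = [row[0] for row in reader if row]
    # Normalize the way a case-insensitive ID match would.
    normalized = {r.strip().lower() for r in raw if r.strip()}
    print(f"rows: {len(raw)}, unique normalized IDs: {len(normalized)}")

count_unique_ids("cohort.csv")
```

If the unique count is close to what Amplitude shows, duplicates explain the drop; any remaining gap points to IDs that never matched an existing user in the project.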