#SwipeToSuccess | bitgrit

#SwipeToSuccess

Optimize Atrae’s profile-matching algorithm for their professional networking app, yenta.

Atrae, Inc.
1697 Participants
996 Submissions
Brief
yenta is a professional networking app launched by Japanese startup Atrae that uses artificial intelligence (AI) to optimize profile matching for its users. In hosting this competition, Atrae strives to improve the AI it uses on its platform to enable yenta users to make new, valuable connections and expand their networks.

The goal of this competition is to optimize yenta's matching algorithm by predicting the compatibility of two app users. This ensures that the app recommends the most relevant profiles to each user, and winning algorithms submitted for this competition may be used to optimize yenta's profile recommendation algorithm. With your support, we aim to help people around the world make beneficial connections that last.

ABOUT YENTA

The goal of this contest is to improve the matching algorithm on Atrae's professional networking app, yenta. Go to https://yenta.page.link/install_fbg to download and learn more about the yenta app. *The yenta app is only available in Japan and India.

yenta is a matching app for building professional connections. Once you create a profile on the app, its native AI algorithm searches other users' profiles to find people you might be interested in and would like to meet. You swipe right on profiles you're interested in, and if the other person is also interested and swipes right on your profile, you can message each other, meet up, and submit reviews for each other. We encourage you to download the app to expand your network, get insider tips on new job opportunities, and gain insights that could give you an edge in this competition!
Prizes
  • 1st Prize ($5,000)
  • 2nd Prize ($3,000)
  • 3rd Prize ($1,000)
  • 4th Prize ($600)
  • 5th Prize ($400)

Timeline
  • 24 Aug 2020 Competition Starts
  • 31 Oct 2020 Competition Ends & Private Leaderboard Released
  • 06 Nov 2020 Final Source Code Submission Deadline
  • 25 Nov 2020 Winners Announced (Subject to change based on submission results)
Data Breakdown
The goal of this competition is to predict the level of compatibility of two given users to improve the profile recommendation algorithm for yenta (link to download: https://yenta.page.link/install_fbg). For this purpose, we classify the level of compatibility between user A and user B into 4 categories:

  • No Match = 0: At least one of user A or user B swiped left on the other, so there is no possibility of a match.
  • Match = 1: Both user A and user B swiped right on each other and matched.
  • Matched and met but unfavorable review = 2: Both users swiped right on each other and matched, then met. After the meeting, user A gave user B a review of 1-3 out of 5 (an "unfavorable" review).
  • Matched and met and favorable review = 3: Both users swiped right on each other and matched, then met. After the meeting, user A gave user B a review of 4-5 out of 5 (a "favorable" review).

To build this model, we provide 2 different types of data subsets: user data and interaction data. Note that all of the data is anonymized through the use of alias IDs and multi-step vectorization models to ensure that user privacy is protected. IDs with low frequency are grouped into a category labelled "other" with an ID of 999999.

I. User data: These files are connected through the user_id column (e.g. 41245).

  • user_ages.csv
    - user_id: user ID
    - age: user age (in years)
  • user_educations.csv
    - user_id: user ID
    - school_id: school ID
    - degree_id: degree ID (only present for some users)
  • user_works.csv
    - user_id: user ID
    - company_id: company ID
    - industry_id: the company's industry ID (please note: one company can have multiple values; this column is user-selected, so values are not necessarily tied to company ID, which means the same company ID can have different values for different users)
    - over_1000_employees: whether the company has over 1,000 employees (please note: this column is user-selected, so values are not tied to company ID, which means the same company ID can have different values for different users)
  • user_skills.csv
    - user_id: user ID
    - skilld_id: skill ID
  • user_strengths.csv
    - user_id: user ID
    - strength_id_x: the number of votes that the user has received in reviews from other users
  • user_purposes.csv
    - user_id: user ID
    - purpose_id_x: whether the user marked "x" as a reason for using the app
  • user_self_intro_vectors_300dims.csv
    - user_id: user ID
    - num_char: number of characters in the user's self-introduction text
    - num_url: number of URLs in the user's self-introduction text
    - num_emoji: number of emojis in the user's self-introduction text
    - self_intro_x: value for dimension "x" of the user's vectorized self-introduction text (out of 300 dimensions)
  • user_sessions.csv
    - user_id: user ID
    - timestamp: session timestamp

II. Interaction data: These files are indexed by from-to user_id pairs (e.g. 12345-52462).

  • interaction_review_comments_300dims.csv
    - from-to: user ID of reviewer-user ID of reviewed
    - review_comment_x: value for dimension "x" of the vectorized review comment (out of 300 dimensions)
  • interaction_swipes.csv
    - from-to: user ID of swiper-user ID of target
    - timestamp: timestamp of the swipe event
    - swipe_status: result of the swipe (-1 = not interested, 1 = interested)
  • interaction_review_strengths.csv
    - from-to: user ID of reviewer-user ID of reviewed
    - strength_id: ID of the strength evaluated by the reviewer

III. Train and test files: To train the model, we provide a train.csv file with pairs of user IDs and their corresponding scores.

  • train.csv
    - from-to: user ID of scorer-user ID of target
    - score: compatibility score ID (0-3)
  • test.csv
    - from-to: user ID of scorer-user ID of target (to be predicted)

The solution file should follow this format: the from-to IDs must be the same IDs contained in the test.csv file, in the same order.

IMPORTANT NOTE: Score values in the submission file should be formatted with at least one decimal (e.g. 0.0 instead of 0, 1.0 instead of 1) or the system will not be able to score it properly.

  • submission.csv
    from-to, score
    6280229-6293525, 1.0
    670384-50085, 2.0
    2271906-4685859, 1.0
    ...

NOTE: The maximum number of submissions per day is 3 submission.csv files. A few minutes after submitting your solution, you will be able to see the accuracy of your solution on the submission page, evaluated over a subset of the test data. Final competition results will be based on the Private Leaderboard, and the winner will be the user at the top of the Private Leaderboard.
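Since the user files join on user_id while train.csv and test.csv are keyed by a combined from-to string, a typical first step is to split that key and merge user features onto both sides of each pair. A minimal pandas sketch, using the file layout described above (the function name and the choice of user_ages.csv as the example feature file are illustrative):

```python
import pandas as pd

def build_features(pairs_csv: str, ages_csv: str) -> pd.DataFrame:
    """Split the 'from-to' key and attach age features for both users of each pair."""
    pairs = pd.read_csv(pairs_csv)
    # 'from-to' holds two user IDs joined by a hyphen, e.g. "6280229-6293525"
    ids = pairs["from-to"].str.split("-", expand=True).astype(int)
    pairs["from_id"], pairs["to_id"] = ids[0], ids[1]

    ages = pd.read_csv(ages_csv)  # columns: user_id, age
    # Merge once per side of the pair, prefixing columns to keep them apart
    pairs = pairs.merge(ages.add_prefix("from_"), left_on="from_id",
                        right_on="from_user_id", how="left")
    pairs = pairs.merge(ages.add_prefix("to_"), left_on="to_id",
                        right_on="to_user_id", how="left")
    return pairs.drop(columns=["from_user_id", "to_user_id"])
```

The same split-then-merge pattern extends to the other user files; left joins keep every pair even when a user is missing from a feature file (e.g. IDs grouped under 999999).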
FAQs
Who do I contact if I need help regarding a competition?
If you have any inquiries, please contact us at [email protected]
How will I know if I’ve won?
We will email the top five winners of the competition to inform them of their ranking and begin the process of delivering their prizes.
How can I report a bug?
If you discover a bug, please send an email to [email protected] with a description and details about the bug. If possible, please include a screenshot of the bug as an attachment to the email.
If I win, how can I receive my reward?
Prizes can be delivered via PayPal, wire transfer, or other suitable methods. We understand that everyone prefers different payment methods, and we endeavor to accommodate your needs as best as possible depending on your location and our ability to do so.
Why is my score 0.00037?
The predictions in your submission.csv file should be formatted with at least one decimal (e.g. 0.0 instead of 0, 1.0 instead of 1), as stated in the Guidelines; otherwise the accuracy will not be displayed correctly. We apologize for this inconvenience, and we appreciate your understanding as we work on supporting competition solutions in different formats.
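One way to avoid this pitfall is to format the score column explicitly when writing the file. A small pandas sketch (the function name is illustrative; column names follow the submission example in the Data Breakdown):

```python
import pandas as pd

def write_submission(pairs, scores, path="submission.csv"):
    """Write predictions with at least one decimal place (e.g. 1.0, never 1)."""
    sub = pd.DataFrame({"from-to": pairs, "score": scores})
    sub["score"] = sub["score"].astype(float)
    # float_format guarantees whole numbers are emitted as 0.0, 1.0, ... not 0, 1
    sub.to_csv(path, index=False, float_format="%.1f")
```

If your model outputs fractional scores, increase the precision in float_format (e.g. "%.4f") so that rounding to one decimal does not discard information.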
Rules
1. This competition is governed by the following Terms of Participation. Participants must agree to and comply with these Terms to participate.
2. The maximum limit of submissions per day is 3. If you want to submit new files after reaching this limit, you will have to wait until the following day. Please keep this in mind when uploading a submission.csv file.
3. A competition prize will be awarded after we have received, successfully executed, and confirmed the validity of both the code and the solution. Once winners are announced and our team reaches out to them, the winners must provide the following by November 6, 2020 in order to avoid disqualification:
   a. All source files required to preprocess the data
   b. All source files required to build, train, and make predictions with the model using the processed data
   c. A requirements.txt (or equivalent) file indicating all required libraries and their versions
   d. A ReadMe file containing the following:
      • Clear and unambiguous instructions on how to reproduce the predictions from start to finish, including data pre-processing, feature extraction, model training, and prediction generation
      • Details of the environment where the model was developed and trained, including OS, memory (RAM), disk space, CPU/GPU used, and any environment configuration required to execute the code
      • Clear answers to the following questions:
        - Which data files are used?
        - How are these files processed?
        - What algorithm is used, and what are its main hyperparameters?
        - Any other comments relevant to understanding and using the model
   If these items are not provided or do not meet the minimum requirements listed above, we will not be able to award the winner their respective prize.
4. If two or more participants have the same score on the leaderboard, the participant who submitted the winning file first will be considered the winner.
5. The dataset used for this competition is derived from real-world data that has been anonymized, so please do not use any models developed with this data on similar matching services.
6. If you have any inquiries about this competition, please don't hesitate to reach out to us at [email protected]. We ask that users do not contact Atrae directly.
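For item 3c, a requirements file that pins exact versions makes the winning environment reproducible. A hypothetical example (the package choices and version numbers are illustrative, not prescribed by the competition):

```
# requirements.txt — pin every library to the exact version used
pandas==1.1.3
numpy==1.19.2
scikit-learn==0.23.2
lightgbm==3.0.0
```

Generating it with `pip freeze > requirements.txt` from the environment where the model was trained is one straightforward way to satisfy this rule.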