yonilev committed on
Commit b0aa46a · verified · 1 Parent(s): 2f6db25

Update README.md

Files changed (1): README.md +33 -0
The full analysis notebook is available here:

👉 [Open The Notebook in Google Colab](https://colab.research.google.com/drive/1YxqN2Urjli1ToxtYO9LtUztXxb5SeEcb?usp=sharing)

---

## ❓ Questions & Answers

**Q1: Is the app market democratic — can any app go viral?**
No. 86% of apps released in 2020 never exceeded 10K installs. The market is winner-takes-all: fewer than 1% of apps reached the Viral tier (1M+ installs), and their median install count is roughly 1,000x that of Medium-tier apps.

**Q2: Does a higher rating mean more installs?**
Surprisingly, no. Rating and Installs show a weak negative linear correlation (r = −0.31). Viral apps attract polarising reviews — millions of users bring more critics. Quality alone is not the driver; distribution and visibility are.
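A correlation like this is a one-liner in pandas. The sketch below uses toy data; the `Rating` and `Installs` column names are illustrative assumptions, not the project's exact schema.

```python
import pandas as pd

# Toy frame mimicking the pattern in the data: a few huge-install apps
# with middling ratings, many tiny apps with high ratings.
df = pd.DataFrame({
    "Rating":   [4.8, 4.5, 3.9, 4.1, 3.5, 4.9],
    "Installs": [500, 1_000, 250_000, 1_200_000, 5_000_000, 100],
})

# Pearson r; NaN ratings (unrated apps) would be dropped pairwise.
r = df["Rating"].corr(df["Installs"], method="pearson")
print(round(r, 2))
```

On real data the same call, restricted to rated apps, yields the reported r = −0.31.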

**Q3: Does monetization strategy matter?**
Yes — it's the strongest predictor in the dataset. Ad-based and Hybrid (ads + IAP) apps reach significantly higher install counts than Pure Free or Premium apps. The price barrier of Premium apps dramatically limits their reach.

**Q4: Does having a rating at all matter?**
Yes, dramatically. Virtually all Medium- and Viral-tier apps have a visible rating. This reflects a chicken-and-egg dynamic: installs drive ratings, ratings drive visibility, and visibility drives more installs.

**Q5: Does the Rating–Installs relationship hold across all categories?**
No. The global r = −0.31 masks very different dynamics per category. In Tools and Business there is almost no relationship, while in Entertainment and Games high-install apps cluster at specific rating bands.
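The per-category breakdown is a standard groupby. A minimal sketch on toy data, again assuming illustrative `Category`/`Rating`/`Installs` column names:

```python
import pandas as pd

df = pd.DataFrame({
    "Category": ["Tools", "Tools", "Tools", "Games", "Games", "Games"],
    "Rating":   [4.0, 4.2, 4.1, 3.6, 4.4, 4.8],
    "Installs": [1_000, 900, 1_100, 2_000_000, 50_000, 10_000],
})

# One correlation coefficient per category instead of a single global r.
per_cat = df.groupby("Category")[["Rating", "Installs"]].apply(
    lambda g: g["Rating"].corr(g["Installs"])
)
print(per_cat)
```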

---

## 🔧 Key Decisions

| Decision | What I did | Why |
|----------|-----------|-----|
| **Subset to 2020** | Filtered 544,882 apps released in 2020 | Consistent 6–18 month measurement window for all apps |
| **Stratified sampling** | Cochran's formula → 9,436 rows, sampled proportionally by Category | Preserves category distribution while reducing computational load |
| **Rating = 0 → NaN** | Replaced zero ratings with NaN | Zero means unrated, not bad — imputing would distort the analysis |
| **Rating not imputed** | Kept 54.9% of Rating as NaN, created `has_rating` instead | Missing rating is a meaningful signal, not random noise |
| **Size_MB winsorized** | Capped at 65.8 MB (IQR upper fence) | Extreme sizes are edge cases that distort correlations |
| **Installs kept as-is** | No capping of extreme install counts | Viral outliers ARE the story — removing them defeats the purpose |
| **log_installs** | Applied log(1 + Installs) transformation | Raw installs follow a power law — a log scale is needed for analysis |
| **monetization_model** | Combined Free + Ad Supported + In App Purchases into 5 categories | Three boolean columns carry more meaning as a single business-model feature |
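The proportional stratified sampling above can be sketched with pandas' grouped sampling. The toy frame, column names, and 10% fraction below are illustrative assumptions; the project's actual target was the 9,436 rows given by Cochran's formula.

```python
import pandas as pd

# Toy population: 60% Tools, 40% Games.
df = pd.DataFrame({
    "Category": ["Tools"] * 60 + ["Games"] * 40,
    "Installs": range(100),
})

# Sample the same fraction within each category stratum, so the
# category proportions of the population carry over to the sample.
sample = df.groupby("Category").sample(frac=0.1, random_state=42)
print(sample["Category"].value_counts())
```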
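The Rating, Size_MB, and log-transform decisions combine into one small cleaning pass. A sketch on toy values, with column names assumed from the table rather than the project's exact schema:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Rating":   [0.0, 4.5, 0.0, 3.8],
    "Size_MB":  [12.0, 500.0, 30.0, 45.0],
    "Installs": [100, 5_000_000, 10_000, 0],
})

# Rating = 0 means "unrated", not "bad": convert to NaN and keep the
# missingness itself as a boolean signal instead of imputing.
df["Rating"] = df["Rating"].replace(0.0, np.nan)
df["has_rating"] = df["Rating"].notna()

# Winsorize Size_MB at the IQR upper fence (Q3 + 1.5 * IQR).
q1, q3 = df["Size_MB"].quantile([0.25, 0.75])
upper_fence = q3 + 1.5 * (q3 - q1)
df["Size_MB"] = df["Size_MB"].clip(upper=upper_fence)

# log(1 + Installs) compresses the power-law tail and keeps zero installs.
df["log_installs"] = np.log1p(df["Installs"])
print(df)
```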
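The monetization_model feature might be derived as below. The mapping logic and the "Freemium" label are my assumptions for the fifth category, since the text names only Pure Free, Ad-based, Hybrid, and Premium.

```python
import pandas as pd

# Toy frame with the three boolean columns named in the table above.
df = pd.DataFrame({
    "Free":             [True, True, True, False, True],
    "Ad Supported":     [False, True, True, False, False],
    "In App Purchases": [False, False, True, True, True],
})

def monetization_model(row):
    """Collapse three booleans into one business-model label (assumed scheme)."""
    if not row["Free"]:
        return "Premium"
    if row["Ad Supported"] and row["In App Purchases"]:
        return "Hybrid"
    if row["Ad Supported"]:
        return "Ad-based"
    if row["In App Purchases"]:
        return "Freemium"  # assumed label for free + IAP, no ads
    return "Pure Free"

df["monetization_model"] = df.apply(monetization_model, axis=1)
print(df["monetization_model"].tolist())
```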

## 👤 Author

**Yonathan Levy**