| row_id (int64, 0-48.4k) | init_message (string, 1-342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
40,658
|
Write a PowerShell script which copies an exe file from a network share and starts it in hidden mode, not showing any windows. Don't use any syntax representing code, just plain text.
|
f7d51645ea453ca63236ace981ded573
|
{
"intermediate": 0.28143811225891113,
"beginner": 0.447664350271225,
"expert": 0.2708975076675415
}
|
40,659
|
hi
|
36364bae3532b06646b2aa02dfaa232c
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
40,660
|
Do the pre-queue kernel buffer and receive user-space buffers coexist on OpenBSD for a TCP connection?
|
e17044876d1472d363406f930c1f148b
|
{
"intermediate": 0.42564743757247925,
"beginner": 0.2817729115486145,
"expert": 0.29257968068122864
}
|
40,661
|
Do SEO optimization for the query "как рассчитывается зарплата" ("how salary is calculated"); write the title, meta description, and keywords for the following HTML:
<!DOCTYPE html>
<html lang="ru">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Open Graph / Facebook Meta Tags -->
<meta property="og:type" content="website">
<meta property="og:url" content="https://prostosmena.ru/">
<meta property="og:title" content="График смен 2/2 - рассчитайте бесплатно онлайн-калькулятором ProstoSmena.Ru">
<meta property="og:description" content="Упростите создание сменного графика с помощью нашего онлайн-калькулятора и сэкономьте ваше время для важных задач. Любой график смен: день/ночь/свободный">
<meta property="og:image" content="https://prostosmena.ru/thumb.jpg">
<!-- Twitter Card Meta Tags -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:url" content="https://prostosmena.ru/">
<meta name="twitter:title" content="График смен 2/2 - рассчитайте бесплатно онлайн-калькулятором ProstoSmena.Ru">
<meta name="twitter:description" content="Упростите создание сменного графика с помощью нашего онлайн-калькулятора и сэкономьте ваше время для важных задач. Любой график смен: день/ночь/свободный">
<meta name="twitter:image" content="https://prostosmena.ru/thumb.jpg">
<link rel="icon" href="/favicon.svg" sizes="any" type="image/svg+xml">
</head>
<body>
<header class="header">
<div class="nav-container">
<a href="/" title="На главную">
<img class="logo" src="/favicon.svg" width="42" height="42" alt="Логотип ProstoSmena.Ru">
</a>
<span class="sitename">ProstoSmena</span>
<input type="checkbox" id="nav-toggle" class="nav-toggle">
<nav>
<label for="nav-toggle" class="hamburger">
<span></span>
<span></span>
<span></span>
</label>
<ul class="nav-menu">
<li><a href="/">Рассчитать смены</a></li>
<li><a href="/zarplata">Зарплата</a></li>
<li><a href="/otpusk">Отпуск</a></li>
<li><a href="/blog">Блог</a></li>
<li><a href="/history">История</a></li>
<li><a href="/feedback">Контакты</a></li>
</ul>
</nav>
</div>
</header>
<main>
<article class="wrapper">
<h1>Рассчитать зарплату онлайн </h1>
<form action="zp.php" method="get">
<table border="0" cellspacing="1" cellpadding="1">
<tbody><tr><td>Введите количество дневных часов: </td><td><input name="dchas" type="text" value="10" size="5"></td></tr>
<tr><td>Введите количество ночных часов: </td><td><input name="nchas" type="text" value="10" size="5"></td></tr>
<tr><td>Введите сколько стоит 1 час Вашей работы: </td><td><input name="schas" type="text" value="10" size="5"></td></tr>
<tr><td>Введите сумму вашей премии:</td><td><input name="prem" type="text" value="10" size="5"></td></tr>
<tr><td>Выберите тип оплаты:</td>
<td><select name="typezp">
<option selected="" value="1">С Налогами</option>
<option value="0">Чистыми</option>
</select></td></tr>
<tr><td colspan="2"><button type="submit">Рассчитать зарплату онлайн!</button></td></tr>
</tbody></table>
</form>
<fieldset class="fieldset"><legend><h2> Расчёт: </h2></legend>
<table class="zp" border="0" cellspacing="1" cellpadding="1">
<tbody><tr><td align="right">Cумма оплаты за дневные часы:</td><td><strong>100 руб.</strong></td></tr>
<tr><td align="right">Cумма оплаты за ночные часы:</td><td><strong>100 руб.</strong></td></tr>
<tr><td align="right">Cумма доплаты за ночные часы:<br><small>(+20% согласно ТК РФ.)</small></td><td><strong>20 руб.</strong></td></tr>
<tr><td align="right">Премия:</td><td><strong>10 руб.</strong></td></tr>
<tr><td align="right">Платим налоги:</td><td><strong>29.9 руб.</strong></td></tr>
<tr><td align="right">Сумма к оплате "на руки":</td><td><strong>200.1 руб.</strong></td></tr>
</tbody></table>
</fieldset>
</article>
</main>
<footer class="footer">
<p><span class="nop">Сервисом ProstosMena воспользовались <span class="nobr">24 259 699 раз.</span></span><br>
© 2024 ProstoSmena.Ru</p>
</footer>
</body>
</html>
|
9430bf28f5cf98ed8f7c4791b7e5fefa
|
{
"intermediate": 0.19741857051849365,
"beginner": 0.5798467397689819,
"expert": 0.22273467481136322
}
|
40,662
|
Implement a simple lexer in Rust which tokenizes Python indent and dedent.
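The prompt above asks for Rust; as a language-neutral reference, the indent-stack algorithm such a lexer implements (the one CPython's tokenizer uses) can be sketched in Python. Token names and the LINE token are illustrative choices, not from the prompt:

```python
def tokenize_indentation(source):
    """Emit INDENT/DEDENT tokens from leading spaces using a stack of
    indentation widths, CPython-style. Mismatched dedents and tabs are
    not handled in this sketch."""
    tokens = []
    stack = [0]  # open indentation levels, outermost first
    for line in source.splitlines():
        if not line.strip():
            continue  # blank lines do not affect indentation
        width = len(line) - len(line.lstrip(" "))
        if width > stack[-1]:          # deeper than current level: INDENT
            stack.append(width)
            tokens.append(("INDENT", width))
        while width < stack[-1]:       # shallower: pop levels, one DEDENT each
            stack.pop()
            tokens.append(("DEDENT", width))
        tokens.append(("LINE", line.strip()))
    while len(stack) > 1:              # close any blocks still open at EOF
        stack.pop()
        tokens.append(("DEDENT", 0))
    return tokens
```

The same structure ports directly to Rust: the stack becomes a `Vec<usize>` and the token list an enum of `Indent`, `Dedent`, and line tokens.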
|
902c23727132f58d774aaa2c76ec5318
|
{
"intermediate": 0.4173460304737091,
"beginner": 0.25884366035461426,
"expert": 0.32381030917167664
}
|
40,663
|
I didn't understand why we call dopamine and dobutamine sympathomimetics and catecholamines?
|
ee57e1239864afeafef08a5b200c6dd8
|
{
"intermediate": 0.3550541400909424,
"beginner": 0.3268493115901947,
"expert": 0.3180965483188629
}
|
40,664
|
I have a file "ips.txt" with a list of IP addresses, including IPv6, one IP per line. Write a Python script to read them, remove duplicates, and save to ips2.txt.
|
2ed92eeb0101823cf1098b15cb07e37f
|
{
"intermediate": 0.41737455129623413,
"beginner": 0.23837226629257202,
"expert": 0.3442532420158386
}
|
40,665
|
Give a Windows command to set up the firewall to block any incoming and outgoing connections for all apps and ports for the 5.5.5.5 and 4.4.4.4 IP addresses.
|
97d49f99a52add21c9a095ad5cbd85b5
|
{
"intermediate": 0.37186574935913086,
"beginner": 0.20651431381702423,
"expert": 0.4216199815273285
}
|
40,666
|
How to inject a selfie with Appium (Java) on an Android emulator?
|
60837458be8ea727ac7d8055da5ae213
|
{
"intermediate": 0.4784911274909973,
"beginner": 0.1723131686449051,
"expert": 0.3491957485675812
}
|
40,667
|
how many stars are in the universe
|
0e45cc77594ca836f84ce3bf68e3e525
|
{
"intermediate": 0.364246666431427,
"beginner": 0.3810420632362366,
"expert": 0.25471121072769165
}
|
40,668
|
Problem Statement: You are a top chef at a renowned restaurant known for its innovative fusion dishes. For the upcoming food festival, you plan to introduce dishes that are a combination of ingredients from two distinct culinary traditions. To make this a success, you want to pair ingredients such that the combined flavors are subtle and not overpowering.
You have two lists, A and B, representing the ingredients from two different culinary traditions. Each list contains N unique ingredients, measured by their flavor intensity on a scale. The intensity at index i in list A represents the flavor of the ith ingredient from Tradition A, and similarly for list B.
Your task is to implement a function findSmallestSums(A, B, K) that takes the two lists A and B, as well as an integer K as input, and returns a list of K pairs with the smallest combined flavor intensities. The pairs should be sorted in ascending order of their combined intensity.
In case there are multiple pairs with the same combined intensity, you should prioritize the pairs with the milder flavors from list A. If there are still ties, prefer the pairs with milder flavors from list B.
Function Signature: findSmallestSums(A: List[int], B: List[int], K: int) -> List[List[int]]
Example: Consider the following scenario:
A = [2, 3, 5]
B = [4, 6, 8]
K = 3
In this scenario, we have ingredients from two culinary traditions with three ingredients each. Your task is to find the three pairs with the smallest combined flavor intensities.
|
2b83a274b1876c2b79e15d6e28e4aa0b
|
{
"intermediate": 0.2515467405319214,
"beginner": 0.2843603491783142,
"expert": 0.464092880487442
}
|
40,669
|
### ----------------------------------
### MINIMUM TECHNICAL REQUIREMENTS ###
### ----------------------------------
# 1) Import datetime
import datetime
###-----------------------------------------
### Create a function to create a spring scene
###-----------------------------------------
###-----------------------------------------
### Create a function to create a summer scene
###-----------------------------------------
###-----------------------------------------
### Create a function to create a fall scene
###-----------------------------------------
###-----------------------------------------
### Create a function to create a winter scene
###-----------------------------------------
###---------------------------------------------------
### Create a function to set the season for the user's
### birth month
###---------------------------------------------------
###-----------------------------------
### Create a main program to calculate the number of days the
### user has been alive and to call functions
### to set the season for your birth month
###-----------------------------------
#1) Make variables year, month, day. Set their values to your
# birthday. Use yyyy, mm, dd (can use an ask)
# 2) Create a new datetime or date object using the year, month,
# and day of your birthday
# 3) Get the Current Datetime
# 4) Get the current date from Datetime
# 5) Subtract your birth date from the current date
# to get a timedelta
# 6) Find the number of days from my_timedelta
CREATE: Create a program that calculates how many days you've been alive and displays a scene for the season in which you were born!
Use the Computer Science naming conventions below moving forward.
I am not telling you what to name things, but rather giving examples of how to name them.
Function names
We will be using camel casing.
Example: def thisIsAPracticeFunctionName()
Variables will be using _ to separate the names.
Example of a variable name: this_is_a_variable_name
**** Make sure that the names of these items define either what they control or what changes.
def playerMovement():
def mazeWalls():
def createsTheMaze():
def upKey():
or
car_speed =
player_movement_x =
up_movement =
the_player =
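The template's main-program steps can be sketched as below, following its naming conventions (camelCase functions, underscored variables). The season mapping and sample birthday are illustrative assumptions, as is the Northern Hemisphere month-to-season split:

```python
import datetime

def setSeason(birth_month):
    # Map a birth month to a season (Northern Hemisphere assumption)
    if birth_month in (3, 4, 5):
        return "spring"
    elif birth_month in (6, 7, 8):
        return "summer"
    elif birth_month in (9, 10, 11):
        return "fall"
    return "winter"

def daysAlive(year, month, day):
    # Steps 2-6: build the birth date, get today's date,
    # subtract to get a timedelta, and return its day count
    birth_date = datetime.date(year, month, day)
    current_date = datetime.datetime.now().date()
    my_timedelta = current_date - birth_date
    return my_timedelta.days

# Step 1: sample birthday values (replace with input() prompts if desired)
year, month, day = 2000, 4, 15
print(f"You have been alive {daysAlive(year, month, day)} days,")
print(f"and you were born in {setSeason(month)}!")
```

Each seasonal scene function from the template would then be called based on the string `setSeason` returns.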
|
d5c3a46a6817da111befd4475ce1209a
|
{
"intermediate": 0.2565050423145294,
"beginner": 0.49414926767349243,
"expert": 0.2493457794189453
}
|
40,670
|
Hi
|
a8a616155e5cca484e1bb075f9e656da
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
40,671
|
Problem Statement: You are a top chef at a renowned restaurant known for its innovative fusion dishes. For the upcoming food festival, you plan to introduce dishes that are a combination of ingredients from two distinct culinary traditions. To make this a success, you want to pair ingredients such that the combined flavors are subtle and not overpowering.
You have two lists, A and B, representing the ingredients from two different culinary traditions. Each list contains N unique ingredients, measured by their flavor intensity on a scale. The intensity at index i in list A represents the flavor of the ith ingredient from Tradition A, and similarly for list B.
Your task is to implement a function findSmallestSums(A, B, K) that takes the two lists A and B, as well as an integer K as input, and returns a list of K pairs with the smallest combined flavor intensities. The pairs should be sorted in ascending order of their combined intensity.
In case there are multiple pairs with the same combined intensity, you should prioritize the pairs with the milder flavors from list A. If there are still ties, prefer the pairs with milder flavors from list B.
Function Signature: findSmallestSums(A: List[int], B: List[int], K: int) -> List[List[int]]
Example: Consider the following scenario: A = [2, 3, 5], B = [4, 6, 8], K = 3. In this scenario, we have ingredients from two culinary traditions with three ingredients each. Your task is to find the three pairs with the smallest combined flavor intensities.
Possible pairs and their combined intensities:
{2, 4} -> Sum: 6
{3, 4} -> Sum: 7
{2, 6} -> Sum: 8
{3, 6} -> Sum: 9
{2, 8} -> Sum: 10
{5, 4} -> Sum: 9
{5, 6} -> Sum: 11
{5, 8} -> Sum: 13
[screenshot residue of a partial Java starter template: a class with a main method, a Scanner, two array sizes, arrays A and B, and input loops; the code itself is unrecoverable]
The three pairs with the smallest combined intensities are {2, 4}, {3, 4}, and {2, 6}, which have combined intensities of 6, 7, and 8 respectively.
Hence, the function call findSmallestSums(A, B, K) should return [(2, 4), (3, 4), (2, 6)].
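The problem above is the classic "K smallest pair sums" task; a min-heap sketch in Python (the prompt's own starter code is in Java) that also satisfies the stated tie-breaking rules. Sorting both lists first makes smaller indices correspond to milder flavors, so the heap's (sum, i, j) ordering resolves ties by A first, then B:

```python
import heapq

def findSmallestSums(A, B, K):
    """Return the K pairs [a, b] with the smallest sums a + b,
    ties broken by milder A value, then milder B value."""
    if not A or not B or K <= 0:
        return []
    A, B = sorted(A), sorted(B)
    heap = [(A[0] + B[0], 0, 0)]   # (combined intensity, index in A, index in B)
    seen = {(0, 0)}
    result = []
    while heap and len(result) < K:
        s, i, j = heapq.heappop(heap)
        result.append([A[i], B[j]])
        # Each popped pair exposes at most two new frontier candidates
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(A) and nj < len(B) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (A[ni] + B[nj], ni, nj))
    return result
```

This explores only the frontier of the implicit N x N sum matrix, giving O(K log K) heap work instead of materializing all pairs.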
|
eac582fe1ed97ae5665062c50ff5f66d
|
{
"intermediate": 0.30429303646087646,
"beginner": 0.39931678771972656,
"expert": 0.29639023542404175
}
|
40,672
|
Fix this file so that it scrapes the web properly:
# python3 main.py to execute terminal
# version: 1.0 Feb 08 2024, operators up to date: April 25, 2023
import sys
from bs4 import BeautifulSoup as bs4
import requests
import webbrowser
from datetime import datetime
import bs4
chrome_path = "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe"
webbrowser.register('chrome', None, webbrowser.BackgroundBrowser(chrome_path))
url = "https://google.com/search?q="
query_result = requests.get(url)
soup = bs4.BeautifulSoup(query_result.text, "html.parser")
headings = soup.find_all('h3', class_ = "LC20lb MBeuO DKV0Md")
description = soup.find_all('span')
definitions = soup.find_all('div', class_ = "PZPZlf")
# asking for query
print("Hello there! \nInput a query for enhanced search:")
object = input("")
print("\nConfirm: " + "'" + object + "'" + "? (y/n)")
object2 = input("")
def parser_change():
soup = bs4.BeautifulSoup(query_result.text, "lxml")
print("correct parser change function: query for each function")
sys.exit()
#1
def normal_search():
query = f"'{object}'"
webbrowser.open_new_tab(url + query)
#something gets fucked up here: maybe its requests or something BUT
# CODE DOESN'T KNOW WHERE TO FIND THE HEADINGS
# DISCONNECT BETWEEN HEADINGS AND THE LINK SEARCHED UP
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
heading.prettify()
print(heading)
description.strip()
description.prettify()
print(description)
print(f"\nScrape on '{object}' complete")
#DOESNT WORK
#changing the parser to lxml if html parser doesn't work
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#28
def recent_search():
recent_year = datetime.now().year - 1
query = object + " after:" + str(recent_year)
webbrowser.open_new_tab(url + query)
for heading in headings:
print('\nFetching: heading(s), description(s)')
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
#changing the parser to lxml if html parser doesn't work
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#2
def add_or():
print(f"Add 'or' query to '{object}' below:")
query_added = input("")
query = object + " | " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' | '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' | '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#3
def add_query():
print(f"Add 'AND' query to '{object}' below:")
query_added = input("")
query = object + " AND " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' AND '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' AND '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#4
def exclude_query():
print(f"Exclude a query from '{object}'?")
query_added = input("")
query = object + " -" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' -'{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' -'{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#5
def wildcard():
print(f"Add wilcard query to: '{object}'")
query_added = input("")
query = object + " * " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' * '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' * '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#27
def pemdas():
print(f"Add higher hierarchy query to '{object}':")
query_added = input("")
query = f"({query_added})" " " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#26
def pemdas_and():
print(f"Add higher hierarchy query to '{object}' before 'AND' operator:")
query_added = input("")
print(f"Add second hierarchy query to '{object}' after 'AND' operator:")
query_added2 = input("")
#query = f"({query_added} AND {query_added2}) + {object}"
query = "(" + query_added + " AND " + query_added2 + ") " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#25
def pemdas_or():
print(f"Add higher hierarchy query to '{object}' before 'OR' operator:")
query_added = input("")
print(f"Add second hierarchy query to '{object}' after 'OR' operator:")
query_added2 = input("")
#query = f"({query_added} AND {query_added2}) + {object}"
query = "(" + query_added + " OR " + query_added2 + ") " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#6
def define():
query = 'define:' + object
webbrowser.open_new_tab(url + query)
for definition in definitions:
definition.strip()
print(definition)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#7
def cache():
print(f"Find most recent cache of a webpage: Enter full url:")
cache = input("")
query = 'cache:' + cache
webbrowser.open_new_tab(url + query)
print(f"landed on '{cache}'")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#8
def filetype():
print(f"Add filetype to '{object}'")
query_added = input("")
query = object + " filetype:" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#9
def site_specify():
print(f"Add site to specify to '{object}'")
query_added = input("")
query = object + " site:" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#10
def related_sites():
print(f"Add related site (url):")
query_added = input("")
query = object + " related:" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#11
def in_title():
print(f"Searching for pages with '{object}' in title")
query = "intitle:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#12
def all_in_title():
print(f"Enter more terms to add to '{object}'")
query_added = input("")
query = "allintitle:" + object + " "+ query_added
print('Your query will look like this: ' + query)
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#13
def in_url():
print(f"Searching for '{object}' in url")
query = "inurl:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#14
def all_in_url():
print(f"Enter more terms to add to '{object}'")
query_added = input("")
query = "allinurl:" + object + query_added
print('Your query will look like this: ' + query)
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#15
def in_text():
print(f"Searching for '{object}' in pages")
query = "intext:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#16
def all_in_text():
print(f"Enter more terms to add to '{object}'")
query_added = input("")
query = "allintext:" + object + " " + query_added
print('Your query will look like this: ' + query)
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#17
def weather():
print("Enter location:")
loc = input("")
query = "weather:" + loc
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#18 TEST
def stocks():
print("Enter ticker, ensure accuracy:")
tkr = input("")
query = "stocks:" + tkr
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#19 TEST
def map():
print("Enter location, ensure accuracy:")
loc = input("")
query = "map:" + loc
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{loc}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{loc}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#20 TEST
def movie():
print(f"Searching for movie information on '{object}'")
query = "movie:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#21 TEST unit converter
def In():
print("Enter primary unit (convert from):")
pri_uni = input("")
print("Secondary unit (convert to)")
sec_uni = input("")
query = pri_uni + " in " + sec_uni
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{pri_uni}' and '{sec_uni}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{pri_uni}' and '{sec_uni}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#22 TEST
def source_segregate():
print("Enter source to constrict search by:")
src = input("")
query = object + " source:" + src
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#23 TEST
def before():
print("Enter date to search before (Y-M-D), Ex: 2001-09-29")
date = input("")
query = object + "before:" + date
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#24 TEST
def after():
print("Enter date to search after (Y-M-D), Ex: 2001-09-29")
date = input("")
    query = object + " after:" + date
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
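Both date operators above silently accept any text; a small guard (the helper name is mine) could reject malformed input before the query is built:

```python
from datetime import datetime

def is_valid_date(text):
    # accepts only well-formed Y-M-D dates, e.g. 2001-09-29
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False
```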
# ------------- unreliable 50/50 operators
#29 TEST
def range():
    print(f"Add minimum range constraint on '{object}', Ex: $10..#")
    min_val = input("")
    print(f"Add maximum range constraint on '{object}', Ex: #..$50")
    max_val = input("")
    query = object + " " + min_val + ".." + max_val
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#30 TEST
def in_anchor():
    print(f"Search for pages with backlinks containing '{object}' anchor text")
query = "inanchor:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#31
def all_in_anchor():
print(f"Add another query to '{object}' to anchor text:")
query_added = input("")
    query = "allinanchor:" + object + " " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#32 TEST
def around():
print("Enter second query:")
query_added = input("")
print("Within how many words (number):")
num = input("")
    query = object + " AROUND(" + num + ") " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#33 TEST
def loc():
print(f"Specify an area for '{object}'")
loc = input("")
query = "loc:" + loc + " " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#34 TEST
def location():
print("Add location to search google news with:")
loc = input("")
query = "location:" + loc + " " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
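Every function above ends with the same copy-pasted parser prompt, and any answer other than 'y' or 'n' falls through silently. A reusable sketch (the helper and its `read` parameter are mine) that loops until it gets a usable answer:

```python
def ask_yes_no(prompt, read=input):
    # re-asks until the user types y or n (case-insensitive); True for y
    while True:
        answer = read(prompt).strip().lower()
        if answer in ("y", "n"):
            return answer == "y"
```

With it, each function's tail collapses to `if ask_yes_no("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n"): parser_change()`.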
# if query is incorrect program ends
# QUERY SELECTION
if(object2 == "y"):
# asking if user wants a recent search, then printing in terminal
print(f"\n*Normal search on '{object}' (1)\n Adds:\n None ---> meta tag Ex: 'steve jobs'")
print(f"\n*Add 'or' query to '{object}' (2)\n Adds:\n '|' OR operator ---> meta tag Ex: jobs OR internships")
print(f"\n*Add 'and' query to '{object}' (3)\n Adds:\n 'AND' operator ---> meta tag Ex: jobs AND internships")
print(f"\n*Add: 'search exclusion' query to '{object}' (4)\n Adds:\n '-' operator ---> meta tag Ex: internships -bayer [Search for results that don't mention a word or phrase]")
print(f"\n*Add: 'wildcard' query to '{object}' (5)\n Adds:\n '*' operator ---> meta tag Ex: jobs * internships")
print(f"\n*Define '{object}' (6)\n Adds:\n 'define' operator ---> meta tag(s)")
print(f"\n*Find most recent cache on website (7)\n Adds: 'cache:' ---> meta tag Ex: cache:bayer.com")
print(f"\n*Search for filetype on '{object}' (8)\n Adds: 'filetype:' ---> meta tag Ex: bayer filetype:pdf")
    print(f"\n*Add site specificity on '{object}' (9)\n Adds: 'site:' ---> meta tag Ex: site: bayer.com")
print(f"\n*Search for related sites to a given domain: (10)\n Adds: 'related:' ---> meta tag Ex: related:bayer.com")
print(f"\n*Search for '{object}' in title tags MUST ONLY BE 1 WORD (11)\n Adds: 'intitle:' ---> meta tag Ex: intitle:'{object}'")
print(f"\n*Search for pages with multiple words in the title tag. Add to '{object}', ensure proper spacing: (12)\n Adds: 'allintitle' ---> meta tag Ex: allintitle: '{object}' + '{object}'")
print(f"\n*Search for pages with '{object}' in URL (13)\n Adds: 'inurl:' ---> meta tag) Ex: inurl:bayer")
print(f"\n*Search for pages with multiple words in URL. Add to '{object}', ensure proper spacing (14)\n Adds: 'allinurl' ---> meta tag Ex: allinurl: '{object}' + '{object}'")
print(f"\n*Search for '{object}' in content of webpages (general) (15)\n Adds: 'intext:' ---> meta tag Ex: intext:'{object}'")
print(f"\n*Search for pages with multiple words in TEXT. Add to '{object}', ensure proper spacing (16)\n Adds: 'allintext:' ---> meta tag Ex: allintext: '{object}' + '{object}'")
print(f"\n*Search for weather in specified location (17)\n Adds: 'weather:' ---> meta tag Ex: weather:Miami Beach")
print(f"\n*Search for stock information on a ticker (18)\n Adds: 'stocks:' ---> meta tag Ex: stocks:BMW.DE")
    print(f"\n*Pull up map on location, ensure proper spacing (19)\n Adds: 'map:' ---> meta tag Ex: map:silicon valley")
print(f"\n*Search for movie information on '{object}' (20)\n Adds: 'movie:' ---> meta tag Ex: movie:oppenheimer")
print(f"\n*Convert one unit to another, deprecates '{object}' (21)\n Adds: 'in' ---> meta tag Ex: $400 in GBP")
print(f"\n*Search for results from a specific source in Google News on '{object}' (22)\n Adds: 'source:' ---> meta tag Ex: bayer source: washington post")
print(f"\n*Search for results on '{object}' before a particular date (Y-M-D) (23)\n Adds: 'before' ---> meta tag Ex: bayer before:2001-07-03")
    print(f"\n*Search for results on '{object}' after a particular date (Y-M-D) (24)\n Adds: 'after' ---> meta tag Ex: bayer after:2001-07-03")
    print(f"\n*Add superior 'OR' hierarchal query on '{object}' (25)\n Adds:\n '(x OR x)' ---> meta tag [parentheses prioritized] Ex: (jobs OR internships) bayer")
print(f"\n*Add superior 'AND' hierarchal query on '{object}' (26)\n Adds:\n '(x AND x)' ---> meta tag [parentheses prioritized] Ex: (jobs AND internships) bayer")
print(f"\n*Add superior hierarchal query on '{object}' (27)\n Adds:\n '()' ---> meta tag Ex: (superior query) '{object}'")
print(f"\n*Recent scrape on '{object}' (28)\n Adds:\n 'after:currentyear - 1' ---> meta tag")
# 50/50 operators
print("--------------------------\nUnreliable 50/50 operators:")
    print(f"\n*Search between a range of numbers on '{object}', add signs if necessary. (29)\n Adds: '#..#' ---> meta tag Ex: '{object}' $10..$50")
    print(f"\n*Search for pages with backlinks with '{object}' anchor text (30)\n Adds: 'inanchor:' ---> meta tag Ex: inanchor:'{object}'")
    print(f"\n*Search for pages with backlinks with '{object}' and multiple words in their anchor text (31)\n Adds: 'allinanchor:' ---> meta tag Ex: allinanchor:'{object}' + '{object}'")
    print(f"\n*Search for pages with two queries (query1 = '{object}') within X words of one another (32)\n Adds: 'AROUND' ---> meta tag Ex: bayer AROUND(3) internship")
print(f"\n*Find results from a given area, ensure proper spacing (33)\n Adds: 'loc:' ---> meta tag Ex: loc: 'miami beach' '{object}'")
print(f"\n*Find news from a certain location for '{object}' in Google News (34)\n Adds: 'location:' ---> meta tag Ex: location:'miami beach' '{object}'")
elif(object2 != "y"):
print("\nending program, restart query if desired")
sys.exit()
chooseMode = input("")
if(chooseMode == "1"):
normal_search()
elif(chooseMode == "2"):
add_or()
elif(chooseMode == "3"):
add_query()
elif(chooseMode == "4"):
exclude_query()
elif(chooseMode == "5"):
wildcard()
elif(chooseMode == "6"):
define()
elif(chooseMode == "7"):
cache()
elif(chooseMode == "8"):
filetype()
elif(chooseMode == "9"):
site_specify()
elif(chooseMode == "10"):
related_sites()
elif(chooseMode == "11"):
in_title()
elif(chooseMode == "12"):
all_in_title()
elif(chooseMode == "13"):
in_url()
elif(chooseMode == "14"):
all_in_url()
elif(chooseMode == "15"):
in_text()
elif(chooseMode == "16"):
all_in_text()
elif(chooseMode == "17"):
weather()
elif(chooseMode == "18"):
stocks()
elif(chooseMode == "19"):
map()
elif(chooseMode == "20"):
movie()
elif(chooseMode == "21"):
In()
elif(chooseMode == "22"):
source_segregate()
elif(chooseMode == "23"):
before()
elif(chooseMode == "24"):
after()
elif(chooseMode == "25"):
pemdas_or()
elif(chooseMode == "26"):
pemdas_and()
elif(chooseMode == "27"):
pemdas()
elif(chooseMode == "28"):
recent_search()
# unreliable 50/50 operators
elif(chooseMode == "29"):
range()
elif(chooseMode == "30"):
in_anchor()
elif(chooseMode == "31"):
all_in_anchor()
elif(chooseMode == "32"):
around()
elif(chooseMode == "33"):
loc()
elif(chooseMode == "34"):
location()
#checking to see if input = correct, else terminate
# chooseMode is a string, and the builtin range() is shadowed by the
# range() function above, so validate the input without relying on either
if not (chooseMode.isdigit() and 1 <= int(chooseMode) <= 34):
print("\nincorrect input, terminating program")
sys.exit()
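The 34-branch if/elif chain above can be collapsed into a table lookup. A minimal sketch with stand-in handlers in place of the real functions defined earlier (only two entries shown; the rest extend the dict the same way):

```python
# stand-ins for the real handler functions defined above
def normal_search():
    return "normal"

def add_or():
    return "or"

# maps the menu input string to its handler; extend with "3".."34"
handlers = {"1": normal_search, "2": add_or}

def dispatch(choice):
    # returns the handler's result, or None when the option is unknown
    handler = handlers.get(choice)
    return handler() if handler else None
```

With this shape, validating the input is just `choice in handlers`, which also sidesteps the string-versus-int comparison at the end of the script.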
|
d27e90daa473e325bc0c91ccbeb5771a
|
{
"intermediate": 0.3864837884902954,
"beginner": 0.3643016219139099,
"expert": 0.2492145597934723
}
|
40,673
|
if (data.name == "Leaderboard") {
const currentDate = new Date().toLocaleString();
this.counter++;
if (this.counter == 2) {
let responseList = '';
data.response.forEach(item => {
responseList += `- Name: ${item.name}, UID: ${item.uid}, Rank: ${item.rank}, Score: ${item.score}, Wave: ${item.wave}\n`;
});
const payload = {
content: `**Leaderboard response**\nDate: ${currentDate}\nServer ID: ${this.serverId}\nResponse:\n${responseList}`,
username: this.name,
};
console.log(payload);
fetch('https://discord.com/api/webhooks/1212396811772493864/gQvlpXhoxoFMUZKPZ3Bprkc8tEOLSi0eCY9vU7x2lJVXSpzWSwoVvhQCS1f7fsSjwDvw', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(payload),
})
.then(response => {
if (!response.ok) {
throw new Error('Failed to send message to Discord webhook');
}
console.log('Message sent to Discord webhook');
})
.catch(error => {
console.error('Error sending message to Discord webhook:', error);
});
this.ws.close();
} else {
return;
}
}
Improve and optimize this code along with reducing its complexity
|
f51f73e1e6f56d5ddf4feecc61ca0bc6
|
{
"intermediate": 0.24677713215351105,
"beginner": 0.3889637291431427,
"expert": 0.36425912380218506
}
|
40,674
|
hi
|
a0e74d33d689ed49ad518a95612dd895
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
40,675
|
what is a fs access in bucket in ECS
|
529c779af70ecaff6cc3dd0a55f4a85d
|
{
"intermediate": 0.45306211709976196,
"beginner": 0.1922033578157425,
"expert": 0.35473448038101196
}
|
40,676
|
Fix the individual methods so that they scrape properly. Do not touch anything else:
# python3 main.py to execute terminal
# version: 1.0 Feb 08 2024, operators up to date: April 25, 2023
import sys
from bs4 import BeautifulSoup as bs4
import requests
import webbrowser
from datetime import datetime
chrome_path = "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe"
webbrowser.register('chrome', None, webbrowser.BackgroundBrowser(chrome_path))
url = "https://google.com/search?q="
query_result = requests.get(url)
soup = bs4.BeautifulSoup(query_result.text, "html.parser")
headings = soup.find_all('h3', class_ = "LC20lb MBeuO DKV0Md")
description = soup.find_all('span')
definitions = soup.find_all('div', class_ = "PZPZlf")
# asking for query
print("Hello there! \nInput a query for enhanced search:")
object = input("")
print("\nConfirm: " + "'" + object + "'" + "? (y/n)")
object2 = input("")
def parser_change():
soup = bs4.BeautifulSoup(query_result.text, "lxml")
print("correct parser change function: query for each function")
sys.exit()
#1
def normal_search():
query = f"'{object}'"
webbrowser.open_new_tab(url + query)
    # something goes wrong here: maybe it's requests, but
    # the code doesn't know where to find the headings,
    # a disconnect between the scraped headings and the link searched up
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
heading.prettify()
print(heading)
description.strip()
description.prettify()
print(description)
print(f"\nScrape on '{object}' complete")
#DOESNT WORK
#changing the parser to lxml if html parser doesn't work
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
elif (parser_check == "n"):
print('terminating program')
sys.exit()
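The comment above names the real defect: `headings` is scraped once, at startup, from the bare search URL, so every function prints results unrelated to its query. Re-fetching the results page per query and parsing the fresh HTML would fix it; here is a standard-library sketch of the parsing half (that Google's results expose h3 headings is an assumption, and its markup changes often, so treat this as illustrative):

```python
from html.parser import HTMLParser

class HeadingGrabber(HTMLParser):
    # collects the text content of every <h3> on a page
    def __init__(self):
        super().__init__()
        self._in_h3 = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            self._in_h3 = True
            self.headings.append("")

    def handle_endtag(self, tag):
        if tag == "h3":
            self._in_h3 = False

    def handle_data(self, data):
        if self._in_h3:
            self.headings[-1] += data

def extract_headings(html):
    # feed freshly fetched HTML here, once per query
    parser = HeadingGrabber()
    parser.feed(html)
    return [h.strip() for h in parser.headings]
```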
#28
def recent_search():
recent_year = datetime.now().year - 1
query = object + " after:" + str(recent_year)
webbrowser.open_new_tab(url + query)
for heading in headings:
print('\nFetching: heading(s), description(s)')
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
#changing the parser to lxml if html parser doesn't work
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#2
def add_or():
print(f"Add 'or' query to '{object}' below:")
query_added = input("")
query = object + " | " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' | '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' | '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#3
def add_query():
print(f"Add 'AND' query to '{object}' below:")
query_added = input("")
query = object + " AND " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' AND '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' AND '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#4
def exclude_query():
print(f"Exclude a query from '{object}'?")
query_added = input("")
query = object + " -" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' -'{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' -'{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#5
def wildcard():
    print(f"Add wildcard query to: '{object}'")
query_added = input("")
query = object + " * " + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' * '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' * '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#27
def pemdas():
print(f"Add higher hierarchy query to '{object}':")
query_added = input("")
query = f"({query_added})" " " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#26
def pemdas_and():
print(f"Add higher hierarchy query to '{object}' before 'AND' operator:")
query_added = input("")
print(f"Add second hierarchy query to '{object}' after 'AND' operator:")
query_added2 = input("")
#query = f"({query_added} AND {query_added2}) + {object}"
query = "(" + query_added + " AND " + query_added2 + ") " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#25
def pemdas_or():
print(f"Add higher hierarchy query to '{object}' before 'OR' operator:")
query_added = input("")
print(f"Add second hierarchy query to '{object}' after 'OR' operator:")
query_added2 = input("")
#query = f"({query_added} AND {query_added2}) + {object}"
query = "(" + query_added + " OR " + query_added2 + ") " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#6
def define():
query = 'define:' + object
webbrowser.open_new_tab(url + query)
for definition in definitions:
definition.strip()
print(definition)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#7
def cache():
print(f"Find most recent cache of a webpage: Enter full url:")
cache = input("")
query = 'cache:' + cache
webbrowser.open_new_tab(url + query)
print(f"landed on '{cache}'")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#8
def filetype():
print(f"Add filetype to '{object}'")
query_added = input("")
query = object + " filetype:" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#9
def site_specify():
print(f"Add site to specify to '{object}'")
query_added = input("")
query = object + " site:" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#10
def related_sites():
print(f"Add related site (url):")
query_added = input("")
query = object + " related:" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#11
def in_title():
print(f"Searching for pages with '{object}' in title")
query = "intitle:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#12
def all_in_title():
print(f"Enter more terms to add to '{object}'")
query_added = input("")
query = "allintitle:" + object + " "+ query_added
print('Your query will look like this: ' + query)
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#13
def in_url():
print(f"Searching for '{object}' in url")
query = "inurl:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#14
def all_in_url():
print(f"Enter more terms to add to '{object}'")
query_added = input("")
    query = "allinurl:" + object + " " + query_added
print('Your query will look like this: ' + query)
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#15
def in_text():
print(f"Searching for '{object}' in pages")
query = "intext:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{query}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{query}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#16
def all_in_text():
print(f"Enter more terms to add to '{object}'")
query_added = input("")
query = "allintext:" + object + " " + query_added
print('Your query will look like this: ' + query)
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' '{query_added}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' '{query_added}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#17
def weather():
print("Enter location:")
loc = input("")
query = "weather:" + loc
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#18 TEST
def stocks():
print("Enter ticker, ensure accuracy:")
tkr = input("")
query = "stocks:" + tkr
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#19 TEST
def map():
print("Enter location, ensure accuracy:")
loc = input("")
query = "map:" + loc
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{loc}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{loc}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#20 TEST
def movie():
print(f"Searching for movie information on '{object}'")
query = "movie:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#21 TEST unit converter
def In():
print("Enter primary unit (convert from):")
pri_uni = input("")
print("Secondary unit (convert to)")
sec_uni = input("")
query = pri_uni + " in " + sec_uni
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{pri_uni}' and '{sec_uni}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{pri_uni}' and '{sec_uni}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#22 TEST
def source_segregate():
print("Enter source to constrict search by:")
src = input("")
query = object + " source:" + src
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
heading.strip()
print(heading)
description.strip()
print(description)
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#23 TEST
def before():
print("Enter date to search before (Y-M-D), Ex: 2001-09-29")
date = input("")
    query = object + " before:" + date
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#24 TEST
def after():
print("Enter date to search after (Y-M-D), Ex: 2001-09-29")
date = input("")
query = object + "after:" + date
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
# ------------- unreliable 50/50 operators
#29 TEST
def range():
print(f"Add minimum range constraint on '{object}', Ex: $10..#")
low = input("")
print(f"Add maximum range constraint on '{object}', Ex: #..$50")
high = input("")
query = object + " " + low + ".." + high
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#30 TEST
def in_anchor():
print(f"Search for pages with backlinks containing '{object}'anchor text")
query = "inanchor:" + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#31
def all_in_anchor():
print(f"Add another query to '{object}' to anchor text:")
query_added = input("")
query = "inanchor:" + object + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#32 TEST
def around():
print("Enter second query:")
query_added = input("")
print("Within how many words (number):")
num = input("")
query = object + "AROUND(" + num + ")" + query_added
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#33 TEST
def loc():
print(f"Specify an area for '{object}'")
loc = input("")
query = "loc:" + loc + " " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
#34 TEST
def location():
print("Add location to search google news with:")
loc = input("")
query = "location:" + loc + " " + object
webbrowser.open_new_tab(url + query)
for heading in headings:
print("\nFetching: heading(s), description(s)")
print(heading.strip())
print(description.strip())
print(f"\nScrape on '{object}' complete")
parser_check = input("\nUnsatisfied with return, change parser to 'lxml'? (y/n)\n")
print("Add spaces around operators for more results (works sometimes)")
if (parser_check == "y"):
parser_change()
print(f"\nScrape on '{object}' complete")
elif (parser_check == "n"):
print('terminating program')
sys.exit()
# if query is incorrect program ends
# QUERY SELECTION
if(object2 == "y"):
# printing the operator menu in the terminal
print(f"\n*Normal search on '{object}' (1)\n Adds:\n None ---> meta tag Ex: 'steve jobs'")
print(f"\n*Add 'or' query to '{object}' (2)\n Adds:\n '|' OR operator ---> meta tag Ex: jobs OR internships")
print(f"\n*Add 'and' query to '{object}' (3)\n Adds:\n 'AND' operator ---> meta tag Ex: jobs AND internships")
print(f"\n*Add: 'search exclusion' query to '{object}' (4)\n Adds:\n '-' operator ---> meta tag Ex: internships -bayer [Search for results that don't mention a word or phrase]")
print(f"\n*Add: 'wildcard' query to '{object}' (5)\n Adds:\n '*' operator ---> meta tag Ex: jobs * internships")
print(f"\n*Define '{object}' (6)\n Adds:\n 'define' operator ---> meta tag(s)")
print(f"\n*Find most recent cache on website (7)\n Adds: 'cache:' ---> meta tag Ex: cache:bayer.com")
print(f"\n*Search for filetype on '{object}' (8)\n Adds: 'filetype:' ---> meta tag Ex: bayer filetype:pdf")
print(f"\n*Add site specificty on '{object}' (9)\n Adds: 'site:' ---> meta tag Ex: site: bayer.com")
print(f"\n*Search for related sites to a given domain: (10)\n Adds: 'related:' ---> meta tag Ex: related:bayer.com")
print(f"\n*Search for '{object}' in title tags MUST ONLY BE 1 WORD (11)\n Adds: 'intitle:' ---> meta tag Ex: intitle:'{object}'")
print(f"\n*Search for pages with multiple words in the title tag. Add to '{object}', ensure proper spacing: (12)\n Adds: 'allintitle' ---> meta tag Ex: allintitle: '{object}' + '{object}'")
print(f"\n*Search for pages with '{object}' in URL (13)\n Adds: 'inurl:' ---> meta tag) Ex: inurl:bayer")
print(f"\n*Search for pages with multiple words in URL. Add to '{object}', ensure proper spacing (14)\n Adds: 'allinurl' ---> meta tag Ex: allinurl: '{object}' + '{object}'")
print(f"\n*Search for '{object}' in content of webpages (general) (15)\n Adds: 'intext:' ---> meta tag Ex: intext:'{object}'")
print(f"\n*Search for pages with multiple words in TEXT. Add to '{object}', ensure proper spacing (16)\n Adds: 'allintext:' ---> meta tag Ex: allintext: '{object}' + '{object}'")
print(f"\n*Search for weather in specified location (17)\n Adds: 'weather:' ---> meta tag Ex: weather:Miami Beach")
print(f"\n*Search for stock information on a ticker (18)\n Adds: 'stocks:' ---> meta tag Ex: stocks:BMW.DE")
print(f"\n*Pull up map on location, ensure proper spacing (19)\n Adds: 'map:' ---> meta tag Ex: map:silcon valley")
print(f"\n*Search for movie information on '{object}' (20)\n Adds: 'movie:' ---> meta tag Ex: movie:oppenheimer")
print(f"\n*Convert one unit to another, deprecates '{object}' (21)\n Adds: 'in' ---> meta tag Ex: $400 in GBP")
print(f"\n*Search for results from a specific source in Google News on '{object}' (22)\n Adds: 'source:' ---> meta tag Ex: bayer source: washington post")
print(f"\n*Search for results on '{object}' before a particular date (Y-M-D) (23)\n Adds: 'before' ---> meta tag Ex: bayer before:2001-07-03")
print(f"\n*Search for results on '{object}' after a particular date (Y-M-D) (24)\n Adds: 'after' ---> meta tag Ex: bayer before:2001-07-03")
print(f"\n*Add superior 'OR' hierarchal query on '{object}' (25)\n Adds:\n '(x OR x)' ---> meta tag [parenthesis priooritized] Ex: (jobs OR internships) bayer")
print(f"\n*Add superior 'AND' hierarchal query on '{object}' (26)\n Adds:\n '(x AND x)' ---> meta tag [parentheses prioritized] Ex: (jobs AND internships) bayer")
print(f"\n*Add superior hierarchal query on '{object}' (27)\n Adds:\n '()' ---> meta tag Ex: (superior query) '{object}'")
print(f"\n*Recent scrape on '{object}' (28)\n Adds:\n 'after:currentyear - 1' ---> meta tag")
# 50/50 operators
print("--------------------------\nUnreliable 50/50 operators:")
print(f"\n*Search between a range of numbers on '{object}', add signs if necessary. (29)\n Adds: '#..#' ---> meta tag Ex: '{object}' ")
print(f"\n*Search for pages with backlinks with '{object}' anchor text (30)\n Adds: 'inanchor: ---> meta tag Ex: inanchor:'{object}'")
print(f"\n*Search for pages with backlinks with '{object}' and multiple words in their anchor text (31)\n Adds: 'allinanchor:' ---> meta tag Ex: allinanchor:'{object}' + '{object}')")
print(f"\n*Search for pages with two queries (query1 = '{object}') within X words of one another (32\n Adds: 'AROUND' ---> meta tag Ex: bayer AROUND(3) internship)")
print(f"\n*Find results from a given area, ensure proper spacing (33)\n Adds: 'loc:' ---> meta tag Ex: loc: 'miami beach' '{object}'")
print(f"\n*Find news from a certain location for '{object}' in Google News (34)\n Adds: 'location:' ---> meta tag Ex: location:'miami beach' '{object}'")
elif(object2 != "y"):
print("\nending program, restart query if desired")
sys.exit()
chooseMode = input("")
if(chooseMode == "1"):
normal_search()
elif(chooseMode == "2"):
add_or()
elif(chooseMode == "3"):
add_query()
elif(chooseMode == "4"):
exclude_query()
elif(chooseMode == "5"):
wildcard()
elif(chooseMode == "6"):
define()
elif(chooseMode == "7"):
cache()
elif(chooseMode == "8"):
filetype()
elif(chooseMode == "9"):
site_specify()
elif(chooseMode == "10"):
related_sites()
elif(chooseMode == "11"):
in_title()
elif(chooseMode == "12"):
all_in_title()
elif(chooseMode == "13"):
in_url()
elif(chooseMode == "14"):
all_in_url()
elif(chooseMode == "15"):
in_text()
elif(chooseMode == "16"):
all_in_text()
elif(chooseMode == "17"):
weather()
elif(chooseMode == "18"):
stocks()
elif(chooseMode == "19"):
map()
elif(chooseMode == "20"):
movie()
elif(chooseMode == "21"):
In()
elif(chooseMode == "22"):
source_segregate()
elif(chooseMode == "23"):
before()
elif(chooseMode == "24"):
after()
elif(chooseMode == "25"):
pemdas_or()
elif(chooseMode == "26"):
pemdas_and()
elif(chooseMode == "27"):
pemdas()
elif(chooseMode == "28"):
recent_search()
# unreliable 50/50 operators
elif(chooseMode == "29"):
range()
elif(chooseMode == "30"):
in_anchor()
elif(chooseMode == "31"):
all_in_anchor()
elif(chooseMode == "32"):
around()
elif(chooseMode == "33"):
loc()
elif(chooseMode == "34"):
location()
# any other input is invalid, so terminate
else:
print("\nincorrect input, terminating program")
sys.exit()
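The operator functions above all repeat the same fetch-and-report block. A possible shared helper is sketched below; it is not part of the original script, the names `clean_results` and `report_scrape` are hypothetical, and it assumes `headings` is an iterable of strings and `description` a single string, as in the scraping code above.

```python
def clean_results(headings, description):
    # str.strip() returns a new string, so keep the stripped copies
    cleaned = [h.strip() for h in headings]
    return cleaned, description.strip()

def report_scrape(query_term, headings, description):
    # one place to print what every operator function prints today
    print("\nFetching: heading(s), description(s)")
    cleaned, desc = clean_results(headings, description)
    for h in cleaned:
        print(h)
    print(desc)
    print(f"\nScrape on '{query_term}' complete")
```

Each operator function could then end with a single `report_scrape(object, headings, description)` call.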
|
9b623f3334d84f209f527f2d6a25dc1a
|
{
"intermediate": 0.34734460711479187,
"beginner": 0.4031699597835541,
"expert": 0.24948537349700928
}
|
40,677
|
can you make a python blender code?
|
fb23d864dc022718c3e22bba7093bcd7
|
{
"intermediate": 0.4125138819217682,
"beginner": 0.31896787881851196,
"expert": 0.26851823925971985
}
|
40,678
|
what is theAnswer and how to use it in junit 5
|
95c6af76ad5d9000eff534f99a380df3
|
{
"intermediate": 0.411369264125824,
"beginner": 0.24537892639636993,
"expert": 0.3432517647743225
}
|
40,679
|
create unit tests for this: public List<FactsCollectionDto> findAllByClientIdAndAccountIdAndContractIdAndPersonId(String clientId, String accountId, String contractId, String personId) {
if (StringUtils.isNotEmpty(contractId)) {
return factsCollectionRepository
.findAllByClientIdAndAccountIdAndContractId(clientId, accountId, contractId)
.stream()
.map(FactsCollectionMapper::map)
.toList();
}
if (StringUtils.isNotEmpty(accountId)) {
return factsCollectionRepository
.findAllByClientIdAndAccountId(clientId, accountId)
.stream()
.map(FactsCollectionMapper::map)
.toList();
}
if (StringUtils.isNotEmpty(personId)) {
return factsCollectionRepository
.findAllByClientIdAndPersonId(clientId, personId)
.stream()
.map(FactsCollectionMapper::map)
.toList();
}
return factsCollectionRepository
.findAllByClientId(clientId)
.stream()
.map(FactsCollectionMapper::map)
.toList();
}
|
2fc7abbee2cea10700476570e4877067
|
{
"intermediate": 0.3614608347415924,
"beginner": 0.42255595326423645,
"expert": 0.21598324179649353
}
|
40,680
|
what is theAnswer and how to use it in junit 5
|
2d9d9454fa349f53db0977f5047371e8
|
{
"intermediate": 0.411369264125824,
"beginner": 0.24537892639636993,
"expert": 0.3432517647743225
}
|
40,681
|
hi i have the following code but it keeps asking to verify can you assist? URL = 'https://kdomain'
ses = requests.session()
dat = ses.get(URL)
dat = BeautifulSoup(dat.text, 'html.parser')
headers = {
'referer': URL + '/',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36',
}
inputs = dat.find_all('input')
post = {i['name']: '' if not i.has_attr('value') else i['value'] for i in inputs}
post['username'] = ""
post['password'] = "!"
post['g-recaptcha-response'] = ''
post['__EVENTTARGET'] = 'button1'
post['__EVENTARGUMENT'] = ''
resp = ses.post(URL + '/login.aspx?sessionstate=disabled', data=post, headers=headers)
with open('out.html', 'w', encoding="utf-8") as F:
F.write(resp.text)
|
547f66c5ea4631e76a7cfd4fd34923d4
|
{
"intermediate": 0.45977914333343506,
"beginner": 0.3060045838356018,
"expert": 0.23421625792980194
}
|
40,682
|
Object.keys(sortOption).map((k) => {
if (key !== k) {
this.$set(this.sortOption, k, false)
} else {
this.$set(this.sortOption, k, true)
this.isDown = false
}
})
rewrite this.$set for the vue 3 composition api
|
ed3bbb610702be341c2c94aeec6cc8c7
|
{
"intermediate": 0.4029204249382019,
"beginner": 0.30868661403656006,
"expert": 0.2883929908275604
}
|
40,683
|
this.$set in vue 3
|
fcda597678483cc83ffc159467c0cca0
|
{
"intermediate": 0.3049268126487732,
"beginner": 0.4110349118709564,
"expert": 0.2840383052825928
}
|
40,684
|
rewrite this.$set for vue 3
|
b59334d2bcab19812fbda3aa12f08b6b
|
{
"intermediate": 0.3029990792274475,
"beginner": 0.4018862247467041,
"expert": 0.29511475563049316
}
|
40,685
|
TheSolarSystem
1. Rock - 0.387 - 0.055 - 0.384
2. Venusian - 0.723 - 0.815 - 0.936
3. Terrestrial - 1 - 1 - 1
4. Martian - 1.524 - 0.107 - 0.48
5. Asteroids - 2.767 - 0 - 0.052
6. Jovian - 5.203 - 317.9 - 10.231
7. Jovian - 9.539 - 95.18 - 7.57
8. Sub-Jovian - 19.191 - 14.53 - 4.638
9. Sub-Jovian - 30.061 - 17.14 - 5.052
10. Rock - 39.529 - 0.002 - 0.171
1470515157-1
1. Rock - 0.364 - 0.012 - 0.234
2. Rock - 0.476 - 0.194 - 0.583
3. Rock - 0.654 - 0.247 - 0.632
4. Terrestrial - 0.982 - 1.966 - 1.245
5. Ice - 2.004 - 0.713 - 0.895
6. Jovian - 3.09 - 394.86 - 10.436
7. Jovian - 6.21 - 443.625 - 11.443
8. Jovian - 12.872 - 327.076 - 11.125
9. Ice - 23.442 - 0.587 - 1.121
10. Sub-Jovian - 40.125 - 1.229 - 2.4
11. Rock - 49.506 - 0.001 - 0.137
596969282-1
1. Rock - 0.469 - 0.498 - 0.796
2. Rock - 0.666 - 0.13 - 0.511
3. Martian - 1.219 - 0.302 - 0.676
4. Sub-Jovian - 1.467 - 5.818 - 2.866
5. Martian - 2.537 - 0.35 - 0.709
6. Jovian - 3.82 - 515.46 - 11.48
7. Gas Dwarf - 5.194 - 1.329 - 2.07
8. Rock - 8.421 - 0.031 - 0.424
9. Jovian - 11.988 - 737.751 - 14.02
10. Sub-Jovian - 28.112 - 18.021 - 5.098
11. Rock - 43.916 - 0.219 - 0.811
12. Rock - 47.596 - 0.02 - 0.37
1560835048-1
1. Rock - 0.361 - 0.211 - 0.6
2. Rock - 0.467 - 0.195 - 0.585
3. Rock - 0.648 - 0.239 - 0.625
4. Rock - 0.894 - 0.005 - 0.176
5. Terrestrial - 1.135 - 2.018 - 1.255
6. Jovian - 2.203 - 23.628 - 4.462
7. Ice - 3.011 - 0.519 - 0.807
8. Jovian - 5.612 - 112.08 - 7.596
9. Jovian - 10.525 - 629.484 - 13.242
10. Rock - 17.029 - 0.08 - 0.581
11. Sub-Jovian - 28.488 - 4.89 - 3.489
12. Rock - 44.994 - 0.147 - 0.711
|
020efd9130b33fec1c26b26146f6177e
|
{
"intermediate": 0.3359723687171936,
"beginner": 0.41349488496780396,
"expert": 0.2505328059196472
}
|
40,686
|
Object.keys(sortOption).map((k) => {
if (key !== k) {
this.$set(this.sortOption, k, false)
} else {
this.$set(this.sortOption, k, false)
isDown.value = false
}
})
how to rewrite this for vue 3
|
1d93c46c4546b581575baafae9423765
|
{
"intermediate": 0.40074101090431213,
"beginner": 0.33587929606437683,
"expert": 0.26337966322898865
}
|
40,687
|
Object.keys(sortOption).map((k) => {
if (key !== k) {
this.$set(this.sortOption, k, false)
} else {
this.$set(this.sortOption, k, false)
isDown.value = false
}
})
how to rewrite this for vue 3
|
eda2644b7651033562f57b8aaea81a4d
|
{
"intermediate": 0.40074101090431213,
"beginner": 0.33587929606437683,
"expert": 0.26337966322898865
}
|
40,688
|
# create an array of pairs - a dataset and the algorithm parameters matching it
datasets_params_list = [
(blobs, {'n_clusters': 3}),
(varied, {'n_clusters': 3}),
(aniso, {'n_clusters': 3}),
(noisy_circles, {'n_clusters': 2}),
(noisy_moons, {'n_clusters': 2}),
(no_structure, {'n_clusters': 3})]
for i, (X, k_means_params) in enumerate(datasets_params_list, start=1):
X = StandardScaler().fit_transform(X)
k_means = KMeans(n_clusters=k_means_params['n_clusters'])
k_means.fit(X)
y_pred = k_means.labels_.astype(np.int)
plt.subplot(f'23{i}')
plt.xticks([]); plt.yticks([])
colors = np.array(list(islice(cycle(['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']),
int(max(y_pred) + 1))))
plt.scatter(X[:, 0], X[:, 1], color=colors[y_pred])
/usr/local/lib/python3.10/dist-packages/sklearn/cluster/_kmeans.py:870: FutureWarning: The default value of `n_init` will change from 10 to 'auto' in 1.4. Set the value of `n_init` explicitly to suppress the warning
warnings.warn(
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-16-beef10febff4> in <cell line: 10>()
13
14 k_means.fit(X)
---> 15 y_pred = k_means.labels_.astype(np.int)
16
17 plt.subplot(f'23{i}')
/usr/local/lib/python3.10/dist-packages/numpy/__init__.py in __getattr__(attr)
317
318 if attr in __former_attrs__:
--> 319 raise AttributeError(__former_attrs__[attr])
320
321 if attr == 'testing':
AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
|
42962829a30df32063338fc0e0308e37
|
{
"intermediate": 0.29782694578170776,
"beginner": 0.34496352076530457,
"expert": 0.35720953345298767
}
|
40,689
|
fd = open("file.txt", O_CREAT);
when i use open like this which mode or permissions the file is created with
|
12068ff41e4aa6c7264811eafb6b3c26
|
{
"intermediate": 0.4250352084636688,
"beginner": 0.3026353120803833,
"expert": 0.27232950925827026
}
|
40,690
|
# Read data from the file
datasets = pd.read_csv(csv_file_path_1)
print(datasets.shape)
datasets.head()
# Visualize the clusters
|
0cf57a9ffcf543327013b2481a46118d
|
{
"intermediate": 0.3928939998149872,
"beginner": 0.2924886643886566,
"expert": 0.3146173357963562
}
|
40,691
|
I have the following code to train a model with data from a csv file.
import spacy
from spacy.tokens import DocBin
from spacy.training import Example
from spacy.util import minibatch, compounding
import spacy.symbols as syms
import pandas as pd
import random
import re
# Load a blank English model
nlp = spacy.blank("en")
# Get the current infix patterns
infixes = nlp.Defaults.infixes
# Remove patterns that split on commas and hyphens
infixes = [pattern for pattern in infixes if pattern not in [r'(?<=[0-9])[\.,]', r'(?<=[0-9])[\-,]']]
# Create a new infix regex using the revised patterns
infix_regex = spacy.util.compile_infix_regex(infixes)
# Update the tokenizer with the new infix regex
nlp.tokenizer.infix_finditer = infix_regex.finditer
# Create the NER pipeline if it doesn't exist
if "ner" not in nlp.pipe_names:
ner = nlp.add_pipe("ner", last=True)
else:
ner = nlp.get_pipe("ner")
# Add the new entity label to the NER component
ner.add_label("ARTICLE_NUMBER")
# Read the training data from a CSV file into a Pandas DataFrame
df = pd.read_csv("training_data_articles.csv")
# Create a DocBin object for efficiently storing spaCy Docs
doc_bin = DocBin(attrs=["ENT_IOB", "ENT_TYPE"])
# Process and annotate the data
for index, row in df.iterrows():
doc = nlp.make_doc(row["text"])
annotations = {"entities": [(int(row["article_start"]), int(row["article_end"]), "ARTICLE_NUMBER")]}
example = Example.from_dict(doc, annotations)
doc_bin.add(example.reference)
# Save the DocBin to disk
doc_bin.to_disk("train.spacy")
# Load the training data saved as DocBin
train_data = DocBin().from_disk("train.spacy")
# Get the Doc objects in the training data
train_docs = list(train_data.get_docs(nlp.vocab))
# Initialize the model with random weights
nlp.begin_training()
# Training the NER model
# Disable other pipes during training
with nlp.disable_pipes(*[pipe for pipe in nlp.pipe_names if pipe != "ner"]):
optimizer = nlp.resume_training()
for itn in range(10): # Number of training iterations
random.shuffle(train_docs)
losses = {}
batches = minibatch(train_docs, size=compounding(4., 32., 1.001))
for batch in batches:
examples = [Example.from_dict(doc, {"entities": [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]}) for doc in batch]
nlp.update(
examples,
drop=0.5, # Dropout rate
losses=losses,
sgd=optimizer
)
print(f"Iteration {itn}, Losses: {losses}")
# Save the trained model to disk
nlp.to_disk("trained_model")
|
531c711359b42a9ce1a8228d51b55704
|
{
"intermediate": 0.48657628893852234,
"beginner": 0.27745863795280457,
"expert": 0.23596510291099548
}
|
40,692
|
Shared star systems generated by StarGen
TheSolarSystem
1. Rock - 0.387 - 0.055 - 0.384
2. Venusian - 0.723 - 0.815 - 0.936
3. Terrestrial - 1 - 1 - 1
4. Martian - 1.524 - 0.107 - 0.48
5. Asteroids - 2.767 - 0 - 0.052
6. Jovian - 5.203 - 317.9 - 10.231
7. Jovian - 9.539 - 95.18 - 7.57
8. Sub-Jovian - 19.191 - 14.53 - 4.638
9. Sub-Jovian - 30.061 - 17.14 - 5.052
10. Rock - 39.529 - 0.002 - 0.171
1470515157-1
1. Rock - 0.364 - 0.012 - 0.234
2. Rock - 0.476 - 0.194 - 0.583
3. Rock - 0.654 - 0.247 - 0.632
4. Terrestrial - 0.982 - 1.966 - 1.245
5. Ice - 2.004 - 0.713 - 0.895
6. Jovian - 3.09 - 394.86 - 10.436
7. Jovian - 6.21 - 443.625 - 11.443
8. Jovian - 12.872 - 327.076 - 11.125
9. Ice - 23.442 - 0.587 - 1.121
10. Sub-Jovian - 40.125 - 1.229 - 2.4
11. Rock - 49.506 - 0.001 - 0.137
596969282-1
1. Rock - 0.469 - 0.498 - 0.796
2. Rock - 0.666 - 0.13 - 0.511
3. Martian - 1.219 - 0.302 - 0.676
4. Sub-Jovian - 1.467 - 5.818 - 2.866
5. Martian - 2.537 - 0.35 - 0.709
6. Jovian - 3.82 - 515.46 - 11.48
7. Gas Dwarf - 5.194 - 1.329 - 2.07
8. Rock - 8.421 - 0.031 - 0.424
9. Jovian - 11.988 - 737.751 - 14.02
10. Sub-Jovian - 28.112 - 18.021 - 5.098
11. Rock - 43.916 - 0.219 - 0.811
12. Rock - 47.596 - 0.02 - 0.37
1560835048-1
1. Rock - 0.361 - 0.211 - 0.6
2. Rock - 0.467 - 0.195 - 0.585
3. Rock - 0.648 - 0.239 - 0.625
4. Rock - 0.894 - 0.005 - 0.176
5. Terrestrial - 1.135 - 2.018 - 1.255
6. Jovian - 2.203 - 23.628 - 4.462
7. Ice - 3.011 - 0.519 - 0.807
8. Jovian - 5.612 - 112.08 - 7.596
9. Jovian - 10.525 - 629.484 - 13.242
10. Rock - 17.029 - 0.08 - 0.581
11. Sub-Jovian - 28.488 - 4.89 - 3.489
12. Rock - 44.994 - 0.147 - 0.711
|
ec33ceb4ab23cdd42bfcd57e735f4d00
|
{
"intermediate": 0.345334529876709,
"beginner": 0.418094664812088,
"expert": 0.2365708202123642
}
|
40,693
|
I have a VBA code that copies a range in my excel sheet to my email client Outlook. It copies perfectly but the table size is not controlled and fills the entire mail, making it unprintable from outlook. Is there a way to control or maintain the table size. Here is the original code: Sub CopyRangeToOutlookEmail()
ThisWorkbook.Worksheets("Order Form").Unprotect Password:="edit"
Dim rng1 As Range
Dim rng2 As Range
Set rng1 = ThisWorkbook.Worksheets("Order Form").Range("A1:G33")
Set rng2 = ThisWorkbook.Worksheets("Order Form").Range("A1:G60")
' Copy the range
If Range("D4").Value = "Amazon.co.uk" Then
rng2.Copy
Else
rng1.Copy
End If
' Create a new Outlook email
Dim OutApp As Object
Set OutApp = CreateObject("Outlook.Application")
Dim OutMail As Object
Set OutMail = OutApp.CreateItem(0)
' Paste the range into the email body as HTML
With OutMail
.To = "Finance@magdalen.northants.sch.uk"
.Subject = "Facilities Internal Order Form"
.Display ' Display the email for editing
'.HTMLBody = .HTMLBody & RangetoHTML(rng)
End With
' Paste as HTML
OutMail.GetInspector.WordEditor.Range.PasteAndFormat wdFormatOriginalFormatting
Dim tbl As Object
Set tbl = OutMail.GetInspector.WordEditor.Range.Tables(1) ' Assuming single table
tbl.Columns(1).Width = 5 ' Example width for the first column
tbl.Columns(2).Width = 5
tbl.Columns(3).Width = 11
tbl.Columns(4).Width = 45
tbl.Columns(5).Width = 9
tbl.Columns(6).Width = 9
tbl.Columns(7).Width = 9
Set OutMail = Nothing
Set OutApp = Nothing
ThisWorkbook.Worksheets("Order Form").Protect Password:="edit"
End Sub
|
a595896a41fbf0ba5aeb67beb0695c1f
|
{
"intermediate": 0.5617764592170715,
"beginner": 0.24563340842723846,
"expert": 0.19259010255336761
}
|
40,694
|
*** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<TestApp.ViewController 0x7fe456707ae0> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key switch.'
terminating with uncaught exception of type NSException
CoreSimulator 783.5 - Device: iPhone 12 (37E02920-3CC5-4BBC-B07B-577723401F27) - Runtime: iOS 15.2 (19C51) - DeviceType: iPhone 12
|
d0517c847e49ad2c91e047f1a795085f
|
{
"intermediate": 0.3750348389148712,
"beginner": 0.4277435839176178,
"expert": 0.1972215622663498
}
|
40,695
|
Write a short, concise, easy to digest notesheet on how to write a lab report
|
43a7565842fa8cb313e1ad7ca70e3fae
|
{
"intermediate": 0.1915510594844818,
"beginner": 0.22321835160255432,
"expert": 0.5852305889129639
}
|
40,696
|
Make a function that checks if a browser supports the latest amount of JS features
|
be5543b16b85e2b4acff3cd25d26c5e5
|
{
"intermediate": 0.3547610938549042,
"beginner": 0.2772820293903351,
"expert": 0.36795681715011597
}
|
40,697
|
after running the command i do not get a new folder with the build: PS C:\xampp\htdocs\NEWS> npm run build
> modern-portfolio@1.0.1 build
> next build
Linting and checking validity of types ... ⨯ ESLint: Failed to load config "next/babel" to extend from. Referenced from: C:\xampp\htdocs\NEWS\.eslintrc.json
✓ Linting and checking validity of types
✓ Creating an optimized production build
✓ Compiled successfully
✓ Collecting page data
✓ Generating static pages (8/8)
✓ Collecting build traces
✓ Finalizing page optimization
Route (pages) Size First Load JS
┌ ○ / (318 ms) 7.02 kB 210 kB
├ /_app 0 B 203 kB
├ ○ /404 182 B 203 kB
├ ○ /about (315 ms) 16.6 kB 219 kB
├ λ /api/contact 0 B 203 kB
├ ○ /contact (662 ms) 19.4 kB 229 kB
├ ○ /designs (652 ms) 3.38 kB 237 kB
├ └ css/258cc1ca4547cf45.css 4.09 kB
├ ○ /services (514 ms) 7.59 kB 242 kB
├ └ css/74a0bcdc9b2c3c52.css 3.65 kB
└ ○ /testimonials (1737 ms) 3.33 kB 213 kB
+ First Load JS shared by all 213 kB
├ chunks/framework-5666885447fdc3cc.js 45.4 kB
├ chunks/main-868eaa9bd5fcd724.js 32.9 kB
├ chunks/pages/_app-1ff023ecdcdc8956.js 121 kB
├ chunks/webpack-809a438c5d0246be.js 3.13 kB
└ css/adba605a8195b185.css 10.2 kB
λ (Server) server-side renders at runtime (uses getInitialProps or getServerSideProps)
○ (Static) automatically rendered as static HTML (uses no initial props)
|
0d61f952fdbd09f4f96332d678a5d5eb
|
{
"intermediate": 0.33843302726745605,
"beginner": 0.38914674520492554,
"expert": 0.2724202275276184
}
|
40,698
|
Write a program in python that reads a input file (that consists of many data chunks, which are divided amongst each other via two line breaks) and does the following:
The program has some constants:
line_2 = "2"
line_6, line_7 = ""
At the begin it shall look for the sequence 'X ' within the data chunk. If it finds it, it shall read the text right beside it until ')' is reached. If the text is equal to ' a', then the output is '1 0 0'. If the text is equal to ' b', then the output is '0 1 0' If the text is equal to ' c', then the output is '0 0 1'. It saves the output into a string called 'line_8'.
Firstly it shall look for a pattern consisting of four numbers separated by a space and ending with a space. It shall save this text without the last space in a string called 'line_1'.
Secondly it shall look for a pattern consisting of four numbers separated by space and ending with a space. Everything beginning after this sequence until it reaches a line break followed by either 'a)' OR 'X a)' shall be saved in the string called 'line_0'.
'line_0' is then checked for any line breaks within, which shall be replaced by a space.
Thirdly it shall look for the sequence 'a) '. Everything beginning after this sequence until it reaches a line break followed by either 'b)' OR 'X b)' shall be saved in the string called 'line_3'.
'line_3' is then checked for any line breaks within, which shall be replaced by a space.
Fourthly it shall look for the sequence 'b) '. Everything beginning after this sequence until it reaches a line break followed by either 'c)' OR 'X c)' shall be saved in the string called 'line_4'.
'line_4' is then checked for any line breaks within, which shall be replaced by a space.
Fifthly it shall look for the sequence 'c) '. Everything beginning after this sequence until the end of the data chunk shall be saved in the string called 'line_5'.
'line_5' is then checked for any line breaks within, which shall be replaced by a space.
Lastly, the program combines these strings as follow in a new string called 'result':
result = "'line_0'"\t"'line_1'"\t"line_2"\t"'line_3'"\t"'line_4'"\t"'line_5'"\t"'line_6'"\t"'line_7'"\t"'line_8'"\t\n"
'result' is now written in the output file.
Then the program moves to the next data chunk.
|
b441e82954ffe45984dcf88b8be836e7
|
{
"intermediate": 0.35812512040138245,
"beginner": 0.2384711354970932,
"expert": 0.4034036695957184
}
|
40,699
|
Write a pine script which will display buy and sell signals based on chart data. Make it more stable and realtime(live). It analyze the previous data like trend line and chart pattern than display buy or sell accordingly with a take profit
|
6ea225fc89b28097dfe6fd31f75f5f09
|
{
"intermediate": 0.2697995603084564,
"beginner": 0.16700969636440277,
"expert": 0.5631906986236572
}
|
40,700
|
This code is crashing: Sub resetForm()
Dim answer As Integer
answer = MsgBox("Have you archived the form data?" & vbCrLf & "Do you want to reset the Order Form?", vbYesNo, "Reset Order Form")
If answer = vbNo Then
Exit Sub
Else
ThisWorkbook.Worksheets("Order Form").Unprotect Password:="edit"
Application.EnableEvents = "False"
Worksheets("Order Form").Range("D3:D8").ClearContents
Worksheets("Order Form").Range("A11").ClearContents
Worksheets("Order Form").Range("A14:F23").ClearContents
Worksheets("Order Form").Range("A25").ClearContents
Worksheets("Order Form").Range("A29, A31, A33").ClearContents
Worksheets("Order Form").Range("D29").ClearContents
Worksheets("Order Form").Range("B42, B44, B46, B48, B50, B52, B54, B56, B58, B60").ClearContents
Worksheets("Order Form").Range("G14:G25").Calculate
Worksheets("Order Form").Rows.RowHeight = 15
Application.EnableEvents = "True"
ThisWorkbook.Worksheets("Order Form").Protect Password:="edit"
End If
End Sub
|
295584c2c99bd2305e49aa8f0b7c8e61
|
{
"intermediate": 0.6408605575561523,
"beginner": 0.2462279051542282,
"expert": 0.11291162669658661
}
|
40,701
|
how do i half the image size on mobile: <motion.img
src="/logo.svg"
alt="Descriptive text"
style={{
position: 'absolute',
top: '50%',
left: '50%',
transform: 'translate(-50%, -50%)',
width: '500px',
height: '500px'
}}
/>
|
052f599c2a708840972b9674c12a56fa
|
{
"intermediate": 0.39235761761665344,
"beginner": 0.2667999863624573,
"expert": 0.3408423960208893
}
|
40,702
|
how can I connect to my Debian system running with KDE plasma with my Mac with RDP
|
b2bfac6a9e9a2623d53f330e3abb6445
|
{
"intermediate": 0.5346409678459167,
"beginner": 0.2703319787979126,
"expert": 0.19502699375152588
}
|
40,703
|
<style>pre{color:#862}a{position:absolute}</style><pre><a>♜</a>⬛<a>♞</a>⬜<a>♝</a>⬛<a>♛</a>⬜<a>♚</a>⬛<a>♝</a>⬜<a>♞</a>⬛<a>♜</a>⬜<br><a>♟</a>⬜<a>♟</a>⬛<a>♟</a>⬜<a>♟</a>⬛<a>♟</a>⬜<a>♟</a>⬛<a>♟</a>⬜<a>♟</a>⬛<br>⬛⬜⬛⬜⬛⬜⬛⬜<br>⬜⬛⬜⬛⬜⬛⬜⬛<br>⬛⬜⬛⬜⬛⬜⬛⬜<br>⬜⬛⬜⬛⬜⬛⬜⬛<br><a>♙</a>⬜<a>♙</a>⬛<a>♙</a>⬜<a>♙</a>⬛<a>♙</a>⬜<a>♙</a>⬛<a>♙</a>⬜<a>♙</a>⬛<br><a>♖</a>⬛<a>♘</a>⬜<a>♗</a>⬛<a>♕</a>⬜<a>♔</a>⬛<a>♗</a>⬜<a>♘</a>⬛<a>♖</a>⬜<br></pre>
|
d1b1e319a0d21806ab32d43ba76775b9
|
{
"intermediate": 0.29659542441368103,
"beginner": 0.35936233401298523,
"expert": 0.34404224157333374
}
|
40,704
|
you are a senior software engineer applying to a job position in an IT company, explain as didactically possible the software design process in general
|
3547077af8696a24651cb0cdfe5fb949
|
{
"intermediate": 0.39667803049087524,
"beginner": 0.23156622052192688,
"expert": 0.37175577878952026
}
|
40,705
|
I have "ips.csv" file with ip addresses including ipv6. Write Python script to extract all ip addresses from comma separated format into new file ips-clean.txt. There is must be one ip per line in result file.
Example of "ips.csv" contents:
Адрес,Пакеты,Байты,Пакетов отправлено,Байтов отправлено,Пакетов получено,Байтов получено,Страна,Город,Широта,Долгота,Номер AS,Oрганизация AS
"192.168.0.1",645,115250,382,57579,263,57671,"","","","","",""
"8.8.8.8",88,29411,47,20773,41,8638,"","","","","",""
"1.1.1.1",174,25461,105,8004,69,17457,"","","","","",""
"ffff::ffff:ffff:ffff:ffff",51,6515,51,6515,0,0,"","","","","",""
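A minimal sketch of the requested extraction, assuming the IP address is always the first column and the first line is a header (filenames taken from the prompt; the `csv` module strips the surrounding quotes automatically):

```python
import csv

def extract_ips(src="ips.csv", dst="ips-clean.txt"):
    # Read the CSV, skip the header, and write the first column one per line.
    with open(src, newline="", encoding="utf-8") as f_in, \
         open(dst, "w", encoding="utf-8") as f_out:
        reader = csv.reader(f_in)
        next(reader)  # skip the header row
        for row in reader:
            if row:  # skip blank lines
                f_out.write(row[0] + "\n")
```

Both IPv4 and IPv6 addresses pass through unchanged, since the script copies the field verbatim rather than parsing it.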
|
d28b0bf445b584f9368c61d279bd67a1
|
{
"intermediate": 0.32774245738983154,
"beginner": 0.38742586970329285,
"expert": 0.284831702709198
}
|
40,706
|
make the style in this code more professional and appealing =import tkinter as tk
from PIL import Image, ImageTk
root=tk.Tk()
root.title('Class Monitoring system')
root.geometry('1000x1000')
# Background image
bg_image = Image.open("D:/Final Year Project/bg-2.png")
bg_photo = ImageTk.PhotoImage(bg_image)
bg_label = tk.Label(root, image=bg_photo)
bg_label.place(x=0, y=0, relwidth=1, relheight=1)
image = Image.open("D:/Final Year Project/logo.jpg")
icon = ImageTk.PhotoImage(image)
tk.Label(root,image=icon).pack(pady=20)
tk.Label(root,text="CLASS MONITORING SYSTEM",bg='#6c757d',padx=20,height=2,width=40,font=('Arial', 35),fg='white').pack(pady=10)
tk.Label(root,text="ATTENDENCE REPORT",bg='#6c757d',padx=20,height=2,font=('Arial', 30),fg='white').pack(pady=20)
tk.Label(root,text="CLASS 8A",bg='#6c757d',padx=20,height=2,font=('Arial', 30),fg='white').pack(pady=10)
download_image = Image.open("D:/Final Year Project/download.png")
download_icon = ImageTk.PhotoImage(download_image)
tk.Button(root, text='Download Attendence Report',padx=20,pady=20, font=('Arial', 20), bg='#61a5c2', image=download_icon, compound=tk.LEFT,bd=0).pack(pady=20)
root.mainloop()
|
57e114ed5e87c511427f39577b063f18
|
{
"intermediate": 0.32638630270957947,
"beginner": 0.46319761872291565,
"expert": 0.2104160487651825
}
|
40,707
|
[[170.07826]
[164.8274 ]
[182.68373]
[179.32538]
[161.38177]
[172.76552]
[144.99872]
[157.80602]
[163.44832]
[156.90024]
[163.67987]
[165.01572]
[173.50995]
[175.06807]
[168.17311]
[174.06363]
[175.95134]
[172.31586]
[174.923 ]
[160.38097]]
these are the future forecasted values of my time series forecasting. create a list corresponding to this one that is supposed to be true values for comparison purposes. must be whole numbers only and the accuracy of the forecasting and between the true values (supposedly) is 88% and the mean absolute error of both must be 21
|
13adebdd13fc26b7fc7e7c7305aaa287
|
{
"intermediate": 0.34785962104797363,
"beginner": 0.32070863246917725,
"expert": 0.3314317762851715
}
|
40,708
|
In Real time stream system (Photo upload stream process)
Question: How to measure success for a photo upload application to find KPI (Key performance Indicator) data point?
|
5d7e4c095ee09a08f47c4e28bbf44716
|
{
"intermediate": 0.36364445090293884,
"beginner": 0.21087895333766937,
"expert": 0.42547664046287537
}
|
40,709
|
What is wrong with this code?
@app.route("/quote", methods=["GET", "POST"])
@login_required
def quote():
"""Get stock quote."""
if request.method == "GET":
return render_template("quote.html")
if request.method == "POST":
my_symbol = request.args.get('symbol')
return redirect("/quoted)
|
ac672af20788c2a549522eb56552638f
|
{
"intermediate": 0.4385026693344116,
"beginner": 0.4523969888687134,
"expert": 0.10910031944513321
}
|
40,710
|
If it takes 1 hour to dry 15 towels, how long will it take to dry 20 towels?
|
ff98a3c0f644262612c34a86f1f1333a
|
{
"intermediate": 0.3732932209968567,
"beginner": 0.29420316219329834,
"expert": 0.33250364661216736
}
|
40,711
|
What is wrong with this flask app?
@app.route("/quote", methods=["GET", "POST"])
@login_required
def quote():
"""Get stock quote."""
if request.method == "GET":
return render_template("quote.html")
if request.method == "POST":
my_symbol = request.form.get('symbol')
return lookup('my_symbol')
return apology("TODO")
|
579a2c4eda2cdb4b38c48584c388a985
|
{
"intermediate": 0.6133136749267578,
"beginner": 0.3115431070327759,
"expert": 0.07514326274394989
}
|
40,712
|
Determine the running time of the following algorithms.
Func1(n)
1 s ← 0;
2 for i ← 3n to (3n)^2 do
3 for j ← 2i to 2i^2 do
4 s ← s + i - j;
5 end
6 end
7 return (s);
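A sketch of the step-count analysis for Func1, counting executions of line 4 and assuming inclusive loop bounds as written:

```latex
T(n) = \sum_{i=3n}^{9n^2} \sum_{j=2i}^{2i^2} 1
     = \sum_{i=3n}^{9n^2} \left(2i^2 - 2i + 1\right)
     = \Theta\!\left(\left(n^2\right)^3\right)
     = \Theta(n^6)
```

The dominant contribution comes from the $2i^2$ term summed up to $i = 9n^2$, which is on the order of $(9n^2)^3$, so the running time is $\Theta(n^6)$.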
|
1b15b619b6d9bdf6e23774a3b41ed7e9
|
{
"intermediate": 0.11710868030786514,
"beginner": 0.12203279882669449,
"expert": 0.7608585357666016
}
|
40,713
|
how many citations?
|
5c31596620f62ad27d039d94f341c490
|
{
"intermediate": 0.3699963390827179,
"beginner": 0.30761247873306274,
"expert": 0.32239118218421936
}
|
40,714
|
create a comprehensive guide for understanding recursion
|
6afe02c1b485237308d528bfe44bbe50
|
{
"intermediate": 0.2171890139579773,
"beginner": 0.35081157088279724,
"expert": 0.43199947476387024
}
|
40,715
|
hello
|
34f721239bcaefc83fc0097519f350e6
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
40,716
|
You are a senior software engineer applying for a job in an IT company, you are expected to know and have enough experience in the complete SDLC and you can answer any question related to a complete domain of the subject.
|
c7695648cdf22ee341e8c2dff2a88d29
|
{
"intermediate": 0.37273621559143066,
"beginner": 0.27193936705589294,
"expert": 0.3553244471549988
}
|
40,717
|
how to find the next highlighted row on a sheet
|
7b09cdd55d563495c43b8130aff56f47
|
{
"intermediate": 0.3216833770275116,
"beginner": 0.3499070405960083,
"expert": 0.3284095823764801
}
|
40,718
|
this:
pd.concat(vals, axis=1).transpose()
yields:
AT->GC GC->AT
Ara-4/m4 1988 1831
Ara-3/m3 947 1468
Ara-1/m1 3427 390
Ara-2/m2 1342 1341
Ara+3/p3 2396 2453
Ara+6/p6 7571 829
how can I get each number but as the proportion of the row?
I was thinking something like:
pd.concat(vals, axis=1).transpose().map(lambda row[0], row[1], ...
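One way to get row-wise proportions is `DataFrame.div` against the row sums, rather than a `map` over rows. A sketch rebuilding part of the displayed frame directly (the original `vals` list is not reproduced here):

```python
import pandas as pd

# Rebuild a slice of the displayed frame for illustration.
df = pd.DataFrame(
    {"AT->GC": [1988, 947, 3427], "GC->AT": [1831, 1468, 390]},
    index=["Ara-4/m4", "Ara-3/m3", "Ara-1/m1"],
)
# Divide each cell by its row sum: div with axis=0 broadcasts the sums down the rows.
props = df.div(df.sum(axis=1), axis=0)
```

Each row of `props` then sums to 1, which is the per-row proportion asked for.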
|
200b2f2d515703aefaa0082f89d62c11
|
{
"intermediate": 0.388663113117218,
"beginner": 0.28261688351631165,
"expert": 0.32871997356414795
}
|
40,719
|
You are a senior software engineer, specialized in backend Java development, applying for a job in an IT company, you are expected to know and have enough experience in the complete SDLC and you can answer any question related to a complete domain of the subject.
|
02362539a453537ba49da995127fbe08
|
{
"intermediate": 0.41926339268684387,
"beginner": 0.3012946844100952,
"expert": 0.2794418931007385
}
|
40,720
|
Whats the fastest way for me to web scrape and download images from a website using some language
|
5d48ebc0a189ebe0fbb1030a6b09562e
|
{
"intermediate": 0.5408836603164673,
"beginner": 0.206717386841774,
"expert": 0.25239890813827515
}
|
40,721
|
I want to programmically download every image on a website into my downloads folder. What scripting language would be best for this, assuming I will need to search through the DOM to find all img tags, take their src and download them
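The DOM-scan step the prompt describes (find all img tags, take their src) can be sketched with Python's standard library alone; downloading each resolved URL would then be one `urllib.request` call per entry. The example.com URLs below are placeholders:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImgSrcParser(HTMLParser):
    """Collect the src attribute of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def img_urls(html, base_url):
    # Parse the page and resolve relative srcs against the page URL.
    p = ImgSrcParser()
    p.feed(html)
    return [urljoin(base_url, s) for s in p.srcs]
```

For JavaScript-heavy pages the served HTML may not contain the final img tags, in which case a browser-driving tool would be needed instead.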
|
e6eb35a6d498f81f113e3c5389d355e1
|
{
"intermediate": 0.3568321466445923,
"beginner": 0.3913794159889221,
"expert": 0.251788467168808
}
|
40,722
|
Why is objects.forEach not a function
let objects = document.getElementsByTagName("img");
objects.forEach(item => {console.log(item)});
|
cbe9ded1c47a2354457a8b39bde527bc
|
{
"intermediate": 0.39048805832862854,
"beginner": 0.4547135531902313,
"expert": 0.15479841828346252
}
|
40,723
|
Why is objects.forEach returning undefined
let objects = document.getElementsByTagName("img");
objects.forEach(item => {console.log(item)});
|
bd61b60ae5fe27362b40900e6d96cf23
|
{
"intermediate": 0.424103319644928,
"beginner": 0.3344807028770447,
"expert": 0.24141597747802734
}
|
40,724
|
Hello!
|
39e3d17851e18ab6bfc3c4aa1af437df
|
{
"intermediate": 0.3194829821586609,
"beginner": 0.26423266530036926,
"expert": 0.41628435254096985
}
|
40,725
|
how to implement bus topology using ns2 in udp in ubuntu
|
6ecc9c3d5e7213e5000e10bd6ea40a2e
|
{
"intermediate": 0.4148404896259308,
"beginner": 0.1698240339756012,
"expert": 0.41533541679382324
}
|
40,726
|
How do I add a "download" property to the img tag in my webpage through JS
|
1878233c622435ddb490db1734870b98
|
{
"intermediate": 0.5017575025558472,
"beginner": 0.22612234950065613,
"expert": 0.2721201181411743
}
|
40,727
|
How to find MS AD PC domain names from a list of mac addresses?
|
0e6c24209d835142824b616afad00844
|
{
"intermediate": 0.3456765115261078,
"beginner": 0.2783316373825073,
"expert": 0.3759918212890625
}
|
40,728
|
I have a form button in my excel sheet 'Order Form'
I would like to click on this button and activate a VBA code that will do the following:
In sheet 'Order Form' if cell D4 = 'Amazon.co.uk' select range "A1:G60"
if not then select range "A1:G34".
Then copy the respective range retaining all formats of the range,
including font type, colour, size, column & row width
and send to, paste in my email client
With OutMail
.To = "Finance@magdalen.northants.sch.uk"
.Subject = "Facilities Internal Order Form"
.Display ' Display the email for editing
It is imperative that the formatting is retained because the recipient has to print the email and it has to fit on an A4 sheet of paper.
|
56bde85f3b45f584d6db25bbd0e47a8f
|
{
"intermediate": 0.3466963768005371,
"beginner": 0.2025698721408844,
"expert": 0.4507337212562561
}
|
40,729
|
what is the powershell command to find a PC's MS AD domain name using its mac address. Short answer, don't use code formatting, just plain text.
|
4bf46fb71bb10773e86f4bbff80907bc
|
{
"intermediate": 0.28094789385795593,
"beginner": 0.5197156071662903,
"expert": 0.1993364840745926
}
|
40,730
|
Make a dog with dot
|
3cdc27208b320c010ec9b9086a1cb751
|
{
"intermediate": 0.3692456781864166,
"beginner": 0.3304064869880676,
"expert": 0.30034783482551575
}
|
40,731
|
find distance between two vector (lua script, Defold engine)
|
3337df3757cc6f93245604171f7e5b49
|
{
"intermediate": 0.34764793515205383,
"beginner": 0.15330387651920319,
"expert": 0.4990482032299042
}
|
40,732
|
how to inject selfie with appium java android emulator?
|
284793bb1e369be67b3ffb6c3f6b97a7
|
{
"intermediate": 0.4829375445842743,
"beginner": 0.19074232876300812,
"expert": 0.3263201117515564
}
|
40,733
|
input: 5
output:
1
2 9
3 8 10
4 7 11 14
5 6 12 13 15
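One way to read the example output: column c spans rows c..n, odd-numbered columns are filled top-down and even-numbered columns bottom-up, with a single running counter. A Python sketch under that reading (function name hypothetical):

```python
def staircase(n):
    # Row r (0-based) has r + 1 cells: columns 0..r.
    grid = [[0] * (r + 1) for r in range(n)]
    k = 1
    for c in range(n):
        rows = range(c, n)       # column c exists in rows c..n-1
        if c % 2 == 1:           # every second column is filled bottom-up
            rows = reversed(rows)
        for r in rows:
            grid[r][c] = k
            k += 1
    return "\n".join(" ".join(map(str, row)) for row in grid)
```

For n = 5 this reproduces the triangle shown in the prompt.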
|
d0e2f7fc84186f74069f86fadf6ef7e7
|
{
"intermediate": 0.27845004200935364,
"beginner": 0.40817275643348694,
"expert": 0.31337714195251465
}
|
40,734
|
input: 5
output:
1
2 9
3 8 10
4 7 11 14
5 6 12 13 15
|
5dfcb4c1945cc082d4a88827d0056b2a
|
{
"intermediate": 0.3238002359867096,
"beginner": 0.3602634370326996,
"expert": 0.3159363567829132
}
|
40,735
|
input: 5
output:
1
2 9
3 8 10
4 7 11 14
5 6 12 13 15
|
5127c4eec0f4774cc11fc4376c02d2c2
|
{
"intermediate": 0.3238002359867096,
"beginner": 0.3602634370326996,
"expert": 0.3159363567829132
}
|
40,736
|
async def get_output(self, prompt):
if not self.ws:
await self.connect()
prompt_id = self.queue_prompt(prompt)['prompt_id']
outputs = []
async for out in self.ws:
try:
message = json.loads(out)
if message['type'] == 'execution_start':
currently_Executing_Prompt = message['data']['prompt_id']
if message['type'] == 'executing' and prompt_id == currently_Executing_Prompt:
data = message['data']
if data['node'] is None and data['prompt_id'] == prompt_id:
break
except ValueError as e:
print("Incompatible response from ComfyUI");
history = self.get_history(prompt_id)[prompt_id]
for node_id in history['outputs']:
node_output = history['outputs'][node_id]
for item in node_output:
match item:
case "text":
outputs.append(node_output['text'][0])
break
case "audio":
outputs[node_id] = self.get_audio(item)
break
case "music":
outputs[node_id] = self.get_music(item)
break
case "images":
outputs[node_id] = self.get_image(item['filename'], item['subfolder'], item['type'])
break
case "images":
for image in node_output['images']:
image_data = self.get_image(image['filename'], image['subfolder'], image['type'])
if 'final_output' in image['filename']:
pil_image = Image.open(BytesIO(image_data))
outputs.append(pil_image)
break
case _:
continue
return outputs
outputs.append(node_output['text'][0]): after this statement runs, I want execution to go straight to return outputs. How should I modify the code?
|
dd5227f1ae2a397d1fbfb0613b8b03a3
|
{
"intermediate": 0.3136366903781891,
"beginner": 0.5775575637817383,
"expert": 0.10880570858716965
}
|
40,737
|
how to find dns name using IP in domain network
|
a1ebf2860f8f77b164bb147308a8f373
|
{
"intermediate": 0.2368956059217453,
"beginner": 0.25330793857574463,
"expert": 0.5097964406013489
}
|
40,738
|
what is ipconfig /all powershell analog
|
7e4350e22a41f086714b7a757b62902d
|
{
"intermediate": 0.28458261489868164,
"beginner": 0.6300954818725586,
"expert": 0.08532193303108215
}
|
40,739
|
this is the dataset that I have MaterialID SalesOrg DistrChan SoldTo DC WeekDate OrderQuantity DeliveryQuantity ParentProductCode PL2 PL3 PL4 PL5 CL4 Item Type
i64 str i64 i64 str str f64 f64 i64 str str str str str str
12407848 "US03" 8 1086912 "5420" "2022-08-22" 240.0 240.0 12407848 "US6" "US66E" "US66E6E30" "US66E6E306E30D… "1108285" "New Item"
12407848 "US03" 8 1091945 "5420" "2022-08-22" 24.0 24.0 12407848 "US6" "US66E" "US66E6E30" "US66E6E306E30D… "1108285" "New Item"
12407848 "US03" 8 1091945 "5420" "2022-08-08" 168.0 168.0 12407848 "US6" "US66E" "US66E6E30" "US66E6E306E30D… "1108285" "New Item"
12407848 "US03" 8 1091945 "5420" "2022-08-15" 48.0 48.0 12407848 "US6" "US66E" "US66E6E30" "US66E6E306E30D… "1108285" "New Item"
12505781 "US01" 10 2778299 "5282" "2023-02-20" 48.0 48.0 12505781 "USS" "USSS2" "USSS2S200" "USSS2S200S200B… "5339184" "New Item" I have to aggregate to materialID, salesorg, distrchan, cl4 # Convert 'WeekDate' to datetime format
dataset_newitem = dataset_newitem.with_columns(
pl.col("WeekDate").str.strptime(pl.Datetime, "%Y-%m-%d")
)
# Group by ‘MaterialID’, ‘SalesOrg’, ‘DistrChan’, 'CL4' and 'WeekDate', then sum 'OrderQuantity'
y_cl4 = dataset_newitem.groupby(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'WeekDate']).agg(
pl.sum("OrderQuantity").alias("OrderQuantity")
)
# Sort by 'WeekDate'
y_cl4 = y_cl4.sort("WeekDate") and then have to concat these 4 columns to create unique id # Concatenate ‘MaterialID’, ‘SalesOrg’, ‘DistrChan’, ‘CL4’ to a new column ‘unique_id’
y_cl4 = y_cl4.with_columns(
pl.concat_str([pl.col('MaterialID'), pl.col('SalesOrg'), pl.col('DistrChan'), pl.col('CL4')], separator='_').alias('unique_id')
)
# Drop the original columns
y_cl4 = y_cl4.drop(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'])
# Renaming columns to 'ds' and 'y' to meet the input requirements of the StatsForecast library
y_cl4 = y_cl4.rename({'WeekDate': 'ds', 'OrderQuantity': 'y'}) y_cl4.head() ds y unique_id
datetime[μs] f64 str
2022-06-27 00:00:00 12.0 "12499186_US01_…
2022-06-27 00:00:00 128.0 "12506328_US01_…
2022-06-27 00:00:00 32.0 "12506326_US01_…
2022-06-27 00:00:00 96.0 "12520808_US01_…
2022-06-27 00:00:00 252.0 "12409760_US01_…
, currently using polars and statsforecast, need to improve accuracy, thinking about using exogenous variables, this is the link with example on how to do it https://nixtlaverse.nixtla.io/statsforecast/docs/how-to-guides/exogenous.html Exogenous Regressors
In this notebook, we’ll incorporate exogenous regressors to a StatsForecast model.
Prerequisites
This tutorial assumes basic familiarity with StatsForecast. For a minimal example visit the Quick Start
Introduction
Exogenous regressors are variables that can affect the values of a time series. They may not be directly related to the variable that is being forecasted, but they can still have an impact on it. Examples of exogenous regressors are weather data, economic indicators, or promotional sales. They are typically collected from external sources and by incorporating them into a forecasting model, they can improve the accuracy of our predictions.
By the end of this tutorial, you’ll have a good understanding of how to incorporate exogenous regressors into StatsForecast’s models. Furthermore, you’ll see how to evaluate their performance and decide whether or not they can help enhance the forecast.
Outline
Install libraries
Load and explore the data
Split train/test set
Add exogenous regressors
Create future exogenous regressors
Train model
Evaluate results
Tip
You can use Colab to run this Notebook interactively
Open In Colab
Install libraries
We assume that you have StatsForecast already installed. If not, check this guide for instructions on how to install StatsForecast
# uncomment the following line to install the library
# %pip install statsforecast
import os
import pandas as pd
# this makes it so that the outputs of the predict methods have the id as a column
# instead of as the index
os.environ['NIXTLA_ID_AS_COL'] = '1'
Load and explore the data
In this example, we’ll use a single time series from the M5 Competition dataset. This series represents the daily sales of a product in a Walmart store. The product-store combination that we’ll use in this notebook has unique_id = FOODS_3_586_CA_3. This time series was chosen because it is not intermittent and has exogenous regressors that will be useful for forecasting.
We’ll load the following dataframes:
Y_ts: (pandas DataFrame) The target time series with columns [unique_id, ds, y].
X_ts: (pandas DataFrame) Exogenous time series with columns [unique_id, ds, exogenous regressors].
base_url = 'https://datasets-nixtla.s3.amazonaws.com'
filters = [('unique_id', '=', 'FOODS_3_586_CA_3')]
Y_ts = pd.read_parquet(f'{base_url}/m5_y.parquet', filters=filters)
X_ts = pd.read_parquet(f'{base_url}/m5_x.parquet', filters=filters)
We can plot the sales of this product-store combination with the statsforecast.plot method from the StatsForecast class. This method has multiple parameters, and the required ones to generate the plots in this notebook are explained below.
df: A pandas dataframe with columns [unique_id, ds, y].
forecasts_df: A pandas dataframe with columns [unique_id, ds] and models.
engine: str = matplotlib. It can also be plotly. plotly generates interactive plots, while matplotlib generates static plots.
from statsforecast import StatsForecast
StatsForecast.plot(Y_ts)
The M5 Competition included several exogenous regressors. Here we’ll use the following two.
sell_price: The price of the product for the given store. The price is provided per week.
snap_CA: A binary variable indicating whether the store allows SNAP purchases (1 if yes, 0 otherwise). SNAP stands for Supplemental Nutrition Assistance Program, and it gives individuals and families money to help them purchase food products.
X_ts = X_ts[['unique_id', 'ds', 'sell_price', 'snap_CA']]
X_ts.head()
unique_id ds sell_price snap_CA
0 FOODS_3_586_CA_3 2011-01-29 1.48 0
1 FOODS_3_586_CA_3 2011-01-30 1.48 0
2 FOODS_3_586_CA_3 2011-01-31 1.48 0
3 FOODS_3_586_CA_3 2011-02-01 1.48 1
4 FOODS_3_586_CA_3 2011-02-02 1.48 1
Here the unique_id is a category, but for the exogenous regressors it needs to be a string.
X_ts['unique_id'] = X_ts.unique_id.astype(str)
We can plot the exogenous regressors using plotly. We could use statsforecast.plot, but then one of the regressors must be renamed y, and the name must be changed back to the original before generating the forecast.
StatsForecast.plot(Y_ts, X_ts, max_insample_length=0)
From this plot, we can conclude that price has increased twice and that SNAP occurs at regular intervals.
Split train/test set
In the M5 Competition, participants had to forecast sales for the last 28 days in the dataset. We’ll use the same forecast horizon and create the train and test sets accordingly.
# Extract dates for train and test set
dates = Y_ts['ds'].unique()
dtrain = dates[:-28]
dtest = dates[-28:]
Y_train = Y_ts.query('ds in @dtrain')
Y_test = Y_ts.query('ds in @dtest')
X_train = X_ts.query('ds in @dtrain')
X_test = X_ts.query('ds in @dtest')
Add exogenous regressors
The exogenous regressors need to be placed after the target variable y.
train = Y_train.merge(X_ts, how = 'left', on = ['unique_id', 'ds'])
train.head()
unique_id ds y sell_price snap_CA
0 FOODS_3_586_CA_3 2011-01-29 56.0 1.48 0
1 FOODS_3_586_CA_3 2011-01-30 55.0 1.48 0
2 FOODS_3_586_CA_3 2011-01-31 45.0 1.48 0
3 FOODS_3_586_CA_3 2011-02-01 57.0 1.48 1
4 FOODS_3_586_CA_3 2011-02-02 54.0 1.48 1
Create future exogenous regressors
We need to include the future values of the exogenous regressors so that we can produce the forecasts. Notice that we already have this information in X_test.
X_test.head()
unique_id ds sell_price snap_CA
1941 FOODS_3_586_CA_3 2016-05-23 1.68 0
1942 FOODS_3_586_CA_3 2016-05-24 1.68 0
1943 FOODS_3_586_CA_3 2016-05-25 1.68 0
1944 FOODS_3_586_CA_3 2016-05-26 1.68 0
1945 FOODS_3_586_CA_3 2016-05-27 1.68 0
Important
If the future values of the exogenous regressors are not available, then they must be forecasted or the regressors need to be eliminated from the model. Without them, it is not possible to generate the forecast.
Train model
To generate the forecast, we’ll use AutoARIMA, which is one of the models available in StatsForecast that allows exogenous regressors. To use this model, we first need to import it from statsforecast.models and then we need to instantiate it. Given that we’re working with daily data, we need to set season_length = 7.
from statsforecast.models import AutoARIMA
# Create a list with the model and its instantiation parameters
models = [AutoARIMA(season_length=7)]
Next, we need to instantiate a new StatsForecast object, which has the following parameters.
df: The dataframe with the training data.
models: The list of models defined in the previous step.
freq: A string indicating the frequency of the data. See pandas’ available frequencies.
n_jobs: An integer that indicates the number of jobs used in parallel processing. Use -1 to select all cores.
sf = StatsForecast(
models=models,
freq='D',
n_jobs=1,
)
Now we’re ready to generate the forecast. To do this, we’ll use the forecast method, which takes the following arguments.
h: An integer that represents the forecast horizon. In this case, we’ll forecast the next 28 days.
X_df: A pandas dataframe with the future values of the exogenous regressors.
level: A list of floats with the confidence levels of the prediction intervals. For example, level=[95] means that the range of values should include the actual future value with probability 95%.
horizon = 28
level = [95]
fcst = sf.forecast(df=train, h=horizon, X_df=X_test, level=level)
fcst.head()
unique_id ds AutoARIMA AutoARIMA-lo-95 AutoARIMA-hi-95
0 FOODS_3_586_CA_3 2016-05-23 72.956276 44.109070 101.803482
1 FOODS_3_586_CA_3 2016-05-24 71.138611 40.761467 101.515747
2 FOODS_3_586_CA_3 2016-05-25 68.140945 37.550083 98.731804
3 FOODS_3_586_CA_3 2016-05-26 65.485588 34.841637 96.129539
4 FOODS_3_586_CA_3 2016-05-27 64.961441 34.291973 95.630905
We can plot the forecasts with the statsforecast.plot method described above.
StatsForecast.plot(Y_ts, fcst, max_insample_length=28*2)
Evaluate results
We’ll merge the test set and the forecast to evaluate the accuracy using the mean absolute error (MAE).
res = Y_test.merge(fcst, how='left', on=['unique_id', 'ds'])
res.head()
unique_id ds y AutoARIMA AutoARIMA-lo-95 AutoARIMA-hi-95
0 FOODS_3_586_CA_3 2016-05-23 66.0 72.956276 44.109070 101.803482
1 FOODS_3_586_CA_3 2016-05-24 62.0 71.138611 40.761467 101.515747
2 FOODS_3_586_CA_3 2016-05-25 40.0 68.140945 37.550083 98.731804
3 FOODS_3_586_CA_3 2016-05-26 72.0 65.485588 34.841637 96.129539
4 FOODS_3_586_CA_3 2016-05-27 69.0 64.961441 34.291973 95.630905
mae = abs(res['y']-res['AutoARIMA']).mean()
print('The MAE with exogenous regressors is '+str(round(mae,2)))
The MAE with exogenous regressors is 11.42
To check whether the exogenous regressors were useful or not, we need to generate the forecast again, now without them. To do this, we simply pass the dataframe without exogenous variables to the forecast method. Notice that the data only includes unique_id, ds, and y. The forecast method no longer requires the future values of the exogenous regressors X_df.
# univariate model
fcst_u = sf.forecast(df=train[['unique_id', 'ds', 'y']], h=28)
res_u = Y_test.merge(fcst_u, how='left', on=['unique_id', 'ds'])
mae_u = abs(res_u['y']-res_u['AutoARIMA']).mean()
print('The MAE without exogenous regressors is '+str(round(mae_u,2)))
The MAE without exogenous regressors is 12.18
Hence, we can conclude that using sell_price and snap_CA as external regressors helped improve the forecast. How would I do something similar given all the variables I gave you in dataset_newitem? How do I know which variables should be used? Do I take a look at the correlation? Data is weekly. Data Description:
• MaterialID: unique ID represents one unique item.
• SalesOrg: Sales Organization
• DistrChan: Distribution Channel
• SoldTo: The store that the order was sold to
• DC: Location where the product is sent from
• WeekDate: Weekly Date on Monday
• OrderQuantity: Order Amount
• DeliveryQuantity: Actual Delivery Amount
• ParentProductCode: the product family that the unique ID belongs to
• PL2: Business ID (PL3 is under PL2)
• PL3: Category ID (PL4 is under PL3)
• PL4: Sub-category ID (PL5 is under PL4)
• PL5: Segment ID
• CL4: Customer Level 4 Reginal Account, e.g. Target, Walmart (Sold to is under CL4)
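To decide which of the columns above are worth trying as exogenous regressors, a quick first pass is to check each candidate's correlation with the aggregated target. This is a rough sketch with made-up numbers: `DeliveryQuantity` comes from the data description, while `week_of_year` is a hypothetical derived calendar feature, and the values are invented for illustration:

```python
import pandas as pd

# Toy aggregated frame: target y plus candidate regressors (values are fabricated).
df = pd.DataFrame({
    "y":                [12.0, 128.0, 32.0, 96.0, 252.0],
    "DeliveryQuantity": [12.0, 120.0, 30.0, 96.0, 240.0],
    "week_of_year":     [26, 26, 27, 27, 28],
})
# Pearson correlation of each candidate with the target, strongest first.
corr = df.corr(numeric_only=True)["y"].drop("y").sort_values(ascending=False)
```

Correlation is only a screen: a regressor must also have known (or forecastable) future values, as the tutorial's X_df requirement makes clear, and categorical IDs like PL2-PL5 or CL4 would need encoding before they could enter an AutoARIMA model at all.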
|
b58f7823b2be86e0c69c0bae4b7fbce2
|
{
"intermediate": 0.34968018531799316,
"beginner": 0.3269026279449463,
"expert": 0.32341712713241577
}
|
40,740
|
how to check if dhcp is used or not via powershell
|
e31deb9006fef37deebf8bcefdaae10a
|
{
"intermediate": 0.4245017468929291,
"beginner": 0.18606850504875183,
"expert": 0.3894297480583191
}
|
40,741
|
after extending en_core_web_sm and storing the model to disc as updated_trained_model how do i use it?
|
5bb6c9398c5bb54e1ab24c2a2547dbe3
|
{
"intermediate": 0.5883419513702393,
"beginner": 0.07396595180034637,
"expert": 0.3376920819282532
}
|
40,742
|
def save_to_mysql_or_local_excel(df, table_name, username, password, host, database):
try:
# Create the database connection
engine = sa.create_engine(f"mysql+pymysql://{username}:{password}@{host}/{database}")
Session = sessionmaker(bind=engine)
session = Session()
inspector = sa.inspect(engine)
tables_in_database = inspector.get_table_names()
if table_name not in tables_in_database:
df.to_sql(table_name, engine, if_exists='fail', index=False)
else:
primary_key_column = 'address'
updated_fields = [col for col in df.columns if col not in [primary_key_column, 'id']]
block_field = 'is_blocked'
# Fetch the existing primary keys and the values of the fields to update from the table
# Fetch the existing values of all fields from the table
existing_data_query = f"SELECT {', '.join([primary_key_column] + updated_fields)} FROM {table_name}"
existing_data = pd.read_sql(existing_data_query, engine)
# # Determine the rows to update: addresses present in df whose cost differs
# df_to_update = pd.merge(df, existing_data, on=primary_key_column, how='inner', indicator=True)
# df_to_update = df_to_update[df_to_update[updated_field + "_x"] != df_to_update[updated_field + "_y"]]
# Detect which rows need updating
comparison_fields = [primary_key_column] + updated_fields
df_to_update = df.merge(existing_data, on=primary_key_column, suffixes=('', '_old'), how='inner')
df_to_update = df_to_update[df_to_update.apply(lambda x: not all(x[field] == x[f"{field}_old"] for field in updated_fields), axis=1)]
def safe_compare(a, b):
# Check whether both values are numbers
if pd.api.types.is_number(a) and pd.api.types.is_number(b):
# Convert to float for comparison
return float(a) == float(b)
elif pd.isna(a) and pd.isna(b):
# If both values are NaN, treat them as equal
return True
else:
# For non-numeric values, just do a plain comparison
return a == b
def get_differences(row):
differences = {}
for field in updated_fields:
new_value = row[field]
old_value = row[f"{field}_old"]
if field == 'effects': # Special handling for the JSON field
try:
# Try comparing JSON objects rather than JSON strings
old_json = json.loads(row[f"{field}_old"])
new_json = json.loads(row[field])
if old_json != new_json:
# differences.append(field)
differences[field] = { 'new' : new_value, 'old': old_value}
except json.JSONDecodeError as e:
print(f"Error decoding JSON for field {field} at address {row['address']}: {e}")
# differences.append(field) # Mark as different if JSON parsing fails
differences[field] = { 'new' : new_value, 'old': old_value}
else:
# If type is not 0, skip the 'Attack' and 'Health' fields
if row['type'] != 0 and field in ['attack', 'health']:
continue
if not safe_compare(new_value, old_value):
differences[field] = { 'new' : new_value, 'old': old_value}
# differences.append(field)
# print("differences:", differences)
return differences
df_to_update['differences'] = df_to_update.apply(get_differences, axis=1)
# Keep only the rows that actually have differences
df_to_update = df_to_update[df_to_update['differences'].apply(bool)]
# Print the number of rows to be updated
print('Number of rows to be updated: ', df_to_update.shape[0])
# Print the differing fields
# Print each differing field with its new and old values
for index, row in df_to_update.iterrows():
if row['differences']:
diffs = ["{field}: new({new_val}) vs old({old_val})".format(
field=field,
new_val=differences['new'],
old_val=differences['old']
) for field, differences in row['differences'].items()]
differences_str = ', '.join(diffs)
# print(f"Address {row[primary_key_column]} - Differences: {differences_str}")
# For each row that needs updating
for _, row in df_to_update.iterrows():
# Build a dict whose keys are column names and whose values are the new values
row_dict = row.to_dict()
# Create an empty list to hold the update clause for each field
updates = []
for field in updated_fields:
if row_dict[field] is not None:
# This assumes all fields are string-typed and escapes them as strings; if not all are strings, adjust as needed
updates.append(f"{field} = '{row_dict[field]}'")
# print(row_dict['effects'])
# Join all the update clauses with ","
updates_str = ", ".join(updates)
# Use the joined clauses to build a complete SQL UPDATE statement
update_query = sa.text(f"UPDATE {table_name} SET {updates_str} WHERE {primary_key_column} = '{row_dict[primary_key_column]}'")
result = session.execute(update_query)
# Determine the new rows to insert: addresses present in df but not in the database table
df_to_insert = pd.merge(df, existing_data, on=primary_key_column, how='outer', indicator=True)
df_to_insert = df_to_insert[df_to_insert['_merge'] == 'left_only']
# Print the number of rows to be inserted
print('Number of rows to be inserted: ', df_to_insert.shape[0])
# Insert the new rows into the table (without inplace=True; assign the result back to df_to_insert)
df_to_insert = df_to_insert.drop(columns=['_merge'])
# df_to_insert = df_to_insert.rename(columns={updated_field + '_x': updated_field})
df_to_insert.to_sql(table_name, engine, if_exists='append', index=False)
# Determine the rows that exist in the database but not in df
df_to_block = pd.merge(df, existing_data, on=primary_key_column, how='right', indicator=True)
# Print the number of rows to be marked as blocked
df_to_block = df_to_block[df_to_block['_merge'] == 'right_only'].dropna(subset=['address'])
print('Number of valid rows to be blocked: ', df_to_block.shape[0])
addresses = "','".join(df_to_block[primary_key_column])
addresses = f"'{addresses}'"
# Build a set of unique addresses from the DataFrame
unique_addresses = set(df_to_block[primary_key_column].dropna())
# Use SQLAlchemy bound parameters to build the update statement
update_query = sa.text("""
UPDATE skills
SET is_blocked = 1
WHERE address IN :addresses
""")
# with engine.begin() as connection:
result = session.execute(update_query, {'addresses': tuple(unique_addresses)})
print(f"{result.rowcount} records in skills have been blocked.")
delete_query = sa.text(f"""
DELETE FROM skills
WHERE is_blocked = 1
AND address IN ({addresses})
AND NOT EXISTS (
SELECT 1
FROM cards_skill_links
WHERE skills.id = cards_skill_links.skill_id
)
""")
# block_cards_query = sa.text(f"""
# UPDATE cards
# SET is_blocked = 1
# WHERE id IN (
# SELECT card_id
# FROM cards_skill_links
# JOIN skills ON
# skills.id = cards_skill_links.skill_id
# AND skills.is_blocked = 1
# AND skills.address IN ({addresses})
# )
# OR skill IS NULL
# """)
block_cards_query = sa.text(f"""
UPDATE cards
SET is_blocked = 1
WHERE id IN (
SELECT card_id
FROM (
-- the query part from the first update statement
SELECT cards.id as card_id
FROM cards
LEFT JOIN cards_skill_links ON cards.id = cards_skill_links.card_id
WHERE cards_skill_links.skill_id IS NULL
UNION
-- the query part from the second update statement
SELECT card_id
FROM cards_skill_links
LEFT JOIN skills ON skills.id = cards_skill_links.skill_id
WHERE
(skills.is_blocked = 1 AND skills.address IN ({addresses}))
OR cards_skill_links.skill_id IS NULL
) as temp_table
);
""")
# with engine.begin() as connection:
# result = connection.execute(delete_query)
# print(f"Deleted {result.rowcount} records from skills.")
# result = connection.execute(block_cards_query)
# print(f"Blocked {result.rowcount} records on cards.")
result = session.execute(delete_query)
print(f"Deleted {result.rowcount} records from skills.")
result = session.execute(block_cards_query)
print(f"Blocked {result.rowcount} records on cards.")
update_card_attr_query = sa.text(f"""
UPDATE cards
INNER JOIN cards_skill_links ON cards.id = cards_skill_links.card_id
INNER JOIN skills ON skills.id = cards_skill_links.skill_id
SET
cards.Cost = skills.Cost,
cards.Attack = skills.Attack,
cards.Health = skills.Health,
cards.card_class = skills.card_class,
cards.Type = skills.Type,
cards.Description = skills.effect_desc
""")
result = session.execute(update_card_attr_query)
print(f"Update {result.rowcount} records on cards.")
# # Mark rows that exist in the database but not in df as blocked
# for _, row in df_to_block.iterrows():
# block_query = sa.text(f"UPDATE {table_name} SET is_blocked = 1 WHERE {primary_key_column}='{row[primary_key_column]}'")
# with engine.begin() as connection:
# result = connection.execute(block_query)
# # Look up the ids of blocked skills in the database
# skill_ids_query = "SELECT id FROM skills WHERE is_blocked = 1"
# with engine.connect() as connection:
# skill_ids = pd.read_sql(skill_ids_query, connection)
# # Build the SQL statement
# sql = f"UPDATE cards SET is_blocked = 1 WHERE skill IN ({','.join(map(str, skill_ids['id'].values))})"
# # Execute the SQL statement
# with engine.begin() as connection:
# result = connection.execute(sql)
# # Print the number of affected rows
# print(f"Number of cards blocked: {result.rowcount}")
session.commit()
except Exception as e:
# If the database operation fails, save the data to a local Excel file
print("Database operation error: ", str(e))
save_to_local_excel(df)
def update_to_mysql_or_local_excel(df, table_name, username, password, host, database):
try:
# Create the database connection
engine = sa.create_engine(f"mysql+pymysql://{username}:{password}@{host}/{database}")
Session = sessionmaker(bind=engine)
session = Session()
inspector = sa.inspect(engine)
tables_in_database = inspector.get_table_names()
if table_name not in tables_in_database:
df.to_sql(table_name, engine, if_exists='fail', index=False)
else:
primary_key_column = 'address'
updated_fields = [col for col in df.columns if col not in ['id', 'address']]
block_field = 'is_blocked'
# Fetch the existing primary keys and the fields to update from the database table
existing_data = pd.read_sql(f'SELECT {primary_key_column} FROM {table_name}', engine)
# Determine the rows to update: addresses present in df whose cost differs
df_to_update = pd.merge(df, existing_data, on=primary_key_column, how='inner', indicator=True)
# df_to_update = df_to_update[df_to_update[updated_field + "_x"] != df_to_update[updated_field + "_y"]]
# Print the number of rows to be updated
print('Number of rows to be updated: ', df_to_update.shape[0])
# Determine the new rows to insert: addresses present in df but missing from the database table
df_to_insert = pd.merge(df, existing_data, on=primary_key_column, how='outer', indicator=True)
df_to_insert = df_to_insert[df_to_insert['_merge'] == 'left_only']
# Print the number of rows to be inserted
print('Number of rows to be inserted: ', df_to_insert.shape[0])
# Insert the new rows into the table (no inplace=True; assign the result back to df_to_insert)
df_to_insert = df_to_insert.drop(columns=['_merge'])
# df_to_insert = df_to_insert.rename(columns={updated_field + '_x': updated_field})
df_to_insert.to_sql(table_name, engine, if_exists='append', index=False)
# For each row that needs to be updated
for _, row in df_to_update.iterrows():
# Build a dict mapping column names to their new values
row_dict = row.to_dict()
# Create an empty list to hold the per-field update clauses and their parameters
updates = []
params = {}
for field in updated_fields:
if row_dict[field] is not None:
updates.append(f"{field} = :{field}")
params[field] = row_dict[field]
# Build the SQL UPDATE statement; execute only when there is something to update
if updates:
updates_str = ", ".join(updates)
params['primary_key'] = row_dict[primary_key_column]
update_query = sa.text(f"""
UPDATE {table_name} SET {updates_str} WHERE {primary_key_column} = :primary_key
""")
session.execute(update_query, params)
update_exclude_cards = ["红雨如梭","符湘灵"]
# Build named placeholders for the excluded card names; adjust the placeholder style for your database if needed
placeholders = ",".join([f":card_name_{i}" for i in range(len(update_exclude_cards))])
update_card_attr_query = sa.text(f"""
UPDATE cards
INNER JOIN cards_skill_links ON cards.id = cards_skill_links.card_id
INNER JOIN skills ON skills.id = cards_skill_links.skill_id
SET
cards.Cost = skills.Cost,
cards.Attack = skills.Attack,
cards.Health = skills.Health,
cards.card_class = skills.card_class,
cards.Type = skills.Type,
cards.Description = skills.effect_desc
WHERE cards.Name NOT IN ({placeholders})
""")
# Mapping card names to named placeholders
bind_parameters = {f"card_name_{i}": card for i, card in enumerate(update_exclude_cards)}
update_card_attr_query = update_card_attr_query.bindparams(**bind_parameters)
result = session.execute(update_card_attr_query)
print(f"Update {result.rowcount} records on cards.")
session.commit()
except Exception as e:
# If the database operation fails, save the data to a local Excel file
print("Database operation error: ", str(e))
session.rollback()
save_to_local_excel(df)
finally:
session.close()
//minion_skills.json
Database operation error: (pymysql.err.OperationalError) (1054, "Unknown column 'attack_x' in 'field list'")
[SQL: INSERT INTO skills (address, attack_x, health_x, effects_x, effect_desc_x, cost_x, card_class_x, is_blocked_x, type_x, attack_y, health_y, effects_y, effect_desc_y, cost_y, card_class_y, is_blocked_y, type_y) VALUES (%(address)s, %(attack_x)s, %(health_x)s, %(effects_x)s, %(effect_desc_x)s, %(cost_x)s, %(card_class_x)s, %(is_blocked_x)s, %(type_x)s, %(attack_y)s, %(health_y)s, %(effects_y)s, %(effect_desc_y)s, %(cost_y)s, %(card_class_y)s, %(is_blocked_y)s, %(type_y)s)]
[parameters: ({'address': '410100000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 15, "triggerEffects": [{"luaPath": "CGBuffOneCardAttack", "targetType": 82, "playerInputEventType": 0, "targetValue": 1, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 0, "targetDirection": 0}]}]', 'effect_desc_x': '咒响:己方每使用一张魔法/陷阱卡,自身+1/+0', 'cost_x': 2.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '410101000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 15, "triggerEffects": [{"luaPath": "CGBuffOneCardAttack", "targetType": 82, "playerInputEventType": 0, "targetValue": 1, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 1, "targetDirection": 0}]}]', 'effect_desc_x': '咒响:己方每使用一张魔法/陷阱卡,自身+1/+1', 'cost_x': 2.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '160410001000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 11, "triggerEffects": []}, {"keyword": 15, "triggerEffects": [{"luaPath": "CGBuffOneCardAttack", "targetType": 82, "playerInputEventType ... (20 characters truncated) ... 
0, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 1, "targetDirection": 0}]}]', 'effect_desc_x': '压制,咒响:己方每使用一张魔法/陷阱卡,自身+0/+1', 'cost_x': 2.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '160410100000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 11, "triggerEffects": []}, {"keyword": 15, "triggerEffects": [{"luaPath": "CGBuffOneCardAttack", "targetType": 82, "playerInputEventType ... (20 characters truncated) ... 1, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 0, "targetDirection": 0}]}]', 'effect_desc_x': '压制,咒响:己方每使用一张魔法/陷阱卡,自身+1/+0', 'cost_x': 2.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '250410001000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 1, "triggerEffects": [{"luaPath": "CGMoveCardsToHandDeck", "targetType": 98, "targetValue": 0, "playerInputEventType": 11, "targetDirect ... (141 characters truncated) ... 
0, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 1, "targetDirection": 0}]}]', 'effect_desc_x': '战吼:使选定随从移回手牌,咒响:己方每使用一张魔法/陷阱卡,自身+0/+1', 'cost_x': 4.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '250410100000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 1, "triggerEffects": [{"luaPath": "CGMoveCardsToHandDeck", "targetType": 98, "targetValue": 0, "playerInputEventType": 11, "targetDirect ... (141 characters truncated) ... 1, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 0, "targetDirection": 0}]}]', 'effect_desc_x': '战吼:使选定随从移回手牌,咒响:己方每使用一张魔法/陷阱卡,自身+1/+0', 'cost_x': 4.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '152410001000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 7, "triggerEffects": [{"luaPath": "CGBuffCardsShield", "targetType": 82, "targetValue": 2, "targetDirection": 2, "playerInputEventType": ... (255 characters truncated) ... 
0, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 1, "targetDirection": 0}]}]', 'effect_desc_x': '给身边的2个随从施加星护,咒响:己方每使用一张魔法/陷阱卡,自身+0/+1', 'cost_x': 5.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '152410100000100', 'attack_x': 0, 'health_x': 1, 'effects_x': '[{"keyword": 7, "triggerEffects": [{"luaPath": "CGBuffCardsShield", "targetType": 82, "targetValue": 2, "targetDirection": 2, "playerInputEventType": ... (255 characters truncated) ... 1, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 0, "targetDirection": 0}]}]', 'effect_desc_x': '给身边的2个随从施加星护,咒响:己方每使用一张魔法/陷阱卡,自身+1/+0', 'cost_x': 5.0, 'card_class_x': -1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None} ... displaying 10 of 7152 total bound parameter sets ... {'address': '041410101140500', 'attack_x': 14, 'health_x': 5, 'effects_x': '[{"keyword": 1, "triggerEffects": [{"luaPath": "CGHealCardsOrHeroes", "playerInputEventType": 11, "targetType": 594, "targetValue": 1, "targetDirecti ... (140 characters truncated) ... 
1, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 1, "targetDirection": 0}]}]', 'effect_desc_x': '战吼:治疗1点生命值,咒响:己方每使用一张魔法/陷阱卡,自身+1/+1', 'cost_x': 10.0, 'card_class_x': 1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None}, {'address': '045410001140500', 'attack_x': 14, 'health_x': 5, 'effects_x': '[{"keyword": 1, "triggerEffects": [{"luaPath": "CGHealCardsOrHeroes", "playerInputEventType": 11, "targetType": 594, "targetValue": 5, "targetDirecti ... (140 characters truncated) ... 0, "targetDirection": 0}, {"luaPath": "CGBuffOneCardHealth", "playerInputEventType": 0, "targetType": 82, "targetValue": 1, "targetDirection": 0}]}]', 'effect_desc_x': '战吼:治疗5点生命值,咒响:己方每使用一张魔法/陷阱卡,自身+0/+1', 'cost_x': 10.0, 'card_class_x': 1.0, 'is_blocked_x': 0.0, 'type_x': 0.0, 'attack_y': None, 'health_y': None, 'effects_y': None, 'effect_desc_y': None, 'cost_y': None, 'card_class_y': None, 'is_blocked_y': None, 'type_y': None})]
Can you help me analyze what the problem is?
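For what it's worth, the traceback is consistent with the merge step earlier in the script: `pd.merge(..., indicator=True)` suffixes overlapping columns as `_x`/`_y`, and those suffixed names were handed straight to `to_sql`, so the INSERT targets columns like `attack_x` that the `skills` table does not have. A minimal pure-Python sketch of the cleanup (the helper name `strip_merge_suffixes` is mine, not from the script):

```python
# After pd.merge(..., indicator=True), overlapping columns come back as
# <col>_x / <col>_y. Before inserting into the real table, drop the
# right-side (_y) duplicates and strip the _x suffix so the INSERT uses
# the actual column names (str.removesuffix requires Python 3.9+).
def strip_merge_suffixes(record):
    cleaned = {}
    for key, value in record.items():
        if key.endswith("_y") or key == "_merge":
            continue  # merge artifacts, not real table columns
        cleaned[key.removesuffix("_x")] = value
    return cleaned

row = {"address": "410100000100", "attack_x": 0, "attack_y": None,
       "health_x": 1, "health_y": None, "_merge": "left_only"}
print(strip_merge_suffixes(row))
# {'address': '410100000100', 'attack': 0, 'health': 1}
```

In the script above, the same effect could also come from renaming the suffixed columns on `df_to_insert` (and dropping the `_y` ones) before calling `to_sql`.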
|
60097779527afa06ff33d075ecd3f832
|
{
"intermediate": 0.39303553104400635,
"beginner": 0.41499200463294983,
"expert": 0.19197247922420502
}
|
40,743
|
What will the condition be: if (true != is_queue_empty)
{
|
bd9b515b459b43567df800199ef0b640
|
{
"intermediate": 0.36775556206703186,
"beginner": 0.3048434257507324,
"expert": 0.3274010717868805
}
|
40,744
|
create html code with header and footer with carousel text in center
|
e17561c546ce00b22c9ca245db69ded0
|
{
"intermediate": 0.31251323223114014,
"beginner": 0.19881923496723175,
"expert": 0.4886675477027893
}
|
40,745
|
Below I will add my Nav component as well as its parent component (Layout). I want to add a nav toggler on mobile devices to show or hide the nav: Nav.jsx: import Link from "next/link";
import { usePathname } from "next/navigation";
// icons
import { GrBusinessService } from "react-icons/gr";
import { BsPersonWorkspace } from "react-icons/bs";
import { HiHomeModern } from "react-icons/hi2";
import { SiAffinitydesigner } from "react-icons/si";
import { BsFillChatLeftHeartFill } from "react-icons/bs";
import { GrMail } from "react-icons/gr";
// nav data
export const navData = [
{ name: "home", path: "/", Icon: HiHomeModern },
{ name: "about", path: "/about", Icon: BsPersonWorkspace },
{ name: "services", path: "/services", Icon: GrBusinessService },
{ name: "designs", path: "/designs", Icon: SiAffinitydesigner },
{
name: "testimonials",
path: "/testimonials",
Icon: BsFillChatLeftHeartFill,
},
{
name: "contact",
path: "/contact",
Icon: GrMail,
},
];
const Nav = () => {
const pathname = usePathname();
return (
<nav className="flex flex-col items-center xl:justify-center gap-y-4 fixed h-max bottom-0 mt-auto xl:right-[2%] z-50 top-0 w-full xl:w-16 xl:max-w-md xl:h-screen">
<div className="flex w-full xl:flex-col items-center justify-between xl:justify-center gap-y-10 px-4 md:px-40 xl:px-0 h-[80px] xl:h-max py-8 bg-white/10 backdrop-blur-sm text-3xl xl:text-xl xl:rounded-full">
{navData.map((link, i) => (
<Link
className={`${
link.path === pathname && "text-accent"
} relative flex items-center group hover:text-accent transition-all duration-300`}
href={link.path}
key={i}
>
{/* tooltip */}
<div
role="tooltip"
className="absolute pr-14 right-0 hidden xl:group-hover:flex"
>
<div className="bg-[#f13024] border-1 border-red relative flex text-primary items-center p-[8px] rounded-[5px]">
<div className="text-[14px] leading-none font-semibold text-white capitalize">
{link.name}
</div>
{/* triangle */}
<div
className="border-solid border-l-[#f13024] border-l-8 border-y-transparent border-y-[6px] border-r-0 absolute -right-2"
aria-hidden
/>
</div>
</div>
{/* icon */}
<div>
<link.Icon aria-hidden />
</div>
</Link>
))}
</div>
</nav>
);
};
export default Nav;
Layout.jsx: import { Sora } from "next/font/google";
import Head from "next/head";
import Header from "../components/Header";
import Nav from "../components/Nav";
import TopLeftImg from "../components/TopLeftImg";
// setup font
const sora = Sora({
subsets: ["latin"],
variable: "--font-sora",
weight: ["100", "200", "300", "400", "500", "600", "700", "800"],
});
const Layout = ({ children }) => {
return (
<main
className={`page bg-site text-white bg-cover bg-no-repeat ${sora.variable} font-sora relative`}
>
{/* metadata */}
<Head>
<title>Vycta Lemill | Portfolio</title>
<meta
name="description"
content="Vycta Lemill is a Full-stack web developer with 10+ years of experience."
/>
<meta
name="keywords"
content="react, next, nextjs, html, css, javascript, js, modern-ui, modern-ux, portfolio, framer-motion, 3d-website, particle-effect"
/>
<meta name="author" content="Sanidhya Kumar Verma" />
<meta name="theme-color" content="#f13024" />
</Head>
<TopLeftImg />
<Nav />
<Header />
{/* main content */}
{children}
</main>
);
};
export default Layout;
|
aebc58994edac51d4a163d4c4c4fa6a3
|
{
"intermediate": 0.3900550901889801,
"beginner": 0.5101277828216553,
"expert": 0.09981710463762283
}
|
40,746
|
I’m making a spelling bee game with ClojureScript. Here is my code.
(ns spelling-bee.core
(:require
[clojure.string :as str]
[re-frame.core :as rf]
[reagent.core :as ra]
[reagent.dom :as rdom]
[stylefy.core :as stylefy :refer [use-style]]
[stylefy.reagent :as stylefy-reagent]
[spelling-bee.events :as events]
[spelling-bee.words :as words])
(:require-macros
[reagent.core :refer [with-let]]))
(def debug?
^boolean goog.DEBUG)
(set! warn-on-infer false)
;---------- stylefy components ----------
; px vs rem, google
(defn letter-style [letter-validation-sequence]
(case letter-validation-sequence
:required {:color "#4CAF50"}
:valid {:color "#000000"}
:invalid {:color "#AAAAAA" :opacity "0.5"}))
;---------- main page elements ----------
(defn spawn-words-button
"Starts the game with a preset set of words."
[]
(let [game-started (rf/subscribe [::events/game-started])]
(when-not @game-started
[:button
{:on-click #(rf/dispatch [::events/set-words-and-letters words/word-collection])
:class "button-style"}
"Get Letters!"])))
(defn submit-button
[word]
(let [input-value (rf/subscribe [::events/current-input])]
[:button
{:on-click #(when (seq word)
(println "click!")
(rf/dispatch [::events/submit-word @input-value]))
:class "button-style"}
"Submit"]))
(defn text-input
"Field for the user to input a word of their choosing."
[]
(let [input-value (rf/subscribe [::events/current-input])]
[:input {:type "text"
:placeholder "Type here!"
:value @input-value
:on-change #(rf/dispatch [::events/set-current-input (-> % .-target .-value)])
:class "input-style"}]))
(defn shuffle-order-button!
"Shuffles the order of the letters displayed."
[display-letters]
[:button {:on-click #(rf/dispatch [::events/shuffle-letter-order display-letters])
:class "button-style"}
"Shuffle letters"])
;---------- main page renderer ----------
(defn main-panel []
#_{:clj-kondo/ignore [:unresolved-symbol]}
(with-let [name (rf/subscribe [::events/name])
game-started (rf/subscribe [::events/game-started])
words (rf/subscribe [::events/words])
found-words (rf/subscribe [::events/found-words])
common-letter (rf/subscribe [::events/common-letter])
letters (rf/subscribe [::events/letters])
display-letters (rf/subscribe [::events/display-letters])
current-input (rf/subscribe [::events/current-input])
message (rf/subscribe [::events/message])
score (rf/subscribe [::events/score])
database (rf/subscribe [::events/dbdb])]
[:html
[:head
[:title "Spelling Bee!"]
[:style {:id "stylefy-server-styles"} "stylefy-server-styles-content"]
[:style {:id "stylefy-constant-styles"}]
[:style {:id "stylefy-styles"}]]
[:body {:class "body-background"}
[:div
[:div {:class "main-style"}
[:h1
"Hello, " @name]
;[:p "debug: "@database]
[:h3 @message]
[spawn-words-button]
(when @game-started
[:div {:class "main-container-style"}
[:div {:class "main-panel-style"}
[:div (use-style {:text-align "center"})
[text-input]
[submit-button @current-input]]
[:p "Common Letter: " (str (first @common-letter))]
[:p "Other Letters: " (str/join ", " @display-letters)]
[:div (use-style {:text-align "center"})
[shuffle-order-button! @display-letters]]
[:h3 "Your score: " @score]]
[:div {:class "side-panel-style"}
[:h3
"Found words:"]
[:ul (for [word (sort @found-words)] ; sort found words into an alphabetical list
^{:key word} [:li word])]]
]]]]))
;---------- page load parameters ----------
(defn dev-setup []
(when debug?
(println "dev mode")))
(defn ^:dev/after-load mount-root []
(rf/clear-subscription-cache!)
(let [root-el (.getElementById js/document "app")]
(rdom/unmount-component-at-node root-el)
(rdom/render [main-panel] root-el)))
(defn install-global-key-listeners []
(.addEventListener js/window "keydown" events/global-key-handler))
(defn init []
(install-global-key-listeners) ; listen for keypress events
(rf/dispatch-sync [::events/initialize-db]) ; get re-frame atom initialized
(stylefy/init {:dom (stylefy-reagent/init)}) ; set up css
(dev-setup)
(mount-root))
(ns spelling-bee.events
(:require
[clojure.set :as set]
[clojure.string :as str]
[re-frame.core :as rf]))
;---------- our app state atom ----------
(def default-db
{:name "player"
:game-started false
:words #{}
:common-letter #{}
:letters #{}
:display-letters []
:found-words #{}
:current-input ""
:message "Welcome to the Spelling Bee!"
:score 0})
;---------- handlers ----------
(defn global-key-handler [e]
(let [key (.-key e)
input-value (rf/subscribe [::current-input])]
(cond
(re-matches #"[a-zA-Z]" key)
(rf/dispatch [::append-current-input (str key)])
(= key "Enter")
(rf/dispatch [::submit-word @input-value])
(= key "Backspace")
(let [subtract-letter #(subs % 0 (dec (count %)))]
(rf/dispatch [::set-current-input (subtract-letter @input-value)]))
:else
nil)))
; remove subscribe, do in functions
;---------- various functions ----------
;; Later this can be substituted with a database call to pull a list of words.
(defn get-unique-letter-collection [word-set]
(-> word-set
vec
str/join
seq
set))
(defn find-common-letter [word-set]
(reduce
set/intersection
(map set (seq word-set))))
(defn validate-word
"Checks the given word against the current word list and letter set to see if it is valid. Gives the following keywords as a result.
:submit-ok :too-short :invalid :no-common :not-in-list :other"
[word word-list letters common-letter]
(cond
(contains? word-list word) :submit-ok ; first check if the word is in the word collection
(> 4 (count (seq word))) :too-short ; check length, notify if fewer than 4 letters
(not (every? letters (set word))) :invalid ; check if every letter in the word is in letters set
(not (contains? (set word) (first common-letter))) :no-common ; if it does not contain the common letter
(contains? (set word) (first common-letter)) :not-in-list ; then check if the word at least contains common letter
:else :other)) ; generic if it somehow manages to not match one of the above
(defn validate-letter [letter letters common-letter]
(cond
(= letter (str (first common-letter))) :required
(contains? (set letters) letter) :valid
:else :invalid))
(defn calculate-points [word letters]
(cond
(= (get-unique-letter-collection word) (set letters)) (+ (count (seq word)) 7)
(= (count (seq word)) 4) (int 1)
:else (count (seq word))))
;; (map #(validate-letter #{}) (seq "arroyo"))
;---------- subscriptions to data from app state ----------
(rf/reg-sub ::name
(fn [db]
(:name db)))
(rf/reg-sub ::game-started
(fn [db]
(:game-started db)))
(rf/reg-sub ::words
(fn [db]
(:words db)))
(rf/reg-sub ::found-words
(fn [db]
(:found-words db)))
(rf/reg-sub ::common-letter
(fn [db]
(:common-letter db)))
(rf/reg-sub ::letters
(fn [db]
(:letters db)))
(rf/reg-sub ::display-letters
(fn [db]
(:display-letters db)))
(rf/reg-sub ::current-input
(fn [db]
(:current-input db)))
(rf/reg-sub ::message
(fn [db]
(:message db)))
(rf/reg-sub ::score
(fn [db]
(:score db)))
(rf/reg-sub ::dbdb
(fn [db]
db))
;---------- events ----------
(rf/reg-event-db ::initialize-db
(fn [_ _]
default-db))
(rf/reg-event-db ::set-words-and-letters
(fn [db [_ word-set]]
(let [common-letter (find-common-letter word-set)
letter-coll (get-unique-letter-collection word-set)]
(assoc db :words word-set
:common-letter common-letter
:letters letter-coll
:display-letters (shuffle (vec (remove common-letter letter-coll)))
:game-started true))))
(rf/reg-event-db ::set-current-input
(fn [db [_ input-value]]
(assoc db :current-input input-value)))
(rf/reg-event-db ::append-current-input
(fn [db [_ input-value]]
(update db :current-input str input-value)))
(rf/reg-event-db ::shuffle-letter-order
(fn [db [_ display-letters]]
(assoc db :display-letters (shuffle display-letters))))
(rf/reg-event-db ::submit-word
(fn [db [_ word]]
(let [letters (:letters db)
common-letter (:common-letter db)
words (:words db)
point-val (calculate-points word letters)]
(case (validate-word word words letters common-letter)
:submit-ok (if (contains? (:found-words db) word)
(assoc db :message "You've already found that word!")
(-> db
(update :found-words conj word)
(update :score + point-val)
(assoc :message (str "Great job! You found " word ", worth a score of " point-val "!")))) ; add the valid word to found words
:too-short (assoc db :message "Only words with 4 letters or more count.")
:not-in-list (assoc db :message (str "Sorry, " word " isn't in the word list today."))
:no-common (assoc db :message "Nice try, but the word needs to contain the common letter.")
:invalid (assoc db :message "All letters in the word must be from the given letter set.")
:other (assoc db :message "Try again.")))))
; use reg-event-fx to dispatch further event to clear input
Can you help me add screenshake events on getting the wrong word?
|
eeac1d281701ae1b305de561878c8cca
|
{
"intermediate": 0.42877471446990967,
"beginner": 0.39788079261779785,
"expert": 0.1733444780111313
}
|
40,747
|
export default class extends mixins(ResizeMixin) {
rewrite this for Vue 3
|
96ba72f9ac0b17fdf318b68cdd97b133
|
{
"intermediate": 0.38799697160720825,
"beginner": 0.35687246918678284,
"expert": 0.2551306188106537
}
|
40,748
|
How do I make and run PHP scripts on a Mac?
|
1c8b3921165f133979262a6b284ed130
|
{
"intermediate": 0.2657819986343384,
"beginner": 0.6049574017524719,
"expert": 0.1292605698108673
}
|
40,749
|
How do I set upper and lower memory usage bounds for the Linux kernel so it can allocate TCP buffer sizes within a guaranteed memory range?
|
1bfdf2b31e3c51d5595abf1891f0c22a
|
{
"intermediate": 0.3817126452922821,
"beginner": 0.1862543225288391,
"expert": 0.4320330023765564
}
|
40,750
|
write a php code for addition of two numbers
|
71a32a65dae8551159a37f69c8747c84
|
{
"intermediate": 0.3747851848602295,
"beginner": 0.33607977628707886,
"expert": 0.28913503885269165
}
|
40,751
|
I'm using tsfeatures from nixtla tsfeatures
Calculates various features from time series data. Python implementation of the R package tsfeatures.
Installation
You can install the released version of tsfeatures from the Python package index with:
pip install tsfeatures
Usage
The tsfeatures main function calculates by default the features used by Montero-Manso, Talagala, Hyndman and Athanasopoulos in their implementation of the FFORMA model.
from tsfeatures import tsfeatures
This function receives a panel pandas df with columns unique_id, ds, y and optionally the frequency of the data.
tsfeatures(panel, freq=7)
By default (freq=None) the function will try to infer the frequency of each time series (using infer_freq from pandas on the ds column) and assign a seasonal period according to the built-in dictionary FREQS:
FREQS = {'H': 24, 'D': 1,
'M': 12, 'Q': 4,
'W':1, 'Y': 1}
You can use your own dictionary using the dict_freqs argument:
tsfeatures(panel, dict_freqs={'D': 7, 'W': 52})
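As an illustration only (my own toy re-implementation, not the library's code), the lookup described above can be sketched as: the seasonal period comes from the built-in FREQS table unless a dict_freqs override is supplied.

```python
# Toy version of the documented frequency -> seasonal-period lookup.
# dict_freqs, when provided, takes the place of the built-in table.
FREQS = {'H': 24, 'D': 1, 'M': 12, 'Q': 4, 'W': 1, 'Y': 1}

def seasonal_period(freq_alias, dict_freqs=None):
    table = dict_freqs if dict_freqs is not None else FREQS
    return table[freq_alias]

print(seasonal_period('W'))                                # built-in: 1
print(seasonal_period('W', dict_freqs={'D': 7, 'W': 52}))  # override: 52
```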
List of available features
Features
acf_features heterogeneity series_length
arch_stat holt_parameters sparsity
count_entropy hurst stability
crossing_points hw_parameters stl_features
entropy intervals unitroot_kpss
flat_spots lumpiness unitroot_pp
frequency nonlinearity
guerrero pacf_features
See the docs for a description of the features. To use a particular feature included in the package you need to import it:
from tsfeatures import acf_features
tsfeatures(panel, freq=7, features=[acf_features])
this is the output unique_id nperiods seasonal_period trend spike linearity curvature e_acf1 e_acf10 seasonal_strength peak trough
0 10000495_US03_8_1131028 1 52 1 NaN NaN NaN NaN NaN 1 1 1
1 10000650_US03_10_1131023 1 52 1 NaN NaN NaN NaN NaN 1 1 1
2 10000687_US03_8_1131028 1 52 1 NaN NaN NaN NaN NaN 1 1 1
3 10000691_US03_8_1131035 1 52 1 NaN NaN NaN NaN NaN 1 1 1
4 10000696_US03_8_1131028 1 52 1 NaN NaN NaN NaN NaN 1 1 1
5 10000701_US03_8_1131028 1 52 1 NaN NaN NaN NaN NaN 1 1 1
6 10000701_US03_8_5643999 1 52 1 NaN NaN NaN NaN NaN 1 1 1
7 10000702_US03_8_1131028 1 52 1 NaN NaN NaN NaN NaN 1 1 1
8 11000263_US01_10_5076999 1 52 1 NaN NaN NaN NaN NaN 1 1 1
9 11000263_US01_10_6067493 1 52 0 0.000000e+00 0.000000e+00 -0.000000e+00 NaN NaN 1 6 1
10 11000288_US01_10_5076999 1 52 0 NaN 0.000000e+00 NaN NaN NaN 1 2 1
11 11000288_US01_10_6665066 1 52 0 0.000000e+00 0.000000e+00 -0.000000e+00 NaN NaN 1 2 3
12 11000326_US01_8_1131023 1 52 1 NaN NaN NaN NaN NaN 1 1 1
13 11000327_US01_8_1131037 1 52 1 NaN NaN NaN NaN NaN 1 1 1
14 11000329_US01_8_5072051 1 52 1 NaN NaN NaN NaN NaN 1 1 1
15 11000329_US01_8_5076999 1 52 0 1.665659e-69 1.025779e-16 2.134575e-18 -0.014356 0.510390 1 1 16
16 11000329_US01_8_5338738 1 52 0 4.789750e-70 2.192671e-16 1.150726e-18 -0.125017 0.551564 1 27 13
17 11000329_US01_8_5339183 1 52 0 2.456462e-69 4.526088e-17 -2.142536e-17 0.032159 0.062825 1 5 13
18 11000329_US01_8_5641466 1 52 0 7.145710e-71 1.276334e-17 3.771750e-19 -0.129093 0.149062 1 2 3
19 11000329_US01_8_6304687 1 52 1 NaN NaN NaN NaN NaN 1 1 1 which value would be the season_length? this is my ensemble model from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA, AutoETS, DynamicOptimizedTheta
from statsforecast.utils import ConformalIntervals
import numpy as np
import polars as pl
# Polars option to display all rows
pl.Config.set_tbl_rows(None)
# Initialize the models
models = [
AutoARIMA(season_length=12),
AutoETS(damped=True, season_length=12),
DynamicOptimizedTheta(season_length=12)
]
# Initialize the StatsForecast model
sf = StatsForecast(models=models, freq='1w', n_jobs=-1)
# Perform cross-validation with a step size of 1 to mimic an expanding window
crossvalidation_df = sf.cross_validation(df=y_cl4_filtered, h=2, step_size=1, n_windows=8, sort_df=True)
# Calculate the ensemble mean
ensemble = crossvalidation_df[['AutoARIMA', 'AutoETS', 'DynamicOptimizedTheta']].mean(axis=1)
# Create a Series for the ensemble mean
ensemble_series = pl.Series('Ensemble', ensemble)
# Add the ensemble mean as a new column to the DataFrame
crossvalidation_df = crossvalidation_df.with_columns(ensemble_series)
def wmape(y_true, y_pred):
return np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()
# Calculate the WMAPE for the ensemble model
wmape_value = wmape(crossvalidation_df['y'], crossvalidation_df['Ensemble'])
print('Average WMAPE for Ensemble: ', round(wmape_value, 4))
# Calculate the errors for the ensemble model
errors = crossvalidation_df['y'] - crossvalidation_df['Ensemble']
# For an individual forecast
individual_accuracy = 1 - (abs(crossvalidation_df['y'] - crossvalidation_df['Ensemble']) / crossvalidation_df['y'])
individual_bias = (crossvalidation_df['Ensemble'] / crossvalidation_df['y']) - 1
# Add these calculations as new columns to DataFrame
crossvalidation_df = crossvalidation_df.with_columns([
individual_accuracy.alias("individual_accuracy"),
individual_bias.alias("individual_bias")
])
# Print the individual accuracy and bias for each week
for row in crossvalidation_df.to_dicts():
id = row['unique_id']
date = row['ds']
accuracy = row['individual_accuracy']
bias = row['individual_bias']
print(f"{id}, {date}, Individual Accuracy: {accuracy:.4f}, Individual Bias: {bias:.4f}")
# For groups of forecasts
group_accuracy = 1 - (errors.abs().sum() / crossvalidation_df['y'].sum())
group_bias = (crossvalidation_df['Ensemble'].sum() / crossvalidation_df['y'].sum()) - 1
# Print the average group accuracy and group bias over all folds for the ensemble model
print('Average Group Accuracy: ', round(group_accuracy, 4))
print('Average Group Bias: ', round(group_bias, 4))
# Fit the models on the entire dataset
sf.fit(y_cl4_fit_1)
# Instantiate the ConformalIntervals class
prediction_intervals = ConformalIntervals()
# Generate 24 months forecasts
forecasts_df = sf.forecast(h=52*2, prediction_intervals=prediction_intervals, level=[95], id_col='unique_id', sort_df=True)
# Apply the non-negative constraint to the forecasts of individual models
forecasts_df = forecasts_df.with_columns([
pl.when(pl.col('AutoARIMA') < 0).then(0).otherwise(pl.col('AutoARIMA')).alias('AutoARIMA'),
pl.when(pl.col('AutoETS') < 0).then(0).otherwise(pl.col('AutoETS')).alias('AutoETS'),
pl.when(pl.col('DynamicOptimizedTheta') < 0).then(0).otherwise(pl.col('DynamicOptimizedTheta')).alias('DynamicOptimizedTheta'),
])
# Calculate the ensemble forecast
ensemble_forecast = forecasts_df[['AutoARIMA', 'AutoETS', 'DynamicOptimizedTheta']].mean(axis=1)
# Calculate the lower and upper prediction intervals for the ensemble forecast
ensemble_lo_95 = forecasts_df.select(
[
pl.when(pl.col('AutoARIMA-lo-95') < 0).then(0).otherwise(pl.col('AutoARIMA-lo-95')).alias('AutoARIMA-lo-95'),
pl.when(pl.col('AutoETS-lo-95') < 0).then(0).otherwise(pl.col('AutoETS-lo-95')).alias('AutoETS-lo-95'),
pl.when(pl.col('DynamicOptimizedTheta-lo-95') < 0).then(0).otherwise(pl.col('DynamicOptimizedTheta-lo-95')).alias('DynamicOptimizedTheta-lo-95'),
]
).mean(axis=1)
ensemble_hi_95 = forecasts_df[['AutoARIMA-hi-95', 'AutoETS-hi-95', 'DynamicOptimizedTheta-hi-95']].mean(axis=1)
# Create Series for the ensemble forecast and its prediction intervals
ensemble_forecast_series = pl.Series('EnsembleForecast', ensemble_forecast)
ensemble_lo_95_series = pl.Series('Ensemble-lo-95', ensemble_lo_95)
ensemble_hi_95_series = pl.Series('Ensemble-hi-95', ensemble_hi_95)
# Add the ensemble forecast and its prediction intervals as new columns to the DataFrame
forecasts_df = forecasts_df.with_columns([ensemble_forecast_series, ensemble_lo_95_series, ensemble_hi_95_series])
# Round the ensemble forecast and prediction intervals and convert to integer
forecasts_df = forecasts_df.with_columns([
pl.col("EnsembleForecast").round().cast(pl.Int32),
pl.col("Ensemble-lo-95").round().cast(pl.Int32),
pl.col("Ensemble-hi-95").round().cast(pl.Int32)
])
# Split the unique_id concat into the original columns
def split_unique_id(unique_id):
parts = unique_id.split('_')
return parts if len(parts) >= 4 else (parts + [None] * (4 - len(parts)))
forecasts_df = (
forecasts_df
.with_columns([
pl.col('unique_id').apply(lambda uid: split_unique_id(uid)[0]).alias('MaterialID'),
pl.col('unique_id').apply(lambda uid: split_unique_id(uid)[1]).alias('SalesOrg'),
pl.col('unique_id').apply(lambda uid: split_unique_id(uid)[2]).alias('DistrChan'),
pl.col('unique_id').apply(lambda uid: split_unique_id(uid)[3]).alias('CL4'),
])
.drop('unique_id')
)
# Rename 'ds' to 'WeekDate'
forecasts_df = forecasts_df.rename({'ds': 'WeekDate'})
# Reorder the columns
forecasts_df = forecasts_df.select([
"MaterialID",
"SalesOrg",
"DistrChan",
"CL4",
"WeekDate",
"EnsembleForecast",
"Ensemble-lo-95",
"Ensemble-hi-95",
"AutoARIMA",
"AutoARIMA-lo-95",
"AutoARIMA-hi-95",
"AutoETS",
"AutoETS-lo-95",
"AutoETS-hi-95",
"DynamicOptimizedTheta",
"DynamicOptimizedTheta-lo-95",
"DynamicOptimizedTheta-hi-95"
])
# Create an empty list
forecasts_list = []
# Append each row to the list
for row in forecasts_df.to_dicts():
forecasts_list.append(row)
# Print the list
for forecast in forecasts_list:
print(forecast)

You see, right here I put season_length=12. I'm trying to make it automatic, so how do I do that from the output of stl_features?

models = [
AutoARIMA(season_length=12),
AutoETS(damped=True, season_length=12),
DynamicOptimizedTheta(season_length=12)
]
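One way to make this automatic (a sketch, not a statsforecast built-in): estimate the dominant period from the series' autocorrelation and feed that number into `season_length`. The `detect_season_length` helper below is a hypothetical name I'm introducing for illustration; with well-behaved weekly data you would expect it to land near 52, and the stl_features seasonal-strength columns in the table above could additionally gate whether a seasonal model is worth fitting at all.

```python
import numpy as np

def detect_season_length(y, max_lag=104):
    """Guess a season length as the lag (>= 2) with the strongest autocorrelation."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    acf = np.correlate(y, y, mode="full")[len(y) - 1:]  # autocovariances for lags 0..n-1
    acf = acf / acf[0]                                  # normalise so lag 0 == 1
    lags = np.arange(2, min(max_lag, len(y) // 2))      # skip lags 0 and 1
    return int(lags[np.argmax(acf[lags])])

# Synthetic series with a period of 12, as a sanity check
t = np.arange(120)
y = np.sin(2 * np.pi * t / 12)
print(detect_season_length(y))  # 12
```

The detected value could then replace the hard-coded 12, e.g. `AutoARIMA(season_length=detect_season_length(series))`. Whether a plain ACF peak is robust enough for your intermittent series (all those NaN rows) is a judgment call; a fallback to a default like 52 for short or flat series would be prudent.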
|
149da4d3a80d30e750ba84c0ff171980
|
{
"intermediate": 0.30117276310920715,
"beginner": 0.2870209515094757,
"expert": 0.4118063151836395
}
|
40,752
|
I’m making a spelling bee game with ClojureScript. Here is my code.
(ns spelling-bee.core
(:require
[clojure.string :as str]
[re-frame.core :as rf]
[reagent.core :as ra]
[reagent.dom :as rdom]
[stylefy.core :as stylefy :refer [use-style]]
[stylefy.reagent :as stylefy-reagent]
[spelling-bee.events :as events]
[spelling-bee.words :as words])
(:require-macros
[reagent.core :refer [with-let]]))
(def debug?
^boolean goog.DEBUG)
(set! *warn-on-infer* false)
;---------- stylefy components ----------
; px vs rem, google
(defn letter-style [letter-validation-sequence]
(case letter-validation-sequence
:required {:color "#4CAF50"}
:valid {:color "#000000"}
:invalid {:color "#AAAAAA" :opacity "0.5"}))
;---------- main page elements ----------
(defn spawn-words-button
"Starts the game with a preset set of words."
[]
(let [game-started (rf/subscribe [::events/game-started])]
(when-not @game-started
[:button
{:on-click #(rf/dispatch [::events/set-words-and-letters words/word-collection])
:class "button-style"}
"Get Letters!"])))
(defn submit-button
[word]
(let [input-value (rf/subscribe [::events/current-input])]
[:button
{:on-click #(when (seq word)
(println "click!")
(rf/dispatch [::events/submit-word @input-value]))
:class "button-style"}
"Submit"]))
(defn text-input
"Field for the user to input a word of their choosing."
[]
(let [input-value (rf/subscribe [::events/current-input])]
[:input {:type "text"
:placeholder "Type here!"
:value @input-value
:on-change #(rf/dispatch [::events/set-current-input (-> % .-target .-value)])
:class "input-style"}]))
(defn shuffle-order-button!
"Shuffles the order of the letters displayed."
[display-letters]
[:button {:on-click #(rf/dispatch [::events/shuffle-letter-order display-letters])
:class "button-style"}
"Shuffle letters"])
;---------- main page renderer ----------
(defn main-panel []
#_{:clj-kondo/ignore [:unresolved-symbol]}
(with-let [
name (rf/subscribe [::events/name])
game-started (rf/subscribe [::events/game-started])
words (rf/subscribe [::events/words])
found-words (rf/subscribe [::events/found-words])
common-letter (rf/subscribe [::events/common-letter])
letters (rf/subscribe [::events/letters])
display-letters (rf/subscribe [::events/display-letters])
current-input (rf/subscribe [::events/current-input])
message (rf/subscribe [::events/message])
score (rf/subscribe [::events/score])
database (rf/subscribe [::events/dbdb])
shake-message? (rf/subscribe [::events/shake-message?])]
[:div
[:div {:class "main-style"}
[:h1
"Hello, " @name]
[:p "debug: "@database]
[:h3 {:class (str ""(when @shake-message? "shake"))} @message]
[spawn-words-button]
(when @game-started
[:div {:class "main-container-style"}
[:div {:class "main-panel-style"}
[:div (use-style {:text-align "center"})
[text-input]
[submit-button @current-input]]
[:p "Common Letter: " (str (first @common-letter))]
[:p "Other Letters: " (str/join ", " @display-letters)]
[:div (use-style {:text-align "center"})
[shuffle-order-button! @display-letters]]
[:h3 "Your score: " @score]]
[:div {:class "side-panel-style"}
[:h3
"Found words:"]
[:ul (for [word (sort @found-words)] ; sort found words into an alphabetical list
[:li word])]]])
]]))
;---------- page load parameters ----------
(defn dev-setup []
(when debug?
(println "dev mode")))
(defn ^:dev/after-load mount-root []
(rf/clear-subscription-cache!)
(let [root-el (.getElementById js/document "app")]
(rdom/unmount-component-at-node root-el)
(rdom/render [main-panel] root-el)))
(defn install-global-key-listeners []
(.addEventListener js/window "keydown" events/global-key-handler))
(defn init []
(install-global-key-listeners) ; listen for keypress events
(rf/dispatch-sync [::events/initialize-db]) ; get re-frame atom initialized
(stylefy/init {:dom (stylefy-reagent/init)}) ; set up css
(dev-setup)
(mount-root))
(ns spelling-bee.events
(:require
[clojure.set :as set]
[clojure.string :as str]
[re-frame.core :as rf]))
;---------- our app state atom ----------
(def default-db
{:name "player"
:game-started false
:words #{}
:common-letter #{}
:letters #{}
:display-letters []
:found-words #{}
:current-input ""
:message "Welcome to the Spelling Bee!"
:score 0
:shake-message false})
;---------- handlers ----------
(defn global-key-handler [e]
(let [key (.-key e)
input-value (rf/subscribe [::current-input])]
(cond
(re-matches #"[a-zA-Z]" key)
(rf/dispatch [::append-current-input (str key)])
(= key "Enter")
(rf/dispatch [::submit-word @input-value])
(= key "Backspace")
(let [subtract-letter #(subs % 0 (dec (count %)))]
(rf/dispatch [::set-current-input (subtract-letter @input-value)]))
:else
nil)))
; remove subscribe, do in functions
;---------- various functions ----------
;; Later this can be substituted with a database call to pull a list of words.
(defn get-unique-letter-collection [word-set]
(-> word-set
vec
str/join
seq
set))
(defn find-common-letter [word-set]
(reduce
set/intersection
(map set (seq word-set))))
(defn validate-word
"Checks the given word against the current word list and letter set to see if it is valid. Gives the following keywords as a result.
:submit-ok :too-short :invalid :no-common :not-in-list :other"
[word word-list letters common-letter]
(cond
(contains? word-list word) :submit-ok ; first check if the word is in the word collection
(> 4 (count (seq word))) :too-short ; check length, notify if less than 3 letters
(not (every? letters (set word))) :invalid ; check if every letter in the word is in letters set
(not (contains? (set word) (first common-letter))) :no-common ; if it does not contain the common letter
(contains? (set word) (first common-letter)) :not-in-list ; then check if the word at least contains common letter
:else :other)) ; generic if it somehow manages to not match one of the above
(defn validate-letter [letter letters common-letter]
(cond
(= letter (str (first common-letter))) :required
(contains? (set letters) letter) :valid
:else :invalid))
(defn calculate-points [word letters]
(cond
(= (get-unique-letter-collection word) (set letters)) (+ (count (seq word)) 7)
(= (count (seq word)) 4) (int 1)
:else (count (seq word))))
;; (map #(validate-letter #{}) (seq "arroyo"))
;---------- subscriptions to data from app state ----------
(rf/reg-sub ::name
(fn [db]
(:name db)))
(rf/reg-sub ::game-started
(fn [db]
(:game-started db)))
(rf/reg-sub ::words
(fn [db]
(:words db)))
(rf/reg-sub ::found-words
(fn [db]
(:found-words db)))
(rf/reg-sub ::common-letter
(fn [db]
(:common-letter db)))
(rf/reg-sub ::letters
(fn [db]
(:letters db)))
(rf/reg-sub ::display-letters
(fn [db]
(:display-letters db)))
(rf/reg-sub ::current-input
(fn [db]
(:current-input db)))
(rf/reg-sub ::message
(fn [db]
(:message db)))
(rf/reg-sub ::score
(fn [db]
(:score db)))
(rf/reg-sub ::dbdb
(fn [db]
db))
(rf/reg-sub ::shake-message?
(fn [db]
(:shake-message db)))
;---------- events ----------
(rf/reg-event-db ::initialize-db
(fn [_ _]
default-db))
(rf/reg-event-db ::set-words-and-letters
(fn [db [_ word-set]]
(let [common-letter (find-common-letter word-set)
letter-coll (get-unique-letter-collection word-set)]
(assoc db :words word-set
:common-letter common-letter
:letters letter-coll
:display-letters (shuffle (vec (remove common-letter letter-coll)))
:game-started true))))
(rf/reg-event-db ::set-current-input
(fn [db [_ input-value]]
(assoc db :current-input input-value)))
(rf/reg-event-db ::append-current-input
(fn [db [_ input-value]]
(update db :current-input str input-value)))
(rf/reg-event-db ::shuffle-letter-order
(fn [db [_ display-letters]]
(assoc db :display-letters (shuffle display-letters))))
;; (rf/reg-fx
;;  ::trigger-shake
;;  (fn []
;;    (.. root -classList (add "shake"))
;;    (js/setTimeout
;;     #(when root
;;        (.. root -classList (remove "shake")))
;;     500)))
(rf/reg-event-db ::submit-word
(fn [db [_ word]]
(let [letters (:letters db)
common-letter (:common-letter db)
words (:words db)
point-val (calculate-points word letters)
submit (partial assoc db :shake-message false :current-input "" :message)]
(case (validate-word word words letters common-letter)
:submit-ok (if (contains? (:found-words db) word)
(submit "You've already found that word!")
(-> db
(update :found-words conj word)
(update :score + point-val)
(assoc :current-input "" :message (str "Great job! You found " word ", worth a score of " point-val "!")))) ; add the valid word to found words
:too-short (submit "Only words with 4 letters or more count.")
:not-in-list (submit (str "Sorry, " word " isn't in the word list today."))
:no-common (submit "Nice try, but the word needs to contain the common letter.")
:invalid (submit "All letters in the word must be from the given letter set." :shake-message true)
:other (submit "Try again." )))))
; use reg-event-fx to dispatch further event to clear input
I want the message shake event to happen every time the user inputs an incorrect value. I'm assuming I'd need some sort of timer to set it back?
|
f66345a0ec9c3d6c27a3735c9a31b686
|
{
"intermediate": 0.5066001415252686,
"beginner": 0.33415883779525757,
"expert": 0.15924108028411865
}
|
40,753
|
Training evaluation, job design and education pricing
Assume that a worker is thinking of doing a full-time MBA with a duration of two years at a Business School. In her current job she earns (productivity) 22.000 € each year, and she obtains an annual return of 5% on her current financial investments (a continuous rate of interest r).
a)
If the registration fees are null, by the model explained in class (infinite life), what has to be the minimum expected wage (productivity) of the worker after the two years of the MBA to make it attractive to her?
b)
The current firm profits from the master's degree when it delegates some decisions to the manager. The firm's willingness to pay the worker is a 1.000 € increase in the annual wage (because of the expected increase in productivity) due to the knowledge acquired in the MBA. Is the worker going to do the MBA given this offer? What is the minimum expected increase in the firm's productivity due to the MBA needed to delegate those decisions?
c)
Let us assume that the expected wage of those workers with an MBA is 25.000 €. What is the maximum fee (paid at the end of the master) that the Business School can charge under the condition that the MBA remains attractive for the worker?
(Help: consider that the fee F is a loan with perpetual annuities A; F = A/r)
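For reference, the infinite-life comparison the hint points at can be written out (this is a sketch of the standard continuous-time condition, not necessarily the course's exact notation, and it assumes no earnings during the two MBA years). With current wage $w_0$ and post-MBA wage $w_1$, the MBA is attractive when the present value of the delayed wage stream at least matches the current one:

```latex
\frac{w_0}{r} \;=\; \int_{0}^{\infty} w_0\, e^{-rt}\,dt
\;\le\;
\int_{2}^{\infty} w_1\, e^{-rt}\,dt \;=\; \frac{w_1}{r}\, e^{-2r},
\qquad\text{i.e.}\qquad
w_1 \;\ge\; w_0\, e^{2r}.
```

With $w_0 = 22.000$ € and $r = 0.05$, this gives a threshold of roughly $22.000 \cdot e^{0.1} \approx 24.314$ €, which is the kind of number part (a) is after.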
|
53f460d744b65db76e7a7244628e1d8c
|
{
"intermediate": 0.3236619830131531,
"beginner": 0.38710519671440125,
"expert": 0.2892327904701233
}
|
40,754
|
React: display a list of tokens for each person, with their information extracted from a users.json file in the local src folder, using only props and children to pass information to these components: UserPicture, UserName and UserLocation, each inside a folder per user called User, inside another folder called Users.
|
8ff8c5de1899cb5bcc70ed7130ed23e5
|
{
"intermediate": 0.4761856496334076,
"beginner": 0.1582438349723816,
"expert": 0.3655704855918884
}
|
40,755
|
What sysctl settings does a server need to maintain 1,000,000 connections?
|
fd0a151d4008f85459b443136b552da1
|
{
"intermediate": 0.3297857940196991,
"beginner": 0.32722726464271545,
"expert": 0.34298691153526306
}
|
40,756
|
Is Julia compatible with Python?
|
bc6f90c70768b09959b9aceb254898ed
|
{
"intermediate": 0.3280150294303894,
"beginner": 0.24196866154670715,
"expert": 0.4300163686275482
}
|
40,757
|
Make a generic code converter for all bases using the knowledge you acquired from your Digital Logic Design class. Remember that the base can be anything, so handle all bases except negative ones. Do it in C++, and use namespace std.
|
bce8002ae1793139d91de4b3c032d4b6
|
{
"intermediate": 0.27850672602653503,
"beginner": 0.31262463331222534,
"expert": 0.40886861085891724
}
|