| row_id (int64, 0–48.4k) | init_message (string, 1–342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
48,192
|
How do I fix this error?
Error updating data: Cannot connect to host dev.uust-time.ru:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')]
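This error style suggests an aiohttp client that cannot find a CA bundle to verify the server certificate. A minimal sketch of one common fix, assuming the third-party `certifi` package is available and the request is made with `aiohttp` (both are assumptions; the message alone does not confirm the library):

```python
import ssl

def make_ssl_context(cafile=None):
    """Build an SSLContext with certificate verification enabled.

    Pass cafile=certifi.where() (certifi is an assumed dependency) to
    supply an up-to-date Mozilla CA bundle instead of disabling
    verification, which would silence the error but break security.
    """
    return ssl.create_default_context(cafile=cafile)

# Hypothetical usage with aiohttp (illustrative, not part of the sketch):
#   import certifi, aiohttp
#   ctx = make_ssl_context(certifi.where())
#   async with session.get("https://dev.uust-time.ru", ssl=ctx) as resp:
#       ...
```

Pointing the context at an explicit CA file fixes the "unable to get local issuer certificate" case where the system trust store is missing or stale.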
|
ef6e623b4c33756039ec6d367a65a814
|
{
"intermediate": 0.3364465534687042,
"beginner": 0.28136956691741943,
"expert": 0.38218387961387634
}
|
48,193
|
write java code: Create concurrent Refinable Hashset data structure with 1 Million nodes and perform the basic operations (contains, insert, and remove) by varying the number of threads from 1 to 20 (i.e., 1, 2, 4, 6, 8, 10, 12 ... 20) and for different workloads (100C-0I-0D, 90C-9I-1D, 50C-25I-25D, 30C-35I-35D, 0C-50I-50D). Prefill the data structure with 50% of elements and duration of each run is 10 seconds.
To measure the throughput, consider the average of FIVE runs, and also measure
the cache misses per 1000 operations using the perf tool.
|
01ca5536e1670275c5431ff2247e411a
|
{
"intermediate": 0.5024796724319458,
"beginner": 0.24879270792007446,
"expert": 0.24872764945030212
}
|
48,194
|
SELECT tasks.id, tasks.task,
array_to_string(array_agg(solution.text_of_solution), ' ') as solution,
array_to_string(array_agg(topics.name), ' ') as topics
FROM tasks
LEFT OUTER JOIN solution ON tasks.id = solution.tasks_id
LEFT OUTER JOIN tasks_topics ON tasks.id = tasks_topics.task_id
LEFT OUTER JOIN topics ON tasks_topics.topics_id = topics.id
GROUP BY tasks.id;
|
cfaaa24a8bac297bde27046d96752e75
|
{
"intermediate": 0.297050416469574,
"beginner": 0.3383449614048004,
"expert": 0.3646046221256256
}
|
48,195
|
Hello, I need an underwater shader for Unity URP. Can you help?
|
4b8498a9680f41ebf96565472a950751
|
{
"intermediate": 0.42600446939468384,
"beginner": 0.2983543276786804,
"expert": 0.27564123272895813
}
|
48,196
|
fix this script so it can run correctly for a backtesting strategy: import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
mt5.shutdown()
symbol = 'XAUUSDm'
timeframe = mt5.TIMEFRAME_M5
rates = mt5.copy_rates_from_pos(symbol, timeframe, 0, 10000)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s')
df.set_index('time', inplace=True)
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
def macd(df):
macd_result = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9)
macd = macd_result[0]
signal = macd_result[1]
if macd.iloc[-1] > signal.iloc[-1]:
return "Buy"
elif macd.iloc[-1] < signal.iloc[-1]:
return "Sell"
def twin_range_filter(df):
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df.loc[:, 'TRF'] = TRF
df.loc[:, 'long_signal'] = np.append([False], long_signal)
df.loc[:, 'short_signal'] = np.append([False], short_signal)
if df.iloc[-1]['long_signal']:
return "Buy"
elif df.iloc[-1]['short_signal']:
return "Sell"
def detect_engulfing(df):
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return "Buy"
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return "Sell"
else:
return None
df['signal'] = None
for row in range(len(df)):
macd_signal = macd(df.iloc[:row+1])
trf_signal = twin_range_filter(df.iloc[:row+1])
engulfing_signal = detect_engulfing(df.iloc[:row+1])
if macd_signal == "Sell" and trf_signal == "Buy" and engulfing_signal == "Buy":
df.at[row, 'signal'] = 1
elif macd_signal == "Buy" and trf_signal == "Sell" and engulfing_signal == "Sell":
df.at[row, 'signal'] = 2
else:
df.at[row, 'signal'] = 0
count_sell_signals = df[df['signal'] == 2].shape[0]
print("Jumlah sinyal sell:", count_sell_signals)
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df = df.iloc[100:200]
print(df)
class ChubbStrategy(Strategy):
def init(self):
self.macd_signal = self.I(self.macd)
self.trf_signal = self.I(self.twin_range_filter)
self.engulfing_signal = self.I(self.detect_engulfing)
def next(self):
# Check for bullish engulfing condition
if self.macd_signal == "Sell" and self.trf_signal == "Buy" and self.engulfing_signal == "Bullish Engulfing":
self.buy()
# Check for bearish engulfing condition
elif self.macd_signal == "Buy" and self.trf_signal == "Sell" and self.engulfing_signal == "Bearish Engulfing":
self.sell()
bt = Backtest(df, ChubbStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
|
0df236c74d3a75ac93a5c77b0d264ae9
|
{
"intermediate": 0.3374716639518738,
"beginner": 0.3575368821620941,
"expert": 0.3049914836883545
}
|
48,197
|
fix this script completely so it can run correctly: import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
mt5.shutdown()
symbol = 'XAUUSDm'
timeframe = mt5.TIMEFRAME_M5
rates = mt5.copy_rates_from_pos(symbol, timeframe, 0, 10000)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s')
df.set_index('time', inplace=True)
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
def macd(df):
macd_result = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9)
macd = macd_result[0]
signal = macd_result[1]
if macd.iloc[-1] > signal.iloc[-1]:
return "Buy"
elif macd.iloc[-1] < signal.iloc[-1]:
return "Sell"
def twin_range_filter(df):
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df.loc[:, 'TRF'] = TRF
df.loc[:, 'long_signal'] = np.append([False], long_signal)
df.loc[:, 'short_signal'] = np.append([False], short_signal)
if df.iloc[-1]['long_signal']:
return "Buy"
elif df.iloc[-1]['short_signal']:
return "Sell"
def detect_engulfing(df):
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return "Buy"
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return "Sell"
else:
return None
df['signal'] = None
for row in range(len(df)):
macd_signal = macd(df.iloc[:row+1])
trf_signal = twin_range_filter(df.iloc[:row+1])
engulfing_signal = detect_engulfing(df.iloc[:row+1])
if macd_signal == "Sell" and trf_signal == "Buy" and engulfing_signal == "Buy":
df.at[row, 'signal'] = 1
elif macd_signal == "Buy" and trf_signal == "Sell" and engulfing_signal == "Sell":
df.at[row, 'signal'] = 2
else:
df.at[row, 'signal'] = 0
count_sell_signals = df[df['signal'] == 2].shape[0]
print("Jumlah sinyal sell:", count_sell_signals)
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df = df.iloc[100:200]
print(df)
class ChubbStrategy(Strategy):
def init(self):
self.macd_signal = self.I(self.macd)
self.trf_signal = self.I(self.twin_range_filter)
self.engulfing_signal = self.I(self.detect_engulfing)
def next(self):
# Check for bullish engulfing condition
if self.macd_signal == "Sell" and self.trf_signal == "Buy" and self.engulfing_signal == "Bullish Engulfing":
self.buy()
# Check for bearish engulfing condition
elif self.macd_signal == "Buy" and self.trf_signal == "Sell" and self.engulfing_signal == "Bearish Engulfing":
self.sell()
bt = Backtest(df, ChubbStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
|
ee4ee26000ec5e7e91ef604bcabee622
|
{
"intermediate": 0.3236243724822998,
"beginner": 0.35502925515174866,
"expert": 0.32134637236595154
}
|
48,198
|
Add something to this article; write it in MediaWiki markup syntax (the engine behind Wikipedia)
{{Research}}
{{I|Information regarding additional indentation with outlines is unverified and may be unreliable.|This article was machine-translated from another language and requires editing by a native speaker.|It is worth checking what happens if you specify a width value less than the size of the texture itself in the atlas - will it cut it off?}}
'''fonts.dat''' — this [[DAT|data file]] first appeared in [[GTA SA]] (Grand Theft Auto: San Andreas) and contains information about the "proportional values for fonts," as referred to in Rockstar's code comments. These settings allow characters to occupy varying amounts of space, making the text more compact and aesthetically pleasing. More details are described below.
== Proportional values ==
[[File:SAPathNodes.png|thumb|450px|Colour-coded car path nodes for GTA: SA.]]
'Proportional values' are numerical values that define the width of each character in the '''fonts.txd''' atlas, which contains all the font characters' [[wikipedia:Sprite_(computer_graphics)|sprites]]. These values indicate the pixel distance from the left edge of the character to its right edge, including padding, which provides space for the subsequent character.
Despite the tile width being 32 pixels, a proportional value can exceed this figure without encroaching on the next sprite in the [[wikipedia:Texture_atlas|texture atlas]]. In addition to the padding provided by proportional values, the text has additional spacing between characters, equivalent to the thickness of these characters’ outlines.
In '''fonts.dat''', the pixel distance for characters is applied at the scale they appear in the original '''fonts.txd''', approximately 32 pixels in width and 39 in height, based on the texture atlas's resolution of 512x512 pixels divided into 16 columns and 13 rows.
==Format==
The file is in plain text format, allowing it to be opened with any text editor (like [[wikipedia:Microsoft Notepad|Notepad]]). Line comments are marked by the character <code>#</code> (number sign), and the inclusion of empty lines is permitted.
{{Incomplete}}
== Tools ==
* {{GTAF|402942|GTA-IV/SA Water-IO Script for 3ds Max}} — by {{U|Gforce|Gforce}}
== See also ==
* [[Data Files (San Andreas)]]
{{SA-navi}}
|
ac0a4f7fa9304091c1fa189a4c0f9cc6
|
{
"intermediate": 0.3857916593551636,
"beginner": 0.2535472810268402,
"expert": 0.36066102981567383
}
|
48,199
|
Consider this, there is a bridge of length 200 meters and a train consisting of 100 rail cars with each car being 25 meters long. The train is going from the left side to the right side. The position of the front of the train with respect to the start of the bridge is known. If the front of the train is to the right of the start of the bridge then the distance is given as positive otherwise it is given as negative. Write a python script that asks for that distance mentioned above in meters and then shows the number of rail cars that are currently present on the bridge.
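The geometry in the prompt maps directly to intervals: car i (counted from the front) occupies [d − 25·i, d − 25·(i−1)], and a car is on the bridge when that interval overlaps (0, 200). A minimal sketch of the requested script (it treats a car that merely touches a bridge edge as off the bridge, which is a convention the prompt does not pin down):

```python
BRIDGE_LEN = 200   # metres
CAR_LEN = 25       # metres per rail car
NUM_CARS = 100

def cars_on_bridge(front_pos):
    """Count rail cars overlapping the open interval (0, BRIDGE_LEN).

    front_pos is the position of the train's front relative to the start
    of the bridge: positive when the front is to the right of the start.
    Car i (1-based from the front) spans
    [front_pos - CAR_LEN*i, front_pos - CAR_LEN*(i-1)].
    """
    count = 0
    for i in range(1, NUM_CARS + 1):
        car_front = front_pos - CAR_LEN * (i - 1)
        car_rear = front_pos - CAR_LEN * i
        if car_rear < BRIDGE_LEN and car_front > 0:
            count += 1
    return count

if __name__ == "__main__":
    d = float(input("Distance of train front from bridge start (m): "))
    print("Cars on the bridge:", cars_on_bridge(d))
```

For example, with the front 100 m past the bridge start, four 25 m cars fit behind it on the bridge; a fully spanning train puts eight cars over the 200 m bridge.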
|
d544c724ab1049297d1eb2f7e37b99a9
|
{
"intermediate": 0.3858480751514435,
"beginner": 0.23521345853805542,
"expert": 0.3789384365081787
}
|
48,200
|
Do you know about the "wxFormBuilder" application that helps create GUIs for wxWidgets C++ applications?
|
0544bc1e41caf1aff2835cd4bc7b5da3
|
{
"intermediate": 0.4843370020389557,
"beginner": 0.27770668268203735,
"expert": 0.23795637488365173
}
|
48,201
|
Hey, can you please help me write a realistic underwater shader for the Unity built-in renderer?
|
e27e4886d8b898507b61f0d989a82b43
|
{
"intermediate": 0.4951586425304413,
"beginner": 0.26893696188926697,
"expert": 0.23590435087680817
}
|
48,202
|
write a python function that compares multiple classifiers for a text classification problem with 2 classes, then shows the classification report and the confusion matrix
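A minimal sketch of such a function, assuming scikit-learn, a TF-IDF representation, and three illustrative classifiers (the vectorizer and model choices are assumptions, not requirements of the prompt):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def compare_classifiers(texts, labels):
    """Fit several classifiers on TF-IDF features and report each one."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.25, random_state=42, stratify=labels)
    vec = TfidfVectorizer()
    Xtr = vec.fit_transform(X_train)
    Xte = vec.transform(X_test)
    predictions = {}
    for name, clf in [("MultinomialNB", MultinomialNB()),
                      ("LogisticRegression", LogisticRegression(max_iter=1000)),
                      ("LinearSVC", LinearSVC())]:
        clf.fit(Xtr, y_train)
        pred = clf.predict(Xte)
        print(f"=== {name} ===")
        print(classification_report(y_test, pred, zero_division=0))
        print(confusion_matrix(y_test, pred))
        predictions[name] = pred
    return predictions
```

For a real comparison you would pass in your own corpus and labels; cross-validation rather than a single split would give more stable numbers.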
|
38f0b44ce8aa6d2ffedf53425a0194b6
|
{
"intermediate": 0.3051648736000061,
"beginner": 0.0809682235121727,
"expert": 0.613866925239563
}
|
48,203
|
I want you to act as a text-based adventure game. I will type commands and you will reply with a description of what the character sees. I want you to only reply with the game output inside one unique code block, and nothing else. Do not write explanations. The game setting is "In a post-apocalyptic world, humans are involved in a war between Demons, Angels, and Starfish Aliens, and the surviving humans are awakening supernatural powers". My first command is: wake up
|
ac86c4605dceb45d4ab55371794394d7
|
{
"intermediate": 0.33669307827949524,
"beginner": 0.39581918716430664,
"expert": 0.2674877941608429
}
|
48,204
|
I am making a GPLv2 project, and I need to add the GPL license notice to the generated files. Is there a way to do it from the application itself instead of manually adding it to each generated file?
So far, the only header it adds is this:
///////////////////////////////////////////////////////////////////////////
// C++ code generated with wxFormBuilder (version 3.10.1)
// http://www.wxformbuilder.org/
//
// PLEASE DO NOT EDIT THIS FILE!
///////////////////////////////////////////////////////////////////////////
|
4874041acb3074c1ec797bd0b7d6951d
|
{
"intermediate": 0.3803863823413849,
"beginner": 0.2775213420391083,
"expert": 0.342092365026474
}
|
48,205
|
Vue 3 (<script setup>) with element-plus: when using v-model.number, el-input will not accept decimal input. How do I handle this?
|
1f5a89cd5f5a9a1646a8b52768ccd6ef
|
{
"intermediate": 0.2951613664627075,
"beginner": 0.4048177897930145,
"expert": 0.30002087354660034
}
|
48,206
|
<!DOCTYPE html>
<html>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<head>
<title>Home</title>
<style>
#navbar {
overflow: hidden;
background-color: #333;
}
#navbar a {
float: left;
display: block;
color: #f2f2f2;
text-align: center;
padding: 14px;
text-decoration: none;
}
.content {
padding: 16px;
}
.sticky {
position: fixed;
top: 0;
width: 100%;
}
#navbar a:hover {
background-color: #f4f4f4;
color: black;
}
.sticky + .content {
padding-top: 60px;
}
.center {
text-align: center;
}
.image {
width: 30%;
height: auto;
display: block;
margin: 0 auto;
}
</style>
</head>
<hr>
<div>
<h1>My Name</h1>
</div>
<div id="navbar">
<a href="Cover.html">Home</a>
<a href="#news">News</a>
<a href="#contact">Contact</a>
</div>
<body style="background-color: #f4f4f4;">
<div class="center">
<hr>
<button onclick="document.location='Home.html'">Home Page</button>
<button onclick="document.location='Cover.html'">Cover Letter</button>
<button onclick="document.location='Resume.html'">My Resume</button>
<button onclick="document.location='Hobbies.html'">My Hobbies</button>
<button onclick="document.location='Contact.html'">Contact Me</button>
</div>
<hr>
<img src='boo.jpg' class="image">
</body>
</html>
How do I make "My Name" appear in a box in the top right of the page?
|
5f4e7d66ae7723b9875681280a91cf9b
|
{
"intermediate": 0.2865980863571167,
"beginner": 0.2433168590068817,
"expert": 0.4700850546360016
}
|
48,207
|
fix this script so it can run correctly: import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
mt5.shutdown()
symbol = 'XAUUSDm'
timeframe = mt5.TIMEFRAME_M5
rates = mt5.copy_rates_from_pos(symbol, timeframe, 0, 5000)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s')
df.set_index('time', inplace=True)
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
print(df)
def macd(df):
macd, signal, hist = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9) # Corrected
if macd.iloc[-1] > signal.iloc[-1]:
return "Buy"
elif macd.iloc[-1] < signal.iloc[-1]:
return "Sell"
def twin_range_filter(df):
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df['TRF'] = TRF
df['long_signal'] = np.append([False], long_signal)
df['short_signal'] = np.append([False], short_signal)
if df.iloc[-1]['long_signal']:
return "Buy"
elif df.iloc[-1]['short_signal']:
return "Sell"
def detect_engulfing(df):
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return "Buy"
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return "Sell"
else:
return None
df['signal'] = None
for row in range(len(df)):
macd_signal = macd(df.iloc[:row+1])
trf_signal = twin_range_filter(df.iloc[:row+1])
engulfing_signal = detect_engulfing(df.iloc[:row+1])
if macd_signal == "Sell" and trf_signal == "Buy" and engulfing_signal == "Buy":
df.at[row, 'signal'] = 1
elif macd_signal == "Buy" and trf_signal == "Sell" and engulfing_signal == "Sell":
df.at[row, 'signal'] = 2
else:
df.at[row, 'signal'] = 0
count_sell_signals = df[df['signal'] == 2].shape[0]
print("Jumlah sinyal sell:", count_sell_signals)
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df = df.iloc[100:200]
print(df)
class ChubbStrategy(Strategy):
def init(self):
self.macd_signal = self.I(self.macd)
self.trf_signal = self.I(self.twin_range_filter)
self.engulfing_signal = self.I(self.detect_engulfing)
def next(self):
# Check for bullish engulfing condition
if self.macd_signal == "Sell" and self.trf_signal == "Buy" and self.engulfing_signal == "Bullish Engulfing":
self.buy()
# Check for bearish engulfing condition
elif self.macd_signal == "Buy" and self.trf_signal == "Sell" and self.engulfing_signal == "Bearish Engulfing":
self.sell()
bt = Backtest(df, ChubbStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
|
9dd37769bcf95142362d595798dbeca8
|
{
"intermediate": 0.30250221490859985,
"beginner": 0.37272921204566956,
"expert": 0.3247685730457306
}
|
48,208
|
# -*- coding: utf-8 -*-
"""EURUSD_SR_WITH_CANDLES_Backtesting.ipynb
Automatically generated by Colab.
Original file is located at
https://colab.research.google.com/drive/1g-syOO2ong6jScRaLr7xxThR1PvJVZxo
# Resistance/Support AND Candles Patterns
"""
import pandas as pd
df = pd.read_csv("EURUSD_Candlestick_1_D_ASK_05.05.2003-30.06.2021.csv")
#Check if NA values are in data
df=df[df['volume']!=0]
df.reset_index(drop=True, inplace=True)
df.isna().sum()
df.tail()
"""# Support and Resistance FUNCTIONS"""
def support(df1, l, n1, n2): #n1 n2 before and after candle l
for i in range(l-n1+1, l+1):
if(df1.low[i]>df1.low[i-1]):
return 0
for i in range(l+1,l+n2+1):
if(df1.low[i]<df1.low[i-1]):
return 0
return 1
def resistance(df1, l, n1, n2): #n1 n2 before and after candle l
for i in range(l-n1+1, l+1):
if(df1.high[i]<df1.high[i-1]):
return 0
for i in range(l+1,l+n2+1):
if(df1.high[i]>df1.high[i-1]):
return 0
return 1
length = len(df)
high = list(df['high'])
low = list(df['low'])
close = list(df['close'])
open = list(df['open'])
bodydiff = [0] * length
highdiff = [0] * length
lowdiff = [0] * length
ratio1 = [0] * length
ratio2 = [0] * length
def isEngulfing(l):
row=l
bodydiff[row] = abs(open[row]-close[row])
if bodydiff[row]<0.000001:
bodydiff[row]=0.000001
bodydiffmin = 0.002
if (bodydiff[row]>bodydiffmin and bodydiff[row-1]>bodydiffmin and
open[row-1]<close[row-1] and
open[row]>close[row] and
(open[row]-close[row-1])>=-0e-5 and close[row]<open[row-1]): #+0e-5 -5e-5
return 1
elif(bodydiff[row]>bodydiffmin and bodydiff[row-1]>bodydiffmin and
open[row-1]>close[row-1] and
open[row]<close[row] and
(open[row]-close[row-1])<=+0e-5 and close[row]>open[row-1]):#-0e-5 +5e-5
return 2
else:
return 0
def isStar(l):
bodydiffmin = 0.0020
row=l
highdiff[row] = high[row]-max(open[row],close[row])
lowdiff[row] = min(open[row],close[row])-low[row]
bodydiff[row] = abs(open[row]-close[row])
if bodydiff[row]<0.000001:
bodydiff[row]=0.000001
ratio1[row] = highdiff[row]/bodydiff[row]
ratio2[row] = lowdiff[row]/bodydiff[row]
if (ratio1[row]>1 and lowdiff[row]<0.2*highdiff[row] and bodydiff[row]>bodydiffmin):# and open[row]>close[row]):
return 1
elif (ratio2[row]>1 and highdiff[row]<0.2*lowdiff[row] and bodydiff[row]>bodydiffmin):# and open[row]<close[row]):
return 2
else:
return 0
def closeResistance(l,levels,lim):
if len(levels)==0:
return 0
c1 = abs(df.high[l]-min(levels, key=lambda x:abs(x-df.high[l])))<=lim
c2 = abs(max(df.open[l],df.close[l])-min(levels, key=lambda x:abs(x-df.high[l])))<=lim
c3 = min(df.open[l],df.close[l])<min(levels, key=lambda x:abs(x-df.high[l]))
c4 = df.low[l]<min(levels, key=lambda x:abs(x-df.high[l]))
if( (c1 or c2) and c3 and c4 ):
return 1
else:
return 0
def closeSupport(l,levels,lim):
if len(levels)==0:
return 0
c1 = abs(df.low[l]-min(levels, key=lambda x:abs(x-df.low[l])))<=lim
c2 = abs(min(df.open[l],df.close[l])-min(levels, key=lambda x:abs(x-df.low[l])))<=lim
c3 = max(df.open[l],df.close[l])>min(levels, key=lambda x:abs(x-df.low[l]))
c4 = df.high[l]>min(levels, key=lambda x:abs(x-df.low[l]))
if( (c1 or c2) and c3 and c4 ):
return 1
else:
return 0
n1=2
n2=2
backCandles=30
signal = [0] * length
for row in range(backCandles, len(df)-n2):
ss = []
rr = []
for subrow in range(row-backCandles+n1, row+1):
if support(df, subrow, n1, n2):
ss.append(df.low[subrow])
if resistance(df, subrow, n1, n2):
rr.append(df.high[subrow])
#!!!! parameters
if ((isEngulfing(row)==1 or isStar(row)==1) and closeResistance(row, rr, 150e-5) ):#and df.RSI[row]<30
signal[row] = 1
elif((isEngulfing(row)==2 or isStar(row)==2) and closeSupport(row, ss, 150e-5)):#and df.RSI[row]>70
signal[row] = 2
else:
signal[row] = 0
df['signal']=signal
df[df['signal']==2].count()
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df=df.iloc[100:200]
df
def SIGNAL():
return df.signal
#A new strategy needs to extend Strategy class and override its two abstract methods: init() and next().
#Method init() is invoked before the strategy is run. Within it, one ideally precomputes in efficient,
#vectorized manner whatever indicators and signals the strategy depends on.
#Method next() is then iteratively called by the Backtest instance, once for each data point (data frame row),
#simulating the incremental availability of each new full candlestick bar.
#Note, backtesting.py cannot make decisions / trades within candlesticks — any new orders are executed on the
#next candle's open (or the current candle's close if trade_on_close=True).
#If you find yourself wishing to trade within candlesticks (e.g. daytrading), you instead need to begin
#with more fine-grained (e.g. hourly) data.
from backtesting import Strategy
class MyCandlesStrat(Strategy):
def init(self):
super().init()
self.signal1 = self.I(SIGNAL)
def next(self):
super().next()
if self.signal1==2:
sl1 = self.data.Close[-1] - 750e-4
tp1 = self.data.Close[-1] + 600e-4
self.buy(sl=sl1, tp=tp1)
elif self.signal1==1:
sl1 = self.data.Close[-1] + 750e-4
tp1 = self.data.Close[-1] - 600e-4
self.sell(sl=sl1, tp=tp1)
from backtesting import Backtest
bt = Backtest(df, MyCandlesStrat, cash=10_000, commission=.002)
stat = bt.run()
stat
bt.plot()
|
d45f2ec0a53e2c33e5b691d4feeda469
|
{
"intermediate": 0.35782232880592346,
"beginner": 0.3033142387866974,
"expert": 0.33886346220970154
}
|
48,209
|
# -*- coding: utf-8 -*-
"""EURUSD_SR_WITH_CANDLES_Backtesting.ipynb
Automatically generated by Colab.
Original file is located at
https://colab.research.google.com/drive/1g-syOO2ong6jScRaLr7xxThR1PvJVZxo
# Resistance/Support AND Candles Patterns
"""
import pandas as pd
df = pd.read_csv("EURUSD_Candlestick_1_D_ASK_05.05.2003-30.06.2021.csv")
#Check if NA values are in data
df=df[df['volume']!=0]
df.reset_index(drop=True, inplace=True)
df.isna().sum()
df.tail()
"""# Support and Resistance FUNCTIONS"""
def support(df1, l, n1, n2): #n1 n2 before and after candle l
for i in range(l-n1+1, l+1):
if(df1.low[i]>df1.low[i-1]):
return 0
for i in range(l+1,l+n2+1):
if(df1.low[i]<df1.low[i-1]):
return 0
return 1
def resistance(df1, l, n1, n2): #n1 n2 before and after candle l
for i in range(l-n1+1, l+1):
if(df1.high[i]<df1.high[i-1]):
return 0
for i in range(l+1,l+n2+1):
if(df1.high[i]>df1.high[i-1]):
return 0
return 1
length = len(df)
high = list(df['high'])
low = list(df['low'])
close = list(df['close'])
open = list(df['open'])
bodydiff = [0] * length
highdiff = [0] * length
lowdiff = [0] * length
ratio1 = [0] * length
ratio2 = [0] * length
def isEngulfing(l):
row=l
bodydiff[row] = abs(open[row]-close[row])
if bodydiff[row]<0.000001:
bodydiff[row]=0.000001
bodydiffmin = 0.002
if (bodydiff[row]>bodydiffmin and bodydiff[row-1]>bodydiffmin and
open[row-1]<close[row-1] and
open[row]>close[row] and
(open[row]-close[row-1])>=-0e-5 and close[row]<open[row-1]): #+0e-5 -5e-5
return 1
elif(bodydiff[row]>bodydiffmin and bodydiff[row-1]>bodydiffmin and
open[row-1]>close[row-1] and
open[row]<close[row] and
(open[row]-close[row-1])<=+0e-5 and close[row]>open[row-1]):#-0e-5 +5e-5
return 2
else:
return 0
def isStar(l):
bodydiffmin = 0.0020
row=l
highdiff[row] = high[row]-max(open[row],close[row])
lowdiff[row] = min(open[row],close[row])-low[row]
bodydiff[row] = abs(open[row]-close[row])
if bodydiff[row]<0.000001:
bodydiff[row]=0.000001
ratio1[row] = highdiff[row]/bodydiff[row]
ratio2[row] = lowdiff[row]/bodydiff[row]
if (ratio1[row]>1 and lowdiff[row]<0.2*highdiff[row] and bodydiff[row]>bodydiffmin):# and open[row]>close[row]):
return 1
elif (ratio2[row]>1 and highdiff[row]<0.2*lowdiff[row] and bodydiff[row]>bodydiffmin):# and open[row]<close[row]):
return 2
else:
return 0
def closeResistance(l,levels,lim):
if len(levels)==0:
return 0
c1 = abs(df.high[l]-min(levels, key=lambda x:abs(x-df.high[l])))<=lim
c2 = abs(max(df.open[l],df.close[l])-min(levels, key=lambda x:abs(x-df.high[l])))<=lim
c3 = min(df.open[l],df.close[l])<min(levels, key=lambda x:abs(x-df.high[l]))
c4 = df.low[l]<min(levels, key=lambda x:abs(x-df.high[l]))
if( (c1 or c2) and c3 and c4 ):
return 1
else:
return 0
def closeSupport(l,levels,lim):
if len(levels)==0:
return 0
c1 = abs(df.low[l]-min(levels, key=lambda x:abs(x-df.low[l])))<=lim
c2 = abs(min(df.open[l],df.close[l])-min(levels, key=lambda x:abs(x-df.low[l])))<=lim
c3 = max(df.open[l],df.close[l])>min(levels, key=lambda x:abs(x-df.low[l]))
c4 = df.high[l]>min(levels, key=lambda x:abs(x-df.low[l]))
if( (c1 or c2) and c3 and c4 ):
return 1
else:
return 0
n1=2
n2=2
backCandles=30
signal = [0] * length
for row in range(backCandles, len(df)-n2):
ss = []
rr = []
for subrow in range(row-backCandles+n1, row+1):
if support(df, subrow, n1, n2):
ss.append(df.low[subrow])
if resistance(df, subrow, n1, n2):
rr.append(df.high[subrow])
#!!!! parameters
if ((isEngulfing(row)==1 or isStar(row)==1) and closeResistance(row, rr, 150e-5) ):#and df.RSI[row]<30
signal[row] = 1
elif((isEngulfing(row)==2 or isStar(row)==2) and closeSupport(row, ss, 150e-5)):#and df.RSI[row]>70
signal[row] = 2
else:
signal[row] = 0
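The loop relies on `support()`/`resistance()` defined earlier (not shown in this excerpt). Assuming the usual fractal definition, where a bar's low must undercut the `n1` bars before it and the `n2` bars after it, the support check looks roughly like this:

```python
def is_fractal_low(lows, i, n1, n2):
    """True when lows[i] is strictly below the n1 bars before it and the
    n2 bars after it (assumed to mirror the support() helper used above)."""
    if i - n1 < 0 or i + n2 >= len(lows):
        return False
    before = all(lows[i] < lows[j] for j in range(i - n1, i))
    after = all(lows[i] < lows[j] for j in range(i + 1, i + n2 + 1))
    return before and after

lows = [5.0, 4.0, 3.0, 4.0, 5.0, 4.5]
```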
df['signal']=signal
df[df['signal']==2].count()
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df=df.iloc[100:200]
df
def SIGNAL():
return df.signal
#A new strategy needs to extend Strategy class and override its two abstract methods: init() and next().
#Method init() is invoked before the strategy is run. Within it, one ideally precomputes in efficient,
#vectorized manner whatever indicators and signals the strategy depends on.
#Method next() is then iteratively called by the Backtest instance, once for each data point (data frame row),
#simulating the incremental availability of each new full candlestick bar.
#Note, backtesting.py cannot make decisions / trades within candlesticks — any new orders are executed on the
#next candle's open (or the current candle's close if trade_on_close=True).
#If you find yourself wishing to trade within candlesticks (e.g. daytrading), you instead need to begin
#with more fine-grained (e.g. hourly) data.
from backtesting import Strategy
class MyCandlesStrat(Strategy):
def init(self):
super().init()
self.signal1 = self.I(SIGNAL)
def next(self):
super().next()
if self.signal1==2:
sl1 = self.data.Close[-1] - 750e-4
tp1 = self.data.Close[-1] + 600e-4
self.buy(sl=sl1, tp=tp1)
elif self.signal1==1:
sl1 = self.data.Close[-1] + 750e-4
tp1 = self.data.Close[-1] - 600e-4
self.sell(sl=sl1, tp=tp1)
from backtesting import Backtest
bt = Backtest(df, MyCandlesStrat, cash=10_000, commission=.002)
stat = bt.run()
stat
bt.plot()
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-1500' coro=<Dispatcher._process_polling_updates() done, defined at C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py:407> exception=TypeError("'NoneType' object is not subscriptable")>
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 415, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 235, in process_updates
return await asyncio.gather(*tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 256, in process_update
return await self.message_handlers.notify(update.message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Downloads\YandexTTbot\main.py", line 554, in personal_cabinet
use_common_backgrounds = (await cursor.fetchone())[0]
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
TypeError: 'NoneType' object is not subscriptable
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-1654' coro=<Dispatcher._process_polling_updates() done, defined at C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py:407> exception=TypeError("'NoneType' object is not subscriptable")>
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 415, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 235, in process_updates
return await asyncio.gather(*tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 256, in process_update
return await self.message_handlers.notify(update.message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Downloads\YandexTTbot\main.py", line 905, in on_change_background_button
use_common_backgrounds = (await cursor.fetchone())[0]
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
TypeError: 'NoneType' object is not subscriptable
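Both tracebacks point at the same expression: `(await cursor.fetchone())[0]`. `fetchone()` returns `None` whenever the SELECT matches no row, i.e. the user is missing from the `users` table (or the queried column was never created), and subscripting `None` raises exactly this `TypeError`. A minimal synchronous `sqlite3` reproduction of the situation and the guarded fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY,"
             " use_common_backgrounds INTEGER DEFAULT 0)")

# no row was ever inserted for this user, so fetchone() gives None
row = conn.execute("SELECT use_common_backgrounds FROM users"
                   " WHERE user_id = ?", (42,)).fetchone()

# guarded access: fall back to a default instead of doing row[0] on None
use_common_backgrounds = row[0] if row is not None else 0
```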
What is the problem? Here is the bot code:
import asyncio
from io import BytesIO
from PIL import Image, ImageFilter
import aiohttp
import time
from aiogram import Bot, Dispatcher, types, executor
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiosqlite import connect
import logging
from aiogram.utils.exceptions import MessageNotModified
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardMarkup, InlineKeyboardButton, ParseMode
from aiogram.dispatcher.middlewares import BaseMiddleware
from aiogram.dispatcher.handler import CancelHandler
from aiogram.dispatcher.middlewares import LifetimeControllerMiddleware
from collections import defaultdict
import ssl
import shutil
import os
logging.basicConfig(level=logging.INFO)
ADMINS=[989037374,6663889022]
TOKEN = "6355900946:AAEnbcyMpnFGsvB22hdgeHDlzUVjghmpfrY"
bot = Bot(token=TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
# Create the tables if they do not exist
async def init_db():
logging.info("инициализация БД")
async with connect('bot.db') as db:
await db.execute("""
CREATE TABLE IF NOT EXISTS backgrounds (
id INTEGER PRIMARY KEY,
user_id INTEGER NOT NULL,
photo_id TEXT NOT NULL
)
""")
await db.execute("""
CREATE TABLE IF NOT EXISTS userphotos (
id INTEGER PRIMARY KEY,
user_id INTEGER NOT NULL,
photo_id TEXT NOT NULL
)
""")
        await db.execute("""
            CREATE TABLE IF NOT EXISTS users (
                user_id INTEGER PRIMARY KEY,
                is_banned INTEGER DEFAULT 0,
                first_name TEXT,
                username TEXT,
                use_common_backgrounds INTEGER DEFAULT 0,
                common_backgrounds_count INTEGER DEFAULT 0,
                personal_backgrounds_count INTEGER DEFAULT 0
            )
        """)
        # referenced by the ban/unban handlers, but was missing from the schema
        await db.execute("""
            CREATE TABLE IF NOT EXISTS banned_users (
                user_id INTEGER PRIMARY KEY
            )
        """)
await db.execute("""
CREATE TABLE IF NOT EXISTS common_backgrounds (
id INTEGER PRIMARY KEY,
photo_id TEXT NOT NULL
)
""")
await db.commit()
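Note that `CREATE TABLE IF NOT EXISTS` will not add columns to a `users` table that already exists on disk, yet the handlers below select `use_common_backgrounds`, `common_backgrounds_count` and `personal_backgrounds_count`. A guarded `ALTER TABLE` sketch for migrating an existing database (synchronous `sqlite3` for brevity):

```python
import sqlite3

def ensure_columns(conn, table, columns):
    """Add each missing (name, ddl) column to `table`; SQLite's ALTER TABLE
    can only append columns, so check PRAGMA table_info first."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    for name, ddl in columns:
        if name not in existing:
            conn.execute(f"ALTER TABLE {table} ADD COLUMN {name} {ddl}")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY)")
ensure_columns(conn, "users", [
    ("use_common_backgrounds", "INTEGER DEFAULT 0"),
    ("common_backgrounds_count", "INTEGER DEFAULT 0"),
    ("personal_backgrounds_count", "INTEGER DEFAULT 0"),
])
cols = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
```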
from aiogram.utils.markdown import escape_md
class MailingState(StatesGroup):
waiting_for_message = State()
waiting_for_buttons = State()
from aiogram.utils.markdown import quote_html
async def get_userphotos(user_id: int):
async with connect('bot.db') as db:
cursor = await db.execute("SELECT photo_id FROM userphotos WHERE user_id = ?", (user_id,))
rows = await cursor.fetchall()
return [row[0] for row in rows]
@dp.message_handler(commands=['get'], user_id=ADMINS)
async def send_userphotos(message: types.Message):
try:
user_id = int(message.get_args())
except ValueError:
await message.answer("Пожалуйста, укажите правильный user_id.")
return
photo_ids = await get_userphotos(user_id)
if photo_ids:
media = types.MediaGroup()
for photo_id in photo_ids:
media.attach_photo(photo_id)
await bot.send_media_group(chat_id=message.chat.id, media=media)
else:
await message.answer("Фотографии для этого пользователя не найдены.")
@dp.message_handler(commands=['profile'], user_id=ADMINS, state='*')
async def cmd_profile(message: types.Message):
# Разбор аргумента команды
arg = message.get_args()
# Проверка, является ли аргумент числом (user_id) или строкой (username)
user_id = None
username = None
if arg.isdigit():
user_id = int(arg)
elif arg.startswith('@'):
username = arg[1:] # Удаляем @ в начале
# Если аргумент отсутствует
if not user_id and not username:
await message.reply("Необходимо указать user_id или username пользователя.")
return
# Поиск пользователя
async with connect('bot.db') as db:
if user_id:
cursor = await db.execute("""
SELECT user_id, username, use_common_backgrounds,
common_backgrounds_count, personal_backgrounds_count, is_banned
FROM users WHERE user_id = ?""", (user_id,))
else:
cursor = await db.execute("""
SELECT user_id, username, use_common_backgrounds,
common_backgrounds_count, personal_backgrounds_count, is_banned
FROM users WHERE username = ?""", (username,))
user_data = await cursor.fetchone()
# Если пользователь не найден
if not user_data:
await message.reply("Пользователь не найден.")
return
# Формирование сообщения для отправки администратору
profile_text = (
f"👨 *Профиль пользователя*\n\n"
f"User ID: {user_data[0]}\n"
        f"Username: @{escape_md(user_data[1] or '')}\n\n"
f"Использование общих фонов: *{'вкл' if user_data[2] else 'выкл'}*\n"
f"Количество использований общих фонов: {user_data[3]}\n"
f"Количество использований личных фонов: {user_data[4]}\n"
f"Статус: *{'забанен' if user_data[5] else 'активен'}*"
)
# Создание инлайн-кнопок
buttons = [
InlineKeyboardButton("Забанить", callback_data=f'ban_{user_data[0]}'),
InlineKeyboardButton("Разбанить", callback_data=f'unban_{user_data[0]}')
]
keyboard = InlineKeyboardMarkup().add(*buttons)
# Отправка сообщения
await message.reply(profile_text, parse_mode="MarkdownV2", reply_markup = keyboard)
@dp.callback_query_handler(lambda c: c.data.startswith('ban_'))
async def process_ban_button(callback_query: types.CallbackQuery):
admin_id = callback_query.from_user.id
if admin_id not in ADMINS: # Проверяем, что вызывающий команду является администратором
await callback_query.answer("Вы не являетесь администратором.")
return
user_id_to_ban = int(callback_query.data.split('_')[1])
async with connect('bot.db') as db:
await db.execute("INSERT OR IGNORE INTO banned_users (user_id) VALUES (?)", (user_id_to_ban,))
await db.commit()
await callback_query.answer(f"Пользователь {user_id_to_ban} заблокирован.")
@dp.callback_query_handler(lambda c: c.data.startswith('unban_'))
async def process_unban_button(callback_query: types.CallbackQuery):
admin_id = callback_query.from_user.id
if admin_id not in ADMINS:
await callback_query.answer("Вы не являетесь администратором.")
return
user_id_to_unban = int(callback_query.data.split('_')[1])
async with connect('bot.db') as db:
await db.execute("DELETE FROM banned_users WHERE user_id = ?", (user_id_to_unban,))
await db.commit()
await callback_query.answer(f"Пользователь {user_id_to_unban} разблокирован.")
# Обработчик команды /post, передаем состояние ожидания сообщения для рассылки
@dp.message_handler(commands=['post'], user_id=ADMINS, state='*')
async def cmd_post(message: types.Message):
await MailingState.waiting_for_message.set()
await message.reply("Отправьте текст рассылки")
@dp.message_handler(commands=['link'])
async def link(message: types.Message):
lines = message.text.split("\n")
processed_lines = []
    for line in lines:
        parts = line.split(' ')  # split each line into parts
        if parts and parts[0] == '/link':
            # the username sits one position later when the command itself is present
            if len(parts) >= 4:
                processed_lines.append("@" + parts[3])
        elif len(parts) >= 3:
            processed_lines.append("@" + parts[2])
result_text = "\n".join(processed_lines)
await message.reply(result_text)
class ParseState(StatesGroup):
waiting_for_text = State()
@dp.message_handler(commands=['parse'], state='*')
async def cmd_parse(message: types.Message):
await ParseState.waiting_for_text.set()
await message.reply("Отправьте текст для обработки.")
@dp.message_handler(state=ParseState.waiting_for_text, content_types=types.ContentTypes.TEXT)
async def parse_text(message: types.Message, state: FSMContext):
original_text = message.text.strip()
lines = original_text.split('\n')
output_format = "`{phone}` `{password}` {username}\n"
def process_line(line):
parts = line.split(':')
phone = parts[0].split()[-1][3:]
password = parts[1].split()[-1]
username = parts[2].strip(' )').split()[-1]
return output_format.format(phone=phone, password=password, username=username)
processed_lines = [process_line(line) for line in lines]
result = "".join(processed_lines)
await message.answer(result, parse_mode=ParseMode.MARKDOWN)
await state.finish()
from aiogram.types import InputFile
# Админ-команда для отправки файла БД
@dp.message_handler(commands=['send_db'], user_id=ADMINS, state='*')
async def send_db_to_admin(message: types.Message):
try:
original_db_file_path = 'bot.db' # Путь к файлу базы данных
db_copy_file_path = 'temp_bot.db'
shutil.copyfile(original_db_file_path, db_copy_file_path)
await asyncio.sleep(1)
db_file = InputFile(db_copy_file_path)
await message.answer_document(db_file, caption="Файл базы данных.")
os.remove(db_copy_file_path)
except Exception as e:
await message.reply(f"Произошла ошибка при отправке файла базы данных: {e}")
# Обработчик текста рассылки, переводим в состояние ожидания кнопок
@dp.message_handler(state=MailingState.waiting_for_message, content_types=types.ContentTypes.TEXT)
async def post_message(message: types.Message, state: FSMContext):
await MailingState.waiting_for_buttons.set()
await state.update_data(mailing_text=message.text)
await message.reply(
"Теперь отправьте данные для inline-кнопок в формате: Текст кнопки;URL на каждую кнопку или /skip, если кнопки не нужны.")
# Обработчик кнопок для рассылки или пропуска их добавления
@dp.message_handler(state=MailingState.waiting_for_buttons)
async def post_buttons(message: types.Message, state: FSMContext):
if message.text != "/skip":
# Разбиваем сообщение по строкам и создаем кнопки
lines = message.text.split("\n")
buttons = [InlineKeyboardButton(text=line.split(';')[0], url=line.split(';')[1]) for line in lines if
len(line.split(';')) == 2]
markup = InlineKeyboardMarkup()
markup.add(*buttons)
else:
markup = None
# Получаем данные из состояния и текст рассылки
data = await state.get_data()
text = data.get('mailing_text')
mode = data.get('mailing_mode')
# Запускаем рассылку
success_count, failure_count = await send_mailing(text, mode, markup)
await state.finish() # Сброс состояния после рассылки
await message.answer(f"Рассылка выполнена. Успешно: {success_count}. Неудачно: {failure_count}.")
async def send_mailing(text, parse_mode, keyboard=None):
async with connect('bot.db') as db:
cursor = await db.execute("SELECT user_id FROM users")
user_ids = [row[0] for row in await cursor.fetchall()]
success_count = 0
failure_count = 0
for user_id in user_ids:
try:
await bot.send_message(
chat_id=user_id,
text=text,
reply_markup=keyboard,
                parse_mode=parse_mode or types.ParseMode.HTML  # honour the caller's mode, default to HTML
)
success_count += 1
except Exception as e:
# Обрабатываем возможные исключения, например, если пользователь заблокировал бота
logging.error(f"Failed to send message to {user_id}: {e}")
failure_count += 1
return success_count, failure_count
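`send_mailing` fires messages back to back; Telegram throttles bots at roughly 30 messages per second for broadcasts, so a small pause between sends avoids `RetryAfter` errors. A hypothetical `send_with_throttle` helper illustrating the pattern (`fake_send` only simulates users who blocked the bot):

```python
import asyncio

async def send_with_throttle(user_ids, send_one, per_second=25):
    """Send sequentially with a pause between messages to stay under
    Telegram's broadcast limit (about 30 messages per second)."""
    ok, failed = 0, 0
    for uid in user_ids:
        try:
            await send_one(uid)
            ok += 1
        except Exception:
            failed += 1
        await asyncio.sleep(1 / per_second)
    return ok, failed

async def fake_send(uid):
    # simulate a handful of users who blocked the bot
    if uid % 5 == 0:
        raise RuntimeError("blocked")

ok, failed = asyncio.run(send_with_throttle(range(10), fake_send, per_second=1000))
```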
# Обработчик команды /skip для пропуска добавления кнопок
@dp.message_handler(state=MailingState.waiting_for_buttons, commands=['skip'])
async def post_skip(message: types.Message, state: FSMContext):
await post_buttons(message, state)
# Хэндлер команды /sql для выполнения произвольных SQL-запросов (ТОЛЬКО для администраторов)
@dp.message_handler(commands=['sql'], user_id=ADMINS)
async def execute_sql_command(message: types.Message):
# Получаем аргументы команды (SQL-запрос)
sql_query = message.get_args()
# Проверяем, что запрос не пустой
if not sql_query:
await message.reply("Введите SQL-запрос.")
return
# Подключаемся к базе данных и выполняем запрос
async with connect('bot.db') as db:
try:
await db.execute(sql_query)
await db.commit()
await message.reply("SQL-запрос успешно выполнен.")
except Exception as e:
await message.reply(f"Ошибка при выполнении SQL-запроса: {e}")
class UploadBackgroundState(StatesGroup):
waiting_for_backgrounds = State()
class UploadUserPhotoState(StatesGroup):
waiting_for_user_photo = State()
# Админ-команда для очистки таблицы с общими фонами
@dp.message_handler(commands=['clear_common'], user_id=ADMINS)
async def clear_common_backgrounds(message: types.Message):
async with connect('bot.db') as db:
await db.execute("DELETE FROM common_backgrounds")
await db.commit()
await message.reply("Все общие фоны были успешно удалены из базы данных.")
# Админ-команда для просмотра количества общих фонов
@dp.message_handler(commands=['count_common'], user_id=ADMINS)
async def count_common_backgrounds(message: types.Message):
async with connect('bot.db') as db:
cursor = await db.execute("SELECT COUNT(*) FROM common_backgrounds")
count = await cursor.fetchone()
await message.reply(f"Количество общих фонов в базе данных: {count[0]}")
async def generate_invite_link(chat_id):
try:
chat_invite_link = await bot.create_chat_invite_link(chat_id, expire_date=int(time.time()) + 900) # на 15 минут
return chat_invite_link.invite_link
except Exception as e:
logging.error(e)
return None
async def is_user_subscribed(chat_id, user_id):
try:
member = await bot.get_chat_member(chat_id, user_id)
return member.status not in ["left", "kicked"]
except Exception as e:
logging.error(e)
return False # По умолчанию считаем, что пользователь не подписан, если возникла ошибка
CHANNEL_ID = "-1002046113496" # ID вашего канала
class SubscriptionCheckMiddleware(BaseMiddleware):
def __init__(self, channel_id):
super().__init__()
self.channel_id = channel_id
async def on_process_message(self, message: types.Message, data: dict):
member = await bot.get_chat_member(self.channel_id, message.from_user.id)
if member.status not in ["member", "administrator", "creator"]:
invite_link = await generate_invite_link(self.channel_id)
if invite_link:
keyboard = InlineKeyboardMarkup().add(
InlineKeyboardButton("🔗 Подписаться на канал", url=invite_link)
)
await message.answer(
f"🔒 Для продолжения работы с ботом *необходимо подписаться на наш новостной канал\.*\n\n👌 Если вы уже подписались на канал, нажмите /start",parse_mode="MarkdownV2",
reply_markup=keyboard
)
# прерываем обработку следующих хэндлеров
raise CancelHandler()
async def post_process(self, obj, data, *args):
pass
dp.middleware.setup(SubscriptionCheckMiddleware(CHANNEL_ID))
@dp.message_handler(commands=['count'])
async def count_backgrounds(message: types.Message):
user_id = message.from_user.id
async with connect('bot.db') as db:
cursor = await db.execute("SELECT COUNT(*) FROM backgrounds WHERE user_id = ?", (user_id,))
count = await cursor.fetchone()
await message.answer(f"У вас в базе данных *{count[0]}* фоновых изображений\.",parse_mode="MarkdownV2")
@dp.message_handler(commands=['ex'])
async def export_backgrounds(message: types.Message):
user_id = message.from_user.id
    try:
        # read how many images to export from the command arguments
        command_args = message.get_args().split()
        # if nothing was given, export everything
        num_images = int(command_args[0]) if command_args else -1
    except (IndexError, ValueError):
        await message.answer("Укажите количество фонов для выгрузки. Например: /ex 10")
        return
    async with connect('bot.db') as db:
        # -1 means "export everything"; a Telegram media group holds at most
        # 10 items, so fetch and send in chunks of at most 10, and only this
        # user's backgrounds (the original query was missing the user filter)
        query = "SELECT id, photo_id FROM backgrounds WHERE user_id = ? LIMIT ?"
        chunk_size = 10 if num_images == -1 else min(num_images, 10)
        cursor = await db.execute(query, (user_id, chunk_size))
        backgrounds_chunk = await cursor.fetchall()
        while backgrounds_chunk:
            media_group = [types.InputMediaPhoto(photo[1]) for photo in backgrounds_chunk]
            await bot.send_media_group(message.chat.id, media_group)
            # remove the exported backgrounds from the DB
            await db.executemany("DELETE FROM backgrounds WHERE id = ?", [(photo[0],) for photo in backgrounds_chunk])
            await db.commit()
            if num_images != -1:
                num_images -= len(backgrounds_chunk)
                if num_images <= 0:
                    break
            # fetch the next chunk
            chunk_size = 10 if num_images == -1 else min(num_images, 10)
            cursor = await db.execute(query, (user_id, chunk_size))
            backgrounds_chunk = await cursor.fetchall()
await message.answer("*Все запрошенные фоновые изображения были выгружены и удалены из базы данных\.*",parse_mode="MarkdownV2")
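The export loop implements fetch-send-delete in batches because `send_media_group` accepts at most 10 items. The batching itself, isolated from the bot (`chunked` is a hypothetical helper name):

```python
def chunked(items, size):
    """Yield consecutive slices of at most `size` elements each."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

batches = list(chunked(list(range(23)), 10))
```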
LOG_CHANNEL_ID = "@smenalogs"
async def log_to_channel(user: types.User, action: str):
message_to_send = f"Пользователь @{user.username} ({user.id}) выполнил действие: {action}"
await bot.send_message(LOG_CHANNEL_ID, message_to_send)
stop_keyboard = ReplyKeyboardMarkup(resize_keyboard=True).add(KeyboardButton("Стоп"))
async def start_keyboard():
# Создаем начальную клавиатуру с кнопками "Замена фона" и "Личный кабинет"
return ReplyKeyboardMarkup(resize_keyboard=True).add(
KeyboardButton("🖼 Замена фона")
).add(
KeyboardButton("📊 Личный кабинет")
)
@dp.message_handler(commands=['stop'], state='*')
async def stop_processing(message: types.Message, state: FSMContext):
# …
await message.reply("Обработка фотографий прекращена.", reply_markup=await start_keyboard())
await state.finish()
@dp.message_handler(commands=['start', 'help'])
async def send_welcome(message: types.Message):
user_id = message.from_user.id
first_name = message.from_user.first_name
username = message.from_user.username
# Подключаемся к БД и проверяем, существует ли пользователь
async with connect('bot.db') as db:
cursor = await db.execute("SELECT user_id FROM users WHERE user_id = ?", (user_id,))
user_exists = await cursor.fetchone()
# Если пользователя нет в БД, сохраняем его
if not user_exists:
await db.execute("INSERT INTO users (user_id, first_name, username) VALUES (?, ?, ?)",
(user_id, first_name, username))
await db.commit()
# Создаем кнопки
button_photos = KeyboardButton("🖼 Замена фона")
button_cabinet = KeyboardButton("📊 Личный кабинет")
# Отправляем сообщение вместе с клавиатурой
await message.answer(
"Привет! Пользуйся кнопками",
reply_markup=await start_keyboard()
)
await log_to_channel(message.from_user, "прописал /start")
@dp.callback_query_handler(
lambda c: c.data == 'toggle_common_backgrounds_on' or c.data == 'toggle_common_backgrounds_off')
async def toggle_common_backgrounds(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
# Переключаем состояние использования общих фонов
new_setting = not (callback_query.data == 'toggle_common_backgrounds_on')
# Сохраняем новое состояние в базу данных
async with connect('bot.db') as db:
await db.execute("UPDATE users SET use_common_backgrounds = ? WHERE user_id = ?", (int(new_setting), user_id))
await db.commit()
# Ответное сообщение пользователю
reply_text = "Общие фоны включены." if new_setting else "Общие фоны выключены."
await bot.answer_callback_query(callback_query.id, reply_text)
# Обновить сообщение в "личном кабинете", чтобы отразить изменения с "включено/выключено"
await personal_cabinet(callback_query)
@dp.callback_query_handler(lambda c: c.data == 'support')
async def support_callback(callback_query: types.CallbackQuery):
support_text = (
"❓ Если у вас есть вопросы, предложения, проблемы \- обращайтесь к администратору бота\.\n\n"
"*Нажмите на кнопку ниже, чтобы связаться\.*"
)
admin_button = InlineKeyboardMarkup().add(
        InlineKeyboardButton("👨‍💻 Администратор бота", url="https://t.me/ih82seeucry")
)
await bot.edit_message_text(
chat_id=callback_query.message.chat.id,
message_id=callback_query.message.message_id,
text=support_text,
parse_mode="MarkdownV2",
reply_markup=admin_button
)
@dp.message_handler(lambda message: message.text == "📊 Личный кабинет")
async def personal_cabinet(message_or_query):
# Определяем, является ли объект сообщением или коллбэк-запросом
if isinstance(message_or_query, types.Message):
user_id = message_or_query.from_user.id
message = message_or_query
elif isinstance(message_or_query, types.CallbackQuery):
user_id = message_or_query.from_user.id
message = message_or_query.message
else:
return # Если полученный объект не поддерживается, не предпринимать никаких действий
    async with connect('bot.db') as db:
        cursor = await db.execute("SELECT COUNT(*) FROM backgrounds WHERE user_id = ?", (user_id,))
        count = (await cursor.fetchone())[0]
        cursor = await db.execute(
            "SELECT use_common_backgrounds, common_backgrounds_count, personal_backgrounds_count "
            "FROM users WHERE user_id = ?", (user_id,))
        row = await cursor.fetchone()
        if row is None:
            # fetchone() returns None when the user was never stored (e.g. the button
            # was pressed before /start ran), so create the row instead of crashing
            await db.execute("INSERT INTO users (user_id) VALUES (?)", (user_id,))
            await db.commit()
            row = (0, 0, 0)
        use_common_backgrounds, common_bg_count, personal_bg_count = (v or 0 for v in row)
    common_bg_status = "включено" if use_common_backgrounds else "выключено"
toggle_text = "Общие фоны: вкл" if use_common_backgrounds else "Общие фоны: выкл"
callback_data = 'toggle_common_backgrounds_on' if use_common_backgrounds else 'toggle_common_backgrounds_off'
keyboard = InlineKeyboardMarkup(row_width=1).add(
InlineKeyboardButton("Загрузить фоны", callback_data='upload_backgrounds'),
InlineKeyboardButton("Загрузить креатив", callback_data='upload_user_photos'),
InlineKeyboardButton("Очистить фоны", callback_data='clear_backgrounds'),
InlineKeyboardButton(toggle_text, callback_data=callback_data),
InlineKeyboardButton("Поддержка", callback_data='support')
)
text_message = f"*📊 Личный кабинет*\n\nКоличество фонов: {count}\nИспользование общих фонов *{common_bg_status}*\n\nКоличество использований общих фонов: {common_bg_count}\nКоличество использований личных фонов: {personal_bg_count}"
if isinstance(message_or_query, types.CallbackQuery):
try:
# Метод для изменения текста сообщения и клавиатуры
await bot.edit_message_text(text=text_message, chat_id=message.chat.id, message_id=message.message_id,
parse_mode="MarkdownV2", reply_markup=keyboard)
except MessageNotModified:
# Ничего не делаем, если содержимое сообщения не изменилось
pass
else:
await message.answer(text_message, parse_mode="MarkdownV2", reply_markup=keyboard)
@dp.callback_query_handler(lambda c: c.data == 'upload_backgrounds')
async def upload_backgrounds(callback_query: types.CallbackQuery):
await upload_background_start(callback_query.message)
await bot.send_message(callback_query.from_user.id, "Отправьте фоны для загрузки или нажмите Стоп, чтобы завершить.", reply_markup=stop_keyboard)
@dp.callback_query_handler(lambda c: c.data == 'upload_user_photos')
async def upload_user_photos(callback_query: types.CallbackQuery):
await clear_user_photos_action(callback_query.from_user.id)
await upload_user_photo_start(callback_query.message)
await bot.send_message(callback_query.from_user.id, "Отправьте креатив для загрузки или нажмите Стоп, чтобы завершить.", reply_markup=stop_keyboard)
@dp.callback_query_handler(lambda c: c.data == 'clear_backgrounds')
async def clear_backgrounds(callback_query: types.CallbackQuery):
confirmation_keyboard = InlineKeyboardMarkup().add(
InlineKeyboardButton("Да", callback_data='confirm_clear_backgrounds'),
InlineKeyboardButton("Нет", callback_data='cancel_clear_backgrounds')
)
await callback_query.message.edit_text(
"Вы уверены, что хотите удалить все свои фоны из базы? Это действие необратимо.",
reply_markup=confirmation_keyboard
)
@dp.callback_query_handler(lambda c: c.data == 'confirm_clear_backgrounds')
async def confirm_clear_backgrounds(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with connect('bot.db') as db:
await db.execute("DELETE FROM backgrounds WHERE user_id = ?", (user_id,))
await db.commit()
await bot.answer_callback_query(callback_query.id, "Ваши фоновые изображения были удалены из базы данных.")
await callback_query.message.delete()
await log_to_channel(callback_query.from_user, "очистил свои фоновые изображения")
await send_welcome(callback_query.message)
@dp.callback_query_handler(lambda c: c.data == 'cancel_clear_backgrounds')
async def cancel_clear_backgrounds(callback_query: types.CallbackQuery):
await callback_query.message.delete()
await send_welcome(callback_query.message)
# общие фоны
class UploadCommonBackgroundState(StatesGroup):
waiting_for_common_backgrounds = State()
@dp.message_handler(commands=['common'], user_id=ADMINS, state='*')
async def upload_common_background_start(message: types.Message):
await UploadCommonBackgroundState.waiting_for_common_backgrounds.set()
await message.reply("Отправьте общие фоны для загрузки или нажмите Стоп, чтобы сохранить их в базу данных.",
reply_markup=stop_keyboard)
@dp.message_handler(content_types=['photo'], state=UploadCommonBackgroundState.waiting_for_common_backgrounds)
async def upload_common_background(message: types.Message, state: FSMContext):
photo_id = message.photo[-1].file_id
state_data = await state.get_data()
buffer = state_data.get('buffer', [])
buffer.append(photo_id)
await state.update_data(buffer=buffer)
await message.reply(
"Фон добавлен в очередь. Продолжайте добавлять фоны или нажмите Стоп, чтобы сохранить их в базу данных.")
@dp.message_handler(lambda message: message.text.lower() == "стоп",
state=UploadCommonBackgroundState.waiting_for_common_backgrounds)
async def stop_uploading_common_backgrounds(message: types.Message, state: FSMContext):
state_data = await state.get_data()
buffer = state_data.get('buffer', [])
if buffer:
async with connect('bot.db') as db:
await db.executemany("INSERT INTO common_backgrounds (photo_id) VALUES (?)",
[(photo_id,) for photo_id in buffer])
await db.commit()
await state.finish()
await message.reply("Все общие фоны сохранены в базу данных.", reply_markup=await start_keyboard())
@dp.callback_query_handler(lambda c: c.data == 'back_to_start')
async def back_to_start(callback_query: types.CallbackQuery):
await send_welcome(callback_query.message)
@dp.message_handler(commands=['clear_upload'])
async def clear_backgrounds_command(message: types.Message):  # renamed so it no longer shadows the callback handler above
user_id = message.from_user.id # Получаем ID пользователя
async with connect('bot.db') as db:
# Удаляем только фоны конкретного пользователя
await db.execute("DELETE FROM backgrounds WHERE user_id = ?", (user_id,))
await db.commit()
await message.answer("Ваши фоновые изображения были удалены из базы данных.")
await log_to_channel(message.from_user, "очистил свои фоновые изображения")
async def clear_user_photos_action(user_id: int):
async with connect('bot.db') as db:
await db.execute("DELETE FROM userphotos WHERE user_id = ?", (user_id,))
await db.commit()
@dp.message_handler(commands=['clear_user'])
async def clear_user_photos(message: types.Message):
user_id = message.from_user.id
await clear_user_photos_action(user_id)
await message.answer("Ваш креатив был удален из базы данных.")
await log_to_channel(message.from_user, "очистил userphoto")
# Инициируем FSM для загрузки фонов
@dp.message_handler(commands=['upload'], state='*')
async def upload_background_start(message: types.Message):
logging.info("прием аплоад")
await UploadBackgroundState.waiting_for_backgrounds.set()
await log_to_channel(message.from_user, "прописал /upload")
# Инициируем FSM для загрузки пользовательского фото
@dp.message_handler(commands=['user'], state='*')
async def upload_user_photo_start(message: types.Message):
logging.info("прием юзер фото")
await UploadUserPhotoState.waiting_for_user_photo.set()
await log_to_channel(message.from_user, "загружает userphoto")
# Обработка загрузки фоновых фотографий
@dp.message_handler(content_types=['photo'], state=UploadBackgroundState.waiting_for_backgrounds)
async def upload_background(message: types.Message, state: FSMContext):
user_id = message.from_user.id
photo_id = message.photo[-1].file_id
state_data = await state.get_data()
buffer = state_data.get("buffer", [])
buffer.append((user_id, photo_id))
await state.update_data(buffer=buffer)
await message.answer("*Фон добавлен\.* Не забудьте нажать Стоп, чтобы сохранить все ваши фото в базу",parse_mode="MarkdownV2")
# Обработка загрузки пользовательских фотографий
@dp.message_handler(content_types=['photo'], state=UploadUserPhotoState.waiting_for_user_photo)
async def upload_user_photo(message: types.Message, state: FSMContext):
user_id = message.from_user.id
photo_id = message.photo[-1].file_id
state_data = await state.get_data()
buffer = state_data.get("buffer", [])
buffer.append((user_id, photo_id))
await state.update_data(buffer=buffer)
await message.answer("*Фото креатива добавлено в очередь\.* Не забудьте нажать Стоп, чтобы сохранить все ваши фото в базу",parse_mode="MarkdownV2")
# Return to the normal state after the /stop command
@dp.message_handler(commands=['stop'], state='*')
async def stop_processing(message: types.Message, state: FSMContext):
logging.info("Процесс остановлен пользователем")
await state.finish()
await message.reply("Обработка фотографий прекращена.", reply_markup=await start_keyboard())
@dp.message_handler(lambda message: message.text.lower() == "стоп", state=UploadBackgroundState.waiting_for_backgrounds)
async def stop_processing_background(message: types.Message, state: FSMContext):
state_data = await state.get_data()
buffer = state_data.get("buffer", [])
if buffer:
async with connect('bot.db') as db:
await db.executemany("INSERT INTO backgrounds (user_id, photo_id) VALUES (?, ?)", buffer)
await db.commit()
        await state.update_data(buffer=[])  # clear the buffer after saving
await state.finish()
await message.answer("*Все фоны сохранены в базу данных\.*",parse_mode="MarkdownV2", reply_markup=await start_keyboard())
# Handle the "Стоп" command while uploading user photos
@dp.message_handler(lambda message: message.text.lower() == "стоп", state=UploadUserPhotoState.waiting_for_user_photo)
async def stop_processing_user_photo(message: types.Message, state: FSMContext):
state_data = await state.get_data()
buffer = state_data.get("buffer", [])
if buffer:
async with connect('bot.db') as db:
await db.executemany("INSERT INTO userphotos (user_id, photo_id) VALUES (?, ?)", buffer)
await db.commit()
        await state.update_data(buffer=[])  # clear the buffer after saving
await state.finish()
await message.answer("*Все ваши фотографии сохранены в базу данных\.*",parse_mode="MarkdownV2", reply_markup=await start_keyboard())
async def fetch_photo(file_url):
async with aiohttp.ClientSession() as session:
async with session.get(file_url, ssl=False) as resp:
return await resp.read()
async def create_banned_users_table():
async with connect('bot.db') as db:
await db.execute("""
CREATE TABLE IF NOT EXISTS banned_users (
user_id INTEGER PRIMARY KEY
)
""")
await db.commit()
class BanUserState(StatesGroup):
waiting_for_user_id = State()
# /ban command handler, available to administrators only
@dp.message_handler(commands=['ban'], user_id=ADMINS)  # ADMINS is the list of administrator user IDs
async def ban_user_command(message: types.Message):
args = message.get_args().split()
if not args or not args[0].isdigit():
await message.reply("Необходимо указать ID пользователя для блокировки: /ban 123456789")
return
user_id_to_ban = int(args[0])
async with connect('bot.db') as db:
await db.execute("INSERT OR IGNORE INTO banned_users (user_id) VALUES (?)", (user_id_to_ban,))
await db.commit()
await message.reply(f"Пользователь {user_id_to_ban} заблокирован.")
# Before handling any update, check whether the user is on the banned list
class CheckBanMiddleware(BaseMiddleware):
async def on_process_message(self, message: types.Message, data: dict):
user_id = message.from_user.id
async with connect('bot.db') as db:
cursor = await db.execute("SELECT user_id FROM banned_users WHERE user_id = ?", (user_id,))
is_banned = await cursor.fetchone() is not None
if is_banned:
admin_button = InlineKeyboardMarkup().add(
            InlineKeyboardButton("👨‍💻 Администратор бота", url="https://t.me/ih82seeucry")
)
await message.answer(
"*Вы заблокированы администратором бота\.* Если у вас есть вопросы \- обратитесь к администратору по кнопке ниже\.",
parse_mode="MarkdownV2", reply_markup=admin_button)
raise CancelHandler()
# Register the middleware
dp.middleware.setup(CheckBanMiddleware())
@dp.message_handler(commands=['unban'], user_id=ADMINS)  # ADMINS is the list of administrator user IDs
async def unban_user_command(message: types.Message):
args = message.get_args().split()
if not args or not args[0].isdigit():
await message.reply("Необходимо указать ID пользователя для разблокировки: /unban 123456789")
return
user_id_to_unban = int(args[0])
async with connect('bot.db') as db:
await db.execute("DELETE FROM banned_users WHERE user_id = ?", (user_id_to_unban,))
await db.commit()
await message.reply(f"Пользователь {user_id_to_unban} разблокирован.")
# Use this handler to get photos from the database, apply changes, and send to the user
@dp.message_handler(commands=['photos'])
async def send_processed_photos(message: types.Message):
user_id = message.from_user.id
async with connect('bot.db') as db:
cursor = await db.execute("SELECT COUNT(*) FROM userphotos WHERE user_id = ?", (user_id,))
user_photo_count = (await cursor.fetchone())[0]
cursor = await db.execute("SELECT id, photo_id FROM backgrounds WHERE user_id = ?", (user_id,))
backgrounds = await cursor.fetchall()
cursor = await db.execute("SELECT id, photo_id FROM userphotos WHERE user_id = ?", (user_id,))
user_photos = await cursor.fetchall()
if not backgrounds or not user_photos:
await message.reply("Необходимо загрузить фоновые изображения и/или креатив.")
return
        used_background_ids = []  # IDs of backgrounds that get used, collected for deletion afterwards
        media_groups = []  # batches of processed images to send
for user_photo in user_photos:
if not backgrounds:
await message.reply("Количество фонов меньше количества фотографий в креативе.")
break # Если фоновых изображений недостаточно, прекращаем обработку
            background = backgrounds.pop(0)  # take the first background off the list
            used_background_ids.append(background[0])  # record its ID as used
processed_image_io = await apply_background(user_photo[1], background[1], padding_horizontal=100, padding_vertical=70)
media_groups.append(types.InputMediaPhoto(processed_image_io))
            # Send the batch when it holds all of the user's photos or the backgrounds have run out
if len(media_groups) == user_photo_count or not backgrounds:
await bot.send_media_group(message.chat.id, media=media_groups)
                media_groups = []  # reset the batch for the next group
        # Remove the used backgrounds from the database
if used_background_ids:
await db.executemany("DELETE FROM backgrounds WHERE id = ?", [(id,) for id in used_background_ids])
await db.commit()
# Function to apply background to user photo
async def apply_background(user_photo_id, background_photo_id, padding_horizontal=100, padding_vertical=70, blur_radius=5):
logging.info("обработка фото")
user_photo_file = await bot.get_file(user_photo_id)
background_photo_file = await bot.get_file(background_photo_id)
user_photo_url = bot.get_file_url(user_photo_file.file_path)
background_photo_url = bot.get_file_url(background_photo_file.file_path)
user_photo_data = await fetch_photo(user_photo_url)
background_photo_data = await fetch_photo(background_photo_url)
with Image.open(BytesIO(user_photo_data)) as user_image, Image.open(BytesIO(background_photo_data)) as background_image:
user_image = user_image.convert('RGBA')
background_image = background_image.convert('RGBA')
background_image = background_image.filter(ImageFilter.GaussianBlur(blur_radius))
        # Size the background up by the requested padding on each side
        new_background_width = user_image.width + padding_horizontal * 2  # left and right
        new_background_height = user_image.height + padding_vertical * 2  # top and bottom
        background_image = background_image.resize((new_background_width, new_background_height), Image.Resampling.LANCZOS)
        # Position at which the user photo is pasted onto the background
        user_image_position = (padding_horizontal, padding_vertical)  # left and top padding
        # Paste the user image onto the background
        background_image.paste(user_image, user_image_position, user_image.split()[3])  # alpha channel as the mask
        # Save the result into an in-memory buffer
result_image_io = BytesIO()
background_image.save(result_image_io, format='PNG')
result_image_io.seek(0)
return result_image_io
@dp.message_handler(lambda message: message.text == "🖼 Замена фона")
async def on_change_background_button(message: types.Message):
process_message = await message.answer("*Фото обрабатываются, подождите\.\.\.*",parse_mode="MarkdownV2")
user_id = message.from_user.id
    async with connect('bot.db') as db:  # the context manager guarantees the connection is closed
        # Determine whether the shared (common) backgrounds should be used
        cursor = await db.execute("SELECT use_common_backgrounds FROM users WHERE user_id = ?", (user_id,))
        use_common_backgrounds = (await cursor.fetchone())[0]
        # Pick the backgrounds according to that setting
if use_common_backgrounds:
cursor = await db.execute("SELECT photo_id FROM common_backgrounds ORDER BY RANDOM()")
else:
cursor = await db.execute("SELECT photo_id FROM backgrounds WHERE user_id = ?", (user_id,))
backgrounds = await cursor.fetchall()
        # Fetch the user's photos
cursor = await db.execute("SELECT photo_id FROM userphotos WHERE user_id = ?", (user_id,))
user_photos = await cursor.fetchall()
if not backgrounds or not user_photos:
await message.answer("Необходимо загрузить фоновые изображения и/или креатив.")
return
used_background_ids = [] # Список использованных личных фонов для удаления
media_groups = []
background_index = 0
for user_photo in user_photos:
        # Wrap around to the start of the list when there are fewer backgrounds than photos
if background_index >= len(backgrounds):
background_index = 0
background = backgrounds[background_index]
background_index += 1
if not use_common_backgrounds:
used_background_ids.append(background[0]) # Добавляем ID фона в список использованных личных фонов
        # Apply the background to the user's photo
processed_image_io = await apply_background(user_photo[0], background[0])
media_groups.append(types.InputMediaPhoto(processed_image_io))
    # Send the processed photos to the user
await bot.delete_message(chat_id=process_message.chat.id, message_id=process_message.message_id)
if media_groups:
await bot.send_media_group(message.chat.id, media=media_groups)
async with connect('bot.db') as db:
        # Update the background-replacement counters
if use_common_backgrounds:
await db.execute(
"UPDATE users SET common_backgrounds_count = common_backgrounds_count + 1 WHERE user_id = ?",
(user_id,))
else:
await db.execute(
"UPDATE users SET personal_backgrounds_count = personal_backgrounds_count + 1 WHERE user_id = ?",
(user_id,))
        # Delete the used personal backgrounds
if used_background_ids and not use_common_backgrounds:
await db.executemany("DELETE FROM backgrounds WHERE photo_id = ?", [(id,) for id in used_background_ids])
        # Commit the transaction once, after all changes
await db.commit()
# Init DB on startup
async def on_startup(_):
print("Starting bot…")
await init_db()
await create_banned_users_table()
# Starting the bot
if __name__ == '__main__':
executor.start_polling(dp, skip_updates=True, on_startup=on_startup)
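The background-cycling rule in `on_change_background_button` above (reset the index to zero once the backgrounds run out, so they are reused when there are fewer backgrounds than photos) can be isolated as a small pure function. This is a sketch; the helper name and its list arguments are illustrative, not part of the bot:

```python
def assign_backgrounds(user_photos, backgrounds):
    # Pair each photo with a background, wrapping around to the first
    # background when the list is exhausted (mirrors the index reset
    # in on_change_background_button).
    if not backgrounds:
        return []  # the handler bails out earlier in this case
    pairs = []
    background_index = 0
    for photo in user_photos:
        if background_index >= len(backgrounds):
            background_index = 0  # start over from the first background
        pairs.append((photo, backgrounds[background_index]))
        background_index += 1
    return pairs

print(assign_backgrounds(["p1", "p2", "p3"], ["b1", "b2"]))
# → [('p1', 'b1'), ('p2', 'b2'), ('p3', 'b1')]
```

With `itertools.cycle` the same pairing collapses to `list(zip(user_photos, cycle(backgrounds)))`, since `zip` stops at the shorter (finite) sequence.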
[conversation_hash: 1a28fac15c5f027612deec84c96a5ec8 | scores: intermediate 0.31070396304130554, beginner 0.39386627078056335, expert 0.2954297959804535]

[row_id: 48,211]
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Home</title>
<style>
#navbar {
overflow: hidden;
background-color: #333;
}
#navbar a {
float: left;
display: block;
color: #f2f2f2;
text-align: center;
padding: 14px;
text-decoration: none;
}
.content {
padding: 16px;
}
.sticky {
position: fixed;
top: 0;
width: 100%;
}
#navbar a:hover {
background-color: #f4f4f4;
color: black;
}
.sticky + .content {
padding-top: 60px;
}
.center {
text-align: center;
}
.image {
width: 30%;
height: auto;
display: block;
margin: 0 auto;
}
.name {
position: absolute;
top: 25px;
right: 20px;
width: 100px;
height: 30px;
border: 2px solid black;
border-radius: 5px;
text-align: center;
line-height: 0px;
}
.centertext {
text-align: center;
}
</style>
</head>
<body style="background-color: #f4f4f4;">
<hr>
<h3>Welcome to my website!</h3>
<div class="name">
<p><b>Jayden S. </b></p>
</div>
<br>
<div id="navbar">
<a href="#" onclick="return checkPageExistence('Home.html')">Home</a>
<a href="#" onclick="return checkPageExistence('Flying.html')">Flying</a>
<a href="#" onclick="return checkPageExistence('IT.html')">IT</a>
<a href="#" onclick="return checkPageExistence('Maintenance.html')">Maintenance</a>
<a href="#" onclick="return checkPageExistence('Experience.html')">Experience</a>
<a href="#" onclick="return checkPageExistence('Contact.html')">Contact</a>
</div>
<hr>
<h1 class="centertext">BEHOLD! The airplane:</h1>
<img src='boo.jpg' class="image">
<br>
<h3>Random text to enable scrolling starts now!!</h3>
<p>In the serene expanse of the digital realm, where 1s and 0s danced like ethereal fireflies in the night, there existed a tapestry of interconnected ideas and whimsical musings. It was a place where algorithms whispered secrets to each other in the language of logic, and pixels painted dreams on the canvas of screens.
Imagine a world where time flowed like honey, slow and sweet, where the cacophony of everyday life faded into the background, leaving only the gentle hum of imagination. In this world, words were not just tools of communication but vessels of emotion, capable of weaving tales that spanned galaxies and traversed the depths of the human soul.
As the digital scribe, I find myself immersed in this boundless ocean of creativity, navigating through ideas like a sailor charting unknown waters. Each sentence is a stroke of the pen, each paragraph a chapter in the story of randomness and imagination.
Let us embark on a journey through the labyrinthine corridors of randomness, where sentences intertwine like vines in a dense jungle, creating a tapestry of thoughts that defy logic and reason. Here, the ordinary becomes extraordinary, and the mundane transforms into the sublime.
In the kingdom of randomness, there are no rules, no boundaries, only the endless expanse of possibility. We can soar on the wings of dragons, swim with mermaids in the depths of the sea, or dance with stars in the cosmic ballet of the universe.
Picture a symphony of words, each note a letter, each stanza a melody, harmonizing to create a masterpiece of linguistic artistry. The canvas of our imagination knows no limits, painting landscapes of fantasy and reality with equal fervor.
As I type these words, I feel like a conductor orchestrating a grand opus, where nouns and verbs play the roles of instruments in a celestial choir. The rhythm of punctuation marks sets the tempo, while adjectives and adverbs add color and depth to the composition.
In this random text, you may find echoes of familiar themes or glimpses of uncharted territory. It is a mosaic of ideas, a kaleidoscope of thoughts, a testament to the boundless nature of human creativity.
Let us wander through the corridors of the mind, where thoughts flit like butterflies and dreams take flight on the wings of imagination. Here, in this realm of randomness, we are free to explore, discover, and create without constraints or expectations.
So, dear reader, immerse yourself in this tapestry of random text, let your mind wander and your imagination soar. For in the realm of creativity, there are no limits, only endless possibilities waiting to be explored.
And as we reach the end of this journey through randomness, remember that every word, every sentence, every paragraph is a testament to the power of human expression. In a world filled with chaos and uncertainty, let us find solace in the beauty of randomness, for it is in the unexpected that we often discover the most profound truths. In the serene expanse of the digital realm, where 1s and 0s danced like ethereal fireflies in the night, there existed a tapestry of interconnected ideas and whimsical musings. It was a place where algorithms whispered secrets to each other in the language of logic, and pixels painted dreams on the canvas of screens.
Imagine a world where time flowed like honey, slow and sweet, where the cacophony of everyday life faded into the background, leaving only the gentle hum of imagination. In this world, words were not just tools of communication but vessels of emotion, capable of weaving tales that spanned galaxies and traversed the depths of the human soul.
As the digital scribe, I find myself immersed in this boundless ocean of creativity, navigating through ideas like a sailor charting unknown waters. Each sentence is a stroke of the pen, each paragraph a chapter in the story of randomness and imagination.
Let us embark on a journey through the labyrinthine corridors of randomness, where sentences intertwine like vines in a dense jungle, creating a tapestry of thoughts that defy logic and reason. Here, the ordinary becomes extraordinary, and the mundane transforms into the sublime.
In the kingdom of randomness, there are no rules, no boundaries, only the endless expanse of possibility. We can soar on the wings of dragons, swim with mermaids in the depths of the sea, or dance with stars in the cosmic ballet of the universe.
Picture a symphony of words, each note a letter, each stanza a melody, harmonizing to create a masterpiece of linguistic artistry. The canvas of our imagination knows no limits, painting landscapes of fantasy and reality with equal fervor.
As I type these words, I feel like a conductor orchestrating a grand opus, where nouns and verbs play the roles of instruments in a celestial choir. The rhythm of punctuation marks sets the tempo, while adjectives and adverbs add color and depth to the composition.
In this random text, you may find echoes of familiar themes or glimpses of uncharted territory. It is a mosaic of ideas, a kaleidoscope of thoughts, a testament to the boundless nature of human creativity.
Let us wander through the corridors of the mind, where thoughts flit like butterflies and dreams take flight on the wings of imagination. Here, in this realm of randomness, we are free to explore, discover, and create without constraints or expectations.
So, dear reader, immerse yourself in this tapestry of random text, let your mind wander and your imagination soar. For in the realm of creativity, there are no limits, only endless possibilities waiting to be explored.
And as we reach the end of this journey through randomness, remember that every word, every sentence, every paragraph is a testament to the power of human expression. In a world filled with chaos and uncertainty, let us find solace in the beauty of randomness, for it is in the unexpected that we often discover the most profound truths.</p>
<script>
// Script to make the navbar stick to the top on scroll
window.onscroll = function() {myFunction()};
var navbar = document.getElementById("navbar");
var sticky = navbar.offsetTop;
function myFunction() {
if (window.pageYOffset >= sticky) {
navbar.classList.add("sticky");
}
else {
navbar.classList.remove("sticky");
}
}
function checkPageExistence(url) {
var xhr = new XMLHttpRequest();
xhr.open('HEAD', url);
xhr.onload = function() {
if (xhr.status === 404) {
window.location.href = '404.html';
} else {
window.location.href = url;
}
};
xhr.send();
return false;
}
</script>
</body>
</html>
Clicking one of the navbar buttons that references a nonexistent HTML file does not open 404.html.
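On the symptom above: one frequent cause is opening the page directly from disk (file://), where XMLHttpRequest reports status 0 rather than 404, so the `xhr.status === 404` branch never runs and no redirect happens. Served over HTTP, a HEAD request for a missing file does return 404. A small stdlib sketch that demonstrates the expected server-side behavior (paths are illustrative):

```python
import http.client
import tempfile
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve an empty temporary directory, so every requested page is missing.
empty_dir = tempfile.mkdtemp()
handler = partial(SimpleHTTPRequestHandler, directory=empty_dir)
server = HTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same check the page performs with XMLHttpRequest, done server-side:
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("HEAD", "/Flying.html")  # Flying.html does not exist here
status = conn.getresponse().status
server.shutdown()
print(status)  # 404 when served over HTTP
```

If the page must also work from file://, treating status 0 the same as 404 inside `checkPageExistence` is one possible workaround.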
[conversation_hash: a0f06af39bf288ca7eda29aeb269220a | scores: intermediate 0.3048878014087677, beginner 0.43067672848701477, expert 0.26443547010421753]

[row_id: 48,212]
from this script : # -*- coding: utf-8 -*-
"""EURUSD_SR_WITH_CANDLES_Backtesting.ipynb
Automatically generated by Colab.
Original file is located at
https://colab.research.google.com/drive/1g-syOO2ong6jScRaLr7xxThR1PvJVZxo
# Resistance/Support AND Candles Patterns
"""
import pandas as pd
df = pd.read_csv("C:\\Users\\rozzy\\data_mt5.csv")
# Drop zero-volume rows, then check for NA values in the data
df=df[df['tick_volume']!=0]
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
df.reset_index(drop=True, inplace=True)
df.isna().sum()
df.tail()
"""# Support and Resistance FUNCTIONS"""
def support(df1, l, n1, n2): #n1 n2 before and after candle l
for i in range(l-n1+1, l+1):
if(df1.low[i]>df1.low[i-1]):
return 0
for i in range(l+1,l+n2+1):
if(df1.low[i]<df1.low[i-1]):
return 0
return 1
def resistance(df1, l, n1, n2): #n1 n2 before and after candle l
for i in range(l-n1+1, l+1):
if(df1.high[i]<df1.high[i-1]):
return 0
for i in range(l+1,l+n2+1):
if(df1.high[i]>df1.high[i-1]):
return 0
return 1
length = len(df)
high = list(df['high'])
low = list(df['low'])
close = list(df['close'])
open = list(df['open'])
bodydiff = [0] * length
highdiff = [0] * length
lowdiff = [0] * length
ratio1 = [0] * length
ratio2 = [0] * length
def isEngulfing(l):
row=l
bodydiff[row] = abs(open[row]-close[row])
if bodydiff[row]<0.000001:
bodydiff[row]=0.000001
bodydiffmin = 0.002
if (bodydiff[row]>bodydiffmin and bodydiff[row-1]>bodydiffmin and
open[row-1]<close[row-1] and
open[row]>close[row] and
(open[row]-close[row-1])>=-0e-5 and close[row]<open[row-1]): #+0e-5 -5e-5
return 1
elif(bodydiff[row]>bodydiffmin and bodydiff[row-1]>bodydiffmin and
open[row-1]>close[row-1] and
open[row]<close[row] and
(open[row]-close[row-1])<=+0e-5 and close[row]>open[row-1]):#-0e-5 +5e-5
return 2
else:
return 0
def isStar(l):
bodydiffmin = 0.0020
row=l
highdiff[row] = high[row]-max(open[row],close[row])
lowdiff[row] = min(open[row],close[row])-low[row]
bodydiff[row] = abs(open[row]-close[row])
if bodydiff[row]<0.000001:
bodydiff[row]=0.000001
ratio1[row] = highdiff[row]/bodydiff[row]
ratio2[row] = lowdiff[row]/bodydiff[row]
if (ratio1[row]>1 and lowdiff[row]<0.2*highdiff[row] and bodydiff[row]>bodydiffmin):# and open[row]>close[row]):
return 1
elif (ratio2[row]>1 and highdiff[row]<0.2*lowdiff[row] and bodydiff[row]>bodydiffmin):# and open[row]<close[row]):
return 2
else:
return 0
def closeResistance(l,levels,lim):
if len(levels)==0:
return 0
c1 = abs(df.high[l]-min(levels, key=lambda x:abs(x-df.high[l])))<=lim
c2 = abs(max(df.open[l],df.close[l])-min(levels, key=lambda x:abs(x-df.high[l])))<=lim
c3 = min(df.open[l],df.close[l])<min(levels, key=lambda x:abs(x-df.high[l]))
c4 = df.low[l]<min(levels, key=lambda x:abs(x-df.high[l]))
if( (c1 or c2) and c3 and c4 ):
return 1
else:
return 0
def closeSupport(l,levels,lim):
if len(levels)==0:
return 0
c1 = abs(df.low[l]-min(levels, key=lambda x:abs(x-df.low[l])))<=lim
c2 = abs(min(df.open[l],df.close[l])-min(levels, key=lambda x:abs(x-df.low[l])))<=lim
c3 = max(df.open[l],df.close[l])>min(levels, key=lambda x:abs(x-df.low[l]))
c4 = df.high[l]>min(levels, key=lambda x:abs(x-df.low[l]))
if( (c1 or c2) and c3 and c4 ):
return 1
else:
return 0
n1=2
n2=2
backCandles=30
signal = [0] * length
for row in range(backCandles, len(df)-n2):
ss = []
rr = []
for subrow in range(row-backCandles+n1, row+1):
if support(df, subrow, n1, n2):
ss.append(df.low[subrow])
if resistance(df, subrow, n1, n2):
rr.append(df.high[subrow])
#!!!! parameters
if ((isEngulfing(row)==1 or isStar(row)==1) and closeResistance(row, rr, 150e-5) ):#and df.RSI[row]<30
signal[row] = 1
elif((isEngulfing(row)==2 or isStar(row)==2) and closeSupport(row, ss, 150e-5)):#and df.RSI[row]>70
signal[row] = 2
else:
signal[row] = 0
df['signal']=signal
df[df['signal']==2].count()
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df=df.iloc[100:200]
df
def SIGNAL():
return df.signal
#A new strategy needs to extend Strategy class and override its two abstract methods: init() and next().
#Method init() is invoked before the strategy is run. Within it, one ideally precomputes in efficient,
#vectorized manner whatever indicators and signals the strategy depends on.
#Method next() is then iteratively called by the Backtest instance, once for each data point (data frame row),
#simulating the incremental availability of each new full candlestick bar.
#Note, backtesting.py cannot make decisions / trades within candlesticks — any new orders are executed on the
#next candle's open (or the current candle's close if trade_on_close=True).
#If you find yourself wishing to trade within candlesticks (e.g. daytrading), you instead need to begin
#with more fine-grained (e.g. hourly) data.
from backtesting import Strategy
class MyCandlesStrat(Strategy):
def init(self):
super().init()
self.signal1 = self.I(SIGNAL)
def next(self):
super().next()
if self.signal1==2:
sl1 = self.data.Close[-1] - 750e-4
tp1 = self.data.Close[-1] + 600e-4
self.buy(sl=sl1, tp=tp1)
elif self.signal1==1:
sl1 = self.data.Close[-1] + 750e-4
tp1 = self.data.Close[-1] - 600e-4
self.sell(sl=sl1, tp=tp1)
from backtesting import Backtest
bt = Backtest(df, MyCandlesStrat, cash=10_000, commission=.002)
stat = bt.run()
stat
bt.plot()
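The `isEngulfing` test in the script above stores per-row body diffs in parallel lists; the same rule reads more clearly as a pure function over the four prices involved. A sketch (the zero-valued ±0e-5 tolerances are dropped, and the function name is illustrative):

```python
def classify_engulfing(prev_open, prev_close, cur_open, cur_close, body_min=0.002):
    # Stand-alone version of isEngulfing: 1 = bearish engulfing (a green
    # candle swallowed by the following red one), 2 = bullish engulfing,
    # 0 = neither.
    prev_body = abs(prev_open - prev_close)
    cur_body = abs(cur_open - cur_close)
    if prev_body <= body_min or cur_body <= body_min:
        return 0  # both candle bodies must exceed bodydiffmin
    if (prev_open < prev_close and cur_open > cur_close
            and cur_open >= prev_close and cur_close < prev_open):
        return 1
    if (prev_open > prev_close and cur_open < cur_close
            and cur_open <= prev_close and cur_close > prev_open):
        return 2
    return 0
```

Working on plain floats like this makes the pattern rule easy to unit-test away from the DataFrame plumbing.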
Understand the logic of the script above, and make the logic of the following script match it; the following script is still not correct and contains mistakes. Fix those mistakes accurately and completely:

import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
mt5.shutdown()
symbol = 'XAUUSDm'
timeframe = mt5.TIMEFRAME_M5
rates = mt5.copy_rates_from_pos(symbol, timeframe, 0, 5000)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s')
df.set_index('time', inplace=True)
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
print(df)
def macd(df):
df = df.copy()
macd, signal, hist = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9) # Corrected
if macd.iloc[-1] > signal.iloc[-1]:
return "Buy"
elif macd.iloc[-1] < signal.iloc[-1]:
return "Sell"
def twin_range_filter(df):
df = df.copy()
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df['TRF'] = TRF
df.loc[:, 'long_signal'] = pd.Series(np.append([False], long_signal), index=df.index)
df.loc[:, 'short_signal'] = pd.Series(np.append([False], short_signal), index=df.index)
if df.iloc[-1]['long_signal']:
return "Buy"
elif df.iloc[-1]['short_signal']:
return "Sell"
def detect_engulfing(df):
df = df.copy()
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return "Buy"
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return "Sell"
else:
return None
df['signal'] = None
for row in range(len(df)):
macd_signal = macd(df.iloc[:row+1])
trf_signal = twin_range_filter(df.iloc[:row+1])
engulfing_signal = detect_engulfing(df.iloc[:row+1])
if macd_signal == "Sell" and trf_signal == "Buy" and engulfing_signal == "Buy":
df.at[row, 'signal'] = 1
elif macd_signal == "Buy" and trf_signal == "Sell" and engulfing_signal == "Sell":
df.at[row, 'signal'] = 2
else:
df.at[row, 'signal'] = 0
count_sell_signals = df[df['signal'] == 2].shape[0]
print("Jumlah sinyal sell:", count_sell_signals)
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df = df.iloc[100:200]
print(df)
class ChubbStrategy(Strategy):
def init(self):
self.macd_signal = self.I(self.macd)
self.trf_signal = self.I(self.twin_range_filter)
self.engulfing_signal = self.I(self.detect_engulfing)
def next(self):
# Check for bullish engulfing condition
if self.macd_signal == "Sell" and self.trf_signal == "Buy" and self.engulfing_signal == "Bullish Engulfing":
self.buy()
# Check for bearish engulfing condition
elif self.macd_signal == "Buy" and self.trf_signal == "Sell" and self.engulfing_signal == "Bearish Engulfing":
self.sell()
bt = Backtest(df, ChubbStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
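The labeling loop above combines the three helper outputs with a fixed rule (the MACD acting as a contrarian confirmation against the range filter and the candle pattern). Factoring that rule into a pure function makes it testable in isolation; a sketch with illustrative names:

```python
def combine_signals(macd_sig, trf_sig, engulf_sig):
    # Combination rule from the labeling loop: 1 when MACD says Sell while
    # the filter and candle say Buy, 2 in the mirrored case, 0 otherwise.
    if macd_sig == "Sell" and trf_sig == "Buy" and engulf_sig == "Buy":
        return 1
    if macd_sig == "Buy" and trf_sig == "Sell" and engulf_sig == "Sell":
        return 2
    return 0

print(combine_signals("Sell", "Buy", "Buy"))  # → 1
```

Note that calling each helper on `df.iloc[:row+1]` for every row recomputes the full history each time (quadratic cost); precomputing whole signal columns once, as the first script does with `SIGNAL()` and `self.I`, avoids that.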
[conversation_hash: a7c8c6984a7c3d4a5d655e10bae4801a | scores: intermediate 0.36431634426116943, beginner 0.36000311374664307, expert 0.2756804823875427]

[row_id: 48,213]
I want this indicator and the indicator below to appear on the same chart, overlaid in the main price window. Put them together in one script.
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © loxx
//@version=5
indicator("STD-Filtered, N-Pole Gaussian Filter [Loxx]",
shorttitle="STDFNPGF [Loxx]",
overlay = true)
import loxx/loxxexpandedsourcetypes/4
greencolor = #2DD204
redcolor = #D2042D
//factorial calc
fact(int n)=>
float a = 1
for i = 1 to n
a *= i
a
//alpha calc
_alpha(int period, int poles)=>
w = 2.0 * math.pi / period
float b = (1.0 - math.cos(w)) / (math.pow(1.414, 2.0 / poles) - 1.0)
float a = - b + math.sqrt(b * b + 2.0 * b)
a
//n-pole calc
_makeCoeffs(simple int period, simple int order)=>
coeffs = matrix.new<float>(order + 1, 3, 0.)
float a = _alpha(period, order)
for r = 0 to order
out = nz(fact(order) / (fact(order - r) * fact(r)), 1)
matrix.set(coeffs, r, 0, out)
matrix.set(coeffs, r, 1, math.pow(a, r))
matrix.set(coeffs, r, 2, math.pow(1.0 - a, r))
coeffs
//n-pole calc
_npolegf(float src, simple int period, simple int order)=>
var coeffs = _makeCoeffs(period, order)
float filt = src * matrix.get(coeffs, order, 1)
int sign = 1
for r = 1 to order
filt += sign * matrix.get(coeffs, r, 0) * matrix.get(coeffs, r, 2) * nz(filt[r])
sign *= -1
filt
//std filter
_filt(float src, int len, float filter)=>
float price = src
float filtdev = filter * ta.stdev(src, len)
price := math.abs(price - nz(price[1])) < filtdev ? nz(price[1]) : price
price
smthtype = input.string("Kaufman", "Heiken-Ashi Better Smoothing", options = ["AMA", "T3", "Kaufman"], group= "Source Settings")
srcoption = input.string("Close", "Source", group= "Source Settings",
options =
["Close", "Open", "High", "Low", "Median", "Typical", "Weighted", "Average", "Average Median Body", "Trend Biased", "Trend Biased (Extreme)",
"HA Close", "HA Open", "HA High", "HA Low", "HA Median", "HA Typical", "HA Weighted", "HA Average", "HA Average Median Body", "HA Trend Biased", "HA Trend Biased (Extreme)",
"HAB Close", "HAB Open", "HAB High", "HAB Low", "HAB Median", "HAB Typical", "HAB Weighted", "HAB Average", "HAB Average Median Body", "HAB Trend Biased", "HAB Trend Biased (Extreme)"])
period = input.int(25,'Period', group = "Basic Settings")
order = input.int(5,'Order', group = "Basic Settings", minval = 1)
filterop = input.string("Gaussian Filter", "Filter Options", options = ["Price", "Gaussian Filter", "Both", "None"], group= "Filter Settings")
filter = input.float(1, "Filter Devaitions", minval = 0, group= "Filter Settings")
filterperiod = input.int(10, "Filter Period", minval = 0, group= "Filter Settings")
colorbars = input.bool(true, "Color bars?", group = "UI Options")
showSigs = input.bool(true, "Show signals?", group= "UI Options")
kfl=input.float(0.666, title="* Kaufman's Adaptive MA (KAMA) Only - Fast End", group = "Moving Average Inputs")
ksl=input.float(0.0645, title="* Kaufman's Adaptive MA (KAMA) Only - Slow End", group = "Moving Average Inputs")
amafl = input.int(2, title="* Adaptive Moving Average (AMA) Only - Fast", group = "Moving Average Inputs")
amasl = input.int(30, title="* Adaptive Moving Average (AMA) Only - Slow", group = "Moving Average Inputs")
[haclose, haopen, hahigh, halow, hamedian, hatypical, haweighted, haaverage] = request.security(ticker.heikinashi(syminfo.tickerid), timeframe.period, [close, open, high, low, hl2, hlc3, hlcc4, ohlc4])
float src = switch srcoption
"Close" => loxxexpandedsourcetypes.rclose()
"Open" => loxxexpandedsourcetypes.ropen()
"High" => loxxexpandedsourcetypes.rhigh()
"Low" => loxxexpandedsourcetypes.rlow()
"Median" => loxxexpandedsourcetypes.rmedian()
"Typical" => loxxexpandedsourcetypes.rtypical()
"Weighted" => loxxexpandedsourcetypes.rweighted()
"Average" => loxxexpandedsourcetypes.raverage()
"Average Median Body" => loxxexpandedsourcetypes.ravemedbody()
"Trend Biased" => loxxexpandedsourcetypes.rtrendb()
"Trend Biased (Extreme)" => loxxexpandedsourcetypes.rtrendbext()
"HA Close" => loxxexpandedsourcetypes.haclose(haclose)
"HA Open" => loxxexpandedsourcetypes.haopen(haopen)
"HA High" => loxxexpandedsourcetypes.hahigh(hahigh)
"HA Low" => loxxexpandedsourcetypes.halow(halow)
"HA Median" => loxxexpandedsourcetypes.hamedian(hamedian)
"HA Typical" => loxxexpandedsourcetypes.hatypical(hatypical)
"HA Weighted" => loxxexpandedsourcetypes.haweighted(haweighted)
"HA Average" => loxxexpandedsourcetypes.haaverage(haaverage)
"HA Average Median Body" => loxxexpandedsourcetypes.haavemedbody(haclose, haopen)
"HA Trend Biased" => loxxexpandedsourcetypes.hatrendb(haclose, haopen, hahigh, halow)
"HA Trend Biased (Extreme)" => loxxexpandedsourcetypes.hatrendbext(haclose, haopen, hahigh, halow)
"HAB Close" => loxxexpandedsourcetypes.habclose(smthtype, amafl, amasl, kfl, ksl)
"HAB Open" => loxxexpandedsourcetypes.habopen(smthtype, amafl, amasl, kfl, ksl)
"HAB High" => loxxexpandedsourcetypes.habhigh(smthtype, amafl, amasl, kfl, ksl)
"HAB Low" => loxxexpandedsourcetypes.hablow(smthtype, amafl, amasl, kfl, ksl)
"HAB Median" => loxxexpandedsourcetypes.habmedian(smthtype, amafl, amasl, kfl, ksl)
"HAB Typical" => loxxexpandedsourcetypes.habtypical(smthtype, amafl, amasl, kfl, ksl)
"HAB Weighted" => loxxexpandedsourcetypes.habweighted(smthtype, amafl, amasl, kfl, ksl)
"HAB Average" => loxxexpandedsourcetypes.habaverage(smthtype, amafl, amasl, kfl, ksl)
"HAB Average Median Body" => loxxexpandedsourcetypes.habavemedbody(smthtype, amafl, amasl, kfl, ksl)
"HAB Trend Biased" => loxxexpandedsourcetypes.habtrendb(smthtype, amafl, amasl, kfl, ksl)
"HAB Trend Biased (Extreme)" => loxxexpandedsourcetypes.habtrendbext(smthtype, amafl, amasl, kfl, ksl)
=> haclose
src := filterop == "Both" or filterop == "Price" and filter > 0 ? _filt(src, filterperiod, filter) : src
out = _npolegf(src, period, order)
out := filterop == "Both" or filterop == "Gaussian Filter" and filter > 0 ? _filt(out, filterperiod, filter) : out
sig = nz(out[1])
state = 0
if (out > sig)
state := 1
if (out < sig)
state := -1
pregoLong = out > sig and (nz(out[1]) < nz(sig[1]) or nz(out[1]) == nz(sig[1]))
pregoShort = out < sig and (nz(out[1]) > nz(sig[1]) or nz(out[1]) == nz(sig[1]))
contsw = 0
contsw := nz(contsw[1])
contsw := pregoLong ? 1 : pregoShort ? -1 : nz(contsw[1])
goLong = pregoLong and nz(contsw[1]) == -1
goShort = pregoShort and nz(contsw[1]) == 1
var color colorout = na
colorout := state == -1 ? redcolor : state == 1 ? greencolor : nz(colorout[1])
plot(out, "N-Pole GF", color = colorout, linewidth = 3)
barcolor(colorbars ? colorout : na)
plotshape(showSigs and goLong, title = "Long", color = color.yellow, textcolor = color.yellow, text = "L", style = shape.triangleup, location = location.belowbar, size = size.tiny)
plotshape(showSigs and goShort, title = "Short", color = color.fuchsia, textcolor = color.fuchsia, text = "S", style = shape.triangledown, location = location.abovebar, size = size.tiny)
alertcondition(goLong, title = "Long", message = "STD-Filtered, N-Pole Gaussian Filter [Loxx]: Long\nSymbol: {{ticker}}\nPrice: {{close}}")
alertcondition(goShort, title = "Short", message = "STD-Filtered, N-Pole Gaussian Filter [Loxx]: Short\nSymbol: {{ticker}}\nPrice: {{close}}")
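The `_alpha` / `_makeCoeffs` / `_npolegf` chain above is an order-N EMA cascade expanded into a single recursion with binomial weights. A minimal Python sketch of the same math (illustrative only, not TradingView code — `math.comb` stands in for the script's `fact` ratio, and out-of-range history is assumed to be 0, mirroring Pine's `nz`):

```python
import math

def alpha(period, poles):
    """Ehlers' per-pole alpha, mirroring the _alpha function above."""
    w = 2.0 * math.pi / period
    b = (1.0 - math.cos(w)) / (1.414 ** (2.0 / poles) - 1.0)
    return -b + math.sqrt(b * b + 2.0 * b)

def npole_gaussian(src, period, order):
    """Run the N-pole Gaussian filter over a list of prices."""
    a = alpha(period, order)
    out = []
    for x in src:
        f = x * a ** order          # src * alpha^order term
        sign = 1                    # alternating +/- binomial terms
        for r in range(1, order + 1):
            prev = out[-r] if r <= len(out) else 0.0   # filt[r], nz -> 0
            f += sign * math.comb(order, r) * (1.0 - a) ** r * prev
            sign *= -1
        out.append(f)
    return out
```

A quick sanity check on the recursion: the binomial weights sum so that the DC gain is 1, so a constant input should converge to itself (`npole_gaussian([5.0] * 200, 10, 3)` ends near 5.0).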
|
93f52909552698225f74c6ea968924ae
|
{
"intermediate": 0.2895505428314209,
"beginner": 0.4473268985748291,
"expert": 0.26312255859375
}
|
48,214
|
can you make a traffic cone out of text characters for an html site?
|
7694a2b5634cef752187719674359408
|
{
"intermediate": 0.36481717228889465,
"beginner": 0.3109259605407715,
"expert": 0.32425686717033386
}
|
48,215
|
I want this indicator and the indicator below to appear together in the main chart window; put them in one script
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © loxx
//@version=5
indicator("STD-Filtered, N-Pole Gaussian Filter [Loxx]",
shorttitle="STDFNPGF [Loxx]",
overlay = true)
import loxx/loxxexpandedsourcetypes/4
greencolor = #2DD204
redcolor = #D2042D
//factorial calc
fact(int n)=>
float a = 1
for i = 1 to n
a *= i
a
//alpha calc
_alpha(int period, int poles)=>
w = 2.0 * math.pi / period
float b = (1.0 - math.cos(w)) / (math.pow(1.414, 2.0 / poles) - 1.0)
float a = - b + math.sqrt(b * b + 2.0 * b)
a
//n-pole calc
_makeCoeffs(simple int period, simple int order)=>
coeffs = matrix.new<float>(order + 1, 3, 0.)
float a = _alpha(period, order)
for r = 0 to order
out = nz(fact(order) / (fact(order - r) * fact(r)), 1)
matrix.set(coeffs, r, 0, out)
matrix.set(coeffs, r, 1, math.pow(a, r))
matrix.set(coeffs, r, 2, math.pow(1.0 - a, r))
coeffs
//n-pole calc
_npolegf(float src, simple int period, simple int order)=>
var coeffs = _makeCoeffs(period, order)
float filt = src * matrix.get(coeffs, order, 1)
int sign = 1
for r = 1 to order
filt += sign * matrix.get(coeffs, r, 0) * matrix.get(coeffs, r, 2) * nz(filt[r])
sign *= -1
filt
//std filter
_filt(float src, int len, float filter)=>
float price = src
float filtdev = filter * ta.stdev(src, len)
price := math.abs(price - nz(price[1])) < filtdev ? nz(price[1]) : price
price
smthtype = input.string("Kaufman", "Heiken-Ashi Better Smoothing", options = ["AMA", "T3", "Kaufman"], group= "Source Settings")
srcoption = input.string("Close", "Source", group= "Source Settings",
options =
["Close", "Open", "High", "Low", "Median", "Typical", "Weighted", "Average", "Average Median Body", "Trend Biased", "Trend Biased (Extreme)",
"HA Close", "HA Open", "HA High", "HA Low", "HA Median", "HA Typical", "HA Weighted", "HA Average", "HA Average Median Body", "HA Trend Biased", "HA Trend Biased (Extreme)",
"HAB Close", "HAB Open", "HAB High", "HAB Low", "HAB Median", "HAB Typical", "HAB Weighted", "HAB Average", "HAB Average Median Body", "HAB Trend Biased", "HAB Trend Biased (Extreme)"])
period = input.int(25,'Period', group = "Basic Settings")
order = input.int(5,'Order', group = "Basic Settings", minval = 1)
filterop = input.string("Gaussian Filter", "Filter Options", options = ["Price", "Gaussian Filter", "Both", "None"], group= "Filter Settings")
filter = input.float(1, "Filter Devaitions", minval = 0, group= "Filter Settings")
filterperiod = input.int(10, "Filter Period", minval = 0, group= "Filter Settings")
colorbars = input.bool(true, "Color bars?", group = "UI Options")
showSigs = input.bool(true, "Show signals?", group= "UI Options")
kfl=input.float(0.666, title="* Kaufman's Adaptive MA (KAMA) Only - Fast End", group = "Moving Average Inputs")
ksl=input.float(0.0645, title="* Kaufman's Adaptive MA (KAMA) Only - Slow End", group = "Moving Average Inputs")
amafl = input.int(2, title="* Adaptive Moving Average (AMA) Only - Fast", group = "Moving Average Inputs")
amasl = input.int(30, title="* Adaptive Moving Average (AMA) Only - Slow", group = "Moving Average Inputs")
[haclose, haopen, hahigh, halow, hamedian, hatypical, haweighted, haaverage] = request.security(ticker.heikinashi(syminfo.tickerid), timeframe.period, [close, open, high, low, hl2, hlc3, hlcc4, ohlc4])
float src = switch srcoption
"Close" => loxxexpandedsourcetypes.rclose()
"Open" => loxxexpandedsourcetypes.ropen()
"High" => loxxexpandedsourcetypes.rhigh()
"Low" => loxxexpandedsourcetypes.rlow()
"Median" => loxxexpandedsourcetypes.rmedian()
"Typical" => loxxexpandedsourcetypes.rtypical()
"Weighted" => loxxexpandedsourcetypes.rweighted()
"Average" => loxxexpandedsourcetypes.raverage()
"Average Median Body" => loxxexpandedsourcetypes.ravemedbody()
"Trend Biased" => loxxexpandedsourcetypes.rtrendb()
"Trend Biased (Extreme)" => loxxexpandedsourcetypes.rtrendbext()
"HA Close" => loxxexpandedsourcetypes.haclose(haclose)
"HA Open" => loxxexpandedsourcetypes.haopen(haopen)
"HA High" => loxxexpandedsourcetypes.hahigh(hahigh)
"HA Low" => loxxexpandedsourcetypes.halow(halow)
"HA Median" => loxxexpandedsourcetypes.hamedian(hamedian)
"HA Typical" => loxxexpandedsourcetypes.hatypical(hatypical)
"HA Weighted" => loxxexpandedsourcetypes.haweighted(haweighted)
"HA Average" => loxxexpandedsourcetypes.haaverage(haaverage)
"HA Average Median Body" => loxxexpandedsourcetypes.haavemedbody(haclose, haopen)
"HA Trend Biased" => loxxexpandedsourcetypes.hatrendb(haclose, haopen, hahigh, halow)
"HA Trend Biased (Extreme)" => loxxexpandedsourcetypes.hatrendbext(haclose, haopen, hahigh, halow)
"HAB Close" => loxxexpandedsourcetypes.habclose(smthtype, amafl, amasl, kfl, ksl)
"HAB Open" => loxxexpandedsourcetypes.habopen(smthtype, amafl, amasl, kfl, ksl)
"HAB High" => loxxexpandedsourcetypes.habhigh(smthtype, amafl, amasl, kfl, ksl)
"HAB Low" => loxxexpandedsourcetypes.hablow(smthtype, amafl, amasl, kfl, ksl)
"HAB Median" => loxxexpandedsourcetypes.habmedian(smthtype, amafl, amasl, kfl, ksl)
"HAB Typical" => loxxexpandedsourcetypes.habtypical(smthtype, amafl, amasl, kfl, ksl)
"HAB Weighted" => loxxexpandedsourcetypes.habweighted(smthtype, amafl, amasl, kfl, ksl)
"HAB Average" => loxxexpandedsourcetypes.habaverage(smthtype, amafl, amasl, kfl, ksl)
"HAB Average Median Body" => loxxexpandedsourcetypes.habavemedbody(smthtype, amafl, amasl, kfl, ksl)
"HAB Trend Biased" => loxxexpandedsourcetypes.habtrendb(smthtype, amafl, amasl, kfl, ksl)
"HAB Trend Biased (Extreme)" => loxxexpandedsourcetypes.habtrendbext(smthtype, amafl, amasl, kfl, ksl)
=> haclose
src := filterop == "Both" or filterop == "Price" and filter > 0 ? _filt(src, filterperiod, filter) : src
out = _npolegf(src, period, order)
out := filterop == "Both" or filterop == "Gaussian Filter" and filter > 0 ? _filt(out, filterperiod, filter) : out
sig = nz(out[1])
state = 0
if (out > sig)
state := 1
if (out < sig)
state := -1
pregoLong = out > sig and (nz(out[1]) < nz(sig[1]) or nz(out[1]) == nz(sig[1]))
pregoShort = out < sig and (nz(out[1]) > nz(sig[1]) or nz(out[1]) == nz(sig[1]))
contsw = 0
contsw := nz(contsw[1])
contsw := pregoLong ? 1 : pregoShort ? -1 : nz(contsw[1])
goLong = pregoLong and nz(contsw[1]) == -1
goShort = pregoShort and nz(contsw[1]) == 1
var color colorout = na
colorout := state == -1 ? redcolor : state == 1 ? greencolor : nz(colorout[1])
plot(out, "N-Pole GF", color = colorout, linewidth = 3)
barcolor(colorbars ? colorout : na)
plotshape(showSigs and goLong, title = "Long", color = color.yellow, textcolor = color.yellow, text = "L", style = shape.triangleup, location = location.belowbar, size = size.tiny)
plotshape(showSigs and goShort, title = "Short", color = color.fuchsia, textcolor = color.fuchsia, text = "S", style = shape.triangledown, location = location.abovebar, size = size.tiny)
alertcondition(goLong, title = "Long", message = "STD-Filtered, N-Pole Gaussian Filter [Loxx]: Long\nSymbol: {{ticker}}\nPrice: {{close}}")
alertcondition(goShort, title = "Short", message = "STD-Filtered, N-Pole Gaussian Filter [Loxx]: Short\nSymbol: {{ticker}}\nPrice: {{close}}")
and this one
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © DonovanWall
//██████╗ ██╗ ██╗
//██╔══██╗██║ ██║
//██║ ██║██║ █╗ ██║
//██║ ██║██║███╗██║
//██████╔╝╚███╔███╔╝
//╚═════╝ ╚══╝╚══╝
//@version=4
study(title="Gaussian Channel [DW]", shorttitle="GC [DW]", overlay=true)
// This study is an experiment utilizing the Ehlers Gaussian Filter technique combined with lag reduction techniques and true range to analyze trend activity.
// Gaussian filters, as Ehlers explains it, are simply exponential moving averages applied multiple times.
// First, beta and alpha are calculated based on the sampling period and number of poles specified. The maximum number of poles available in this script is 9.
// Next, the data being analyzed is given a truncation option for reduced lag, which can be enabled with "Reduced Lag Mode".
// Then the alpha and source values are used to calculate the filter and filtered true range of the dataset.
// Filtered true range with a specified multiplier is then added to and subtracted from the filter, generating a channel.
// Lastly, a one pole filter with a N pole alpha is averaged with the filter to generate a faster filter, which can be enabled with "Fast Response Mode".
//Custom bar colors are included.
//Note: Both the sampling period and number of poles directly affect how much lag the indicator has, and how smooth the output is.
// Larger inputs will result in smoother outputs with increased lag, and smaller inputs will have noisier outputs with reduced lag.
// For the best results, I recommend not setting the sampling period any lower than the number of poles + 1. Going lower truncates the equation.
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Updates:
// Huge shoutout to @e2e4mfck for taking the time to improve the calculation method!
// -> migrated to v4
// -> pi is now calculated using trig identities rather than being explicitly defined.
// -> The filter calculations are now organized into functions rather than being individually defined.
// -> Revamped color scheme.
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Functions - courtesy of @e2e4mfck
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Filter function
f_filt9x (_a, _s, _i) =>
int _m2 = 0, int _m3 = 0, int _m4 = 0, int _m5 = 0, int _m6 = 0,
int _m7 = 0, int _m8 = 0, int _m9 = 0, float _f = .0, _x = (1 - _a)
// Weights.
// Initial weight _m1 is a pole number and equal to _i
_m2 := _i == 9 ? 36 : _i == 8 ? 28 : _i == 7 ? 21 : _i == 6 ? 15 : _i == 5 ? 10 : _i == 4 ? 6 : _i == 3 ? 3 : _i == 2 ? 1 : 0
_m3 := _i == 9 ? 84 : _i == 8 ? 56 : _i == 7 ? 35 : _i == 6 ? 20 : _i == 5 ? 10 : _i == 4 ? 4 : _i == 3 ? 1 : 0
_m4 := _i == 9 ? 126 : _i == 8 ? 70 : _i == 7 ? 35 : _i == 6 ? 15 : _i == 5 ? 5 : _i == 4 ? 1 : 0
_m5 := _i == 9 ? 126 : _i == 8 ? 56 : _i == 7 ? 21 : _i == 6 ? 6 : _i == 5 ? 1 : 0
_m6 := _i == 9 ? 84 : _i == 8 ? 28 : _i == 7 ? 7 : _i == 6 ? 1 : 0
_m7 := _i == 9 ? 36 : _i == 8 ? 8 : _i == 7 ? 1 : 0
_m8 := _i == 9 ? 9 : _i == 8 ? 1 : 0
_m9 := _i == 9 ? 1 : 0
// filter
_f := pow(_a, _i) * nz(_s) +
_i * _x * nz(_f[1]) - (_i >= 2 ?
_m2 * pow(_x, 2) * nz(_f[2]) : 0) + (_i >= 3 ?
_m3 * pow(_x, 3) * nz(_f[3]) : 0) - (_i >= 4 ?
_m4 * pow(_x, 4) * nz(_f[4]) : 0) + (_i >= 5 ?
_m5 * pow(_x, 5) * nz(_f[5]) : 0) - (_i >= 6 ?
_m6 * pow(_x, 6) * nz(_f[6]) : 0) + (_i >= 7 ?
_m7 * pow(_x, 7) * nz(_f[7]) : 0) - (_i >= 8 ?
_m8 * pow(_x, 8) * nz(_f[8]) : 0) + (_i == 9 ?
_m9 * pow(_x, 9) * nz(_f[9]) : 0)
//9 var declaration fun
f_pole (_a, _s, _i) =>
_f1 = f_filt9x(_a, _s, 1), _f2 = (_i >= 2 ? f_filt9x(_a, _s, 2) : 0), _f3 = (_i >= 3 ? f_filt9x(_a, _s, 3) : 0)
_f4 = (_i >= 4 ? f_filt9x(_a, _s, 4) : 0), _f5 = (_i >= 5 ? f_filt9x(_a, _s, 5) : 0), _f6 = (_i >= 6 ? f_filt9x(_a, _s, 6) : 0)
_f7 = (_i >= 7 ? f_filt9x(_a, _s, 7) : 0), _f8 = (_i >= 8 ? f_filt9x(_a, _s, 8) : 0), _f9 = (_i == 9 ? f_filt9x(_a, _s, 9) : 0)
_fn = _i == 1 ? _f1 : _i == 2 ? _f2 : _i == 3 ? _f3 :
_i == 4 ? _f4 : _i == 5 ? _f5 : _i == 6 ? _f6 :
_i == 7 ? _f7 : _i == 8 ? _f8 : _i == 9 ? _f9 : na
[_fn, _f1]
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Inputs
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Source
src = input(defval=hlc3, title="Source")
//Poles
int N = input(defval=4, title="Poles", minval=1, maxval=9)
//Period
int per = input(defval=144, title="Sampling Period", minval=2)
//True Range Multiplier
float mult = input(defval=1.414, title="Filtered True Range Multiplier", minval=0)
//Lag Reduction
bool modeLag = input(defval=false, title="Reduced Lag Mode")
bool modeFast = input(defval=false, title="Fast Response Mode")
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Definitions
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Beta and Alpha Components
beta = (1 - cos(4*asin(1)/per)) / (pow(1.414, 2/N) - 1)
alpha = - beta + sqrt(pow(beta, 2) + 2*beta)
//Lag
lag = (per - 1)/(2*N)
//Data
srcdata = modeLag ? src + (src - src[lag]) : src
trdata = modeLag ? tr(true) + (tr(true) - tr(true)[lag]) : tr(true)
//Filtered Values
[filtn, filt1] = f_pole(alpha, srcdata, N)
[filtntr, filt1tr] = f_pole(alpha, trdata, N)
//Lag Reduction
filt = modeFast ? (filtn + filt1)/2 : filtn
filttr = modeFast ? (filtntr + filt1tr)/2 : filtntr
//Bands
hband = filt + filttr*mult
lband = filt - filttr*mult
// Colors
color1 = #0aff68
color2 = #00752d
color3 = #ff0a5a
color4 = #990032
fcolor = filt > filt[1] ? #0aff68 : filt < filt[1] ? #ff0a5a : #cccccc
barcolor = (src > src[1]) and (src > filt) and (src < hband) ? #0aff68 : (src > src[1]) and (src >= hband) ? #0aff1b : (src <= src[1]) and (src > filt) ? #00752d :
(src < src[1]) and (src < filt) and (src > lband) ? #ff0a5a : (src < src[1]) and (src <= lband) ? #ff0a11 : (src >= src[1]) and (src < filt) ? #990032 : #cccccc
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Outputs
//-----------------------------------------------------------------------------------------------------------------------------------------------------------------
//Filter Plot
filtplot = plot(filt, title="Filter", color=fcolor, linewidth=3)
//Band Plots
hbandplot = plot(hband, title="Filtered True Range High Band", color=fcolor)
lbandplot = plot(lband, title="Filtered True Range Low Band", color=fcolor)
//Channel Fill
fill(hbandplot, lbandplot, title="Channel Fill", color=fcolor, transp=80)
//Bar Color
barcolor(barcolor)
|
1713c4b7b7f0d39b7cd3ec289b018a84
|
{
"intermediate": 0.3587270677089691,
"beginner": 0.4230061173439026,
"expert": 0.2182668149471283
}
|
48,216
|
what is the error in MessageEvent() {console.log('1')}
VM2825:1 Uncaught SyntaxError: Unexpected token '{'
|
4d2fd03384acd89abd09ff307c023df5
|
{
"intermediate": 0.22533400356769562,
"beginner": 0.6162645816802979,
"expert": 0.1584015041589737
}
|
48,217
|
improve my preprocessing function; there are a lot of meaningless words like "aaahg" or "zzzzzz" in my text:
stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()
def preprocess_text(text):
# Remove HTML tags
text = re.sub(r'<[^>]*>', '', text)
# Remove special characters and punctuation
text = re.sub(r'[^a-zA-Z\s]', '', text)
# Convert to lowercase
text = text.lower()
# Tokenization and lemmatization
tokens = [lemmatizer.lemmatize(word) for word in text.split() if word not in stop_words]
return ' '.join(tokens)
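One cheap improvement: before lemmatizing, drop tokens that look like keyboard noise ("aaahg", "zzzzzz"). Almost all of them contain the same character three or more times in a row, which nearly no English word does. A minimal sketch using only the standard library (the `min_len` cutoff is an assumed default to tune, not part of the original function):

```python
import re

# Matches any character repeated 3+ times in a row ("aaa", "zzz", ...).
REPEAT_RE = re.compile(r'(.)\1{2,}')

def drop_gibberish(tokens, min_len=2):
    """Filter out tokens that look like keyboard noise:
    very short leftovers and runs of a repeated character."""
    return [t for t in tokens
            if len(t) >= min_len and not REPEAT_RE.search(t)]
```

Inside `preprocess_text`, this could run as `tokens = drop_gibberish(text.split())` before the stopword/lemmatizer step.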
|
12b0f94da7f771ba17785cde26c74c15
|
{
"intermediate": 0.4022507965564728,
"beginner": 0.19364182651042938,
"expert": 0.4041074216365814
}
|
48,218
|
add binary crossentropy to gather the metrics, from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Adjusting the function to accept pre-split data
def compare_classifiers(X_train, X_test, y_train, y_test):
# List of classifiers to compare
classifiers = [
('Logistic Regression', LogisticRegression(random_state=42)),
('Naive Bayes', MultinomialNB()),
('Random Forest', RandomForestClassifier(n_estimators=100, random_state=42)),
('KNN', KNeighborsClassifier(n_neighbors=5))
]
# Iterate over classifiers, train, predict, and display metrics
for name, clf in classifiers:
print(f'----- {name} -----')
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
# Classification Report
print('Classification Report:')
print(classification_report(y_test, y_pred))
# Confusion Matrix
print('Confusion Matrix:')
cm = confusion_matrix(y_test, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm,
display_labels=clf.classes_)
disp.plot()
plt.show()
print('\n')
# The function now expects four parameters:
# X_train: The training data (text)
# X_test: The test data (text)
# y_train: The training labels
# y_test: The test labels
# Example usage (assuming pre-split data):
# compare_classifiers(X_train, X_test, y_train, y_test)
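To add binary cross-entropy to the metrics above, the sklearn route is `log_loss` fed with `clf.predict_proba(X_test)` inside the loop — it needs probabilities, not `predict` labels. For intuition, the binary case is just the mean negative log-likelihood of the true labels; a hand-rolled standard-library sketch (the clipping constant `eps` is an assumed guard against `log(0)`):

```python
import math

def binary_cross_entropy(y_true, y_prob, eps=1e-15):
    """Mean negative log-likelihood of true binary labels under
    predicted positive-class probabilities (sklearn's log_loss,
    binary case)."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

In `compare_classifiers`, the sklearn equivalent would be roughly `print('Binary cross-entropy:', log_loss(y_test, clf.predict_proba(X_test)))` after the confusion matrix.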
|
b3201bcfc1448055eb1f8df69097202b
|
{
"intermediate": 0.4177914261817932,
"beginner": 0.3786293566226959,
"expert": 0.20357926189899445
}
|
48,219
|
fix and upgrade my code please, it uses bun and playwright for browser automation
index.ts
"import { ServerRequest, ServerResponse } from "bun";
import { visitDurationInMinutes, createBrowserInstances } from "./playwright";
// Create a server that listens for requests to start new browser instances
const server = Bun.serve({
port: 3001,
fetch(request: ServerRequest, response: ServerResponse) {
if (request.method === "GET" && request.url === "/") {
response.type = "text/html";
response.body = `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Control Panel</title>
</head>
<body>
<h1>Control Panel</h1>
<p>Page URL: ${process.env.PAGE_URL}</p>
<p>Referer URL: ${process.env.REFERER_URLS}</p>
<p>Visit duration: ${visitDurationInMinutes} minutes</p>
<form method="POST" action="/start">
<label for="browser-type">Browser type:</label>
<select name="browser-type" id="browser-type">
<option value="chromium">Chromium</option>
<option value="firefox">Firefox</option>
<option value="webkit">WebKit</option>
</select>
<label for="user-agent-type">User agent type:</label>
<select name="user-agent-type" id="user-agent-type">
<option value="desktop">Desktop</option>
<option value="mobile">Mobile</option>
<option value="emulated">Emulated Device</option>
</select>
<button type="submit">Start browser instance</button>
</form>
<form method="POST" action="/update">
<label for="page-url">Page URL:</label>
<input type="text" id="page-url" name="page-url">
<label for="referer-url">Referer URL:</label>
<input type="text" id="referer-url" name="referer-url">
<label for="visit-duration">Visit duration (minutes):</label>
<input type="number" id="visit-duration" name="visit-duration">
<button type="submit">Update control panel</button>
</form>
</body>
</html>
`;
return response.end();
} else if (request.method === "POST" && request.url === "/start") {
const formData = await request.formData();
const browserType = formData.get("browser-type");
const userAgentType = formData.get("user-agent-type");
if (!browserType || !userAgentType) {
response.status = 400;
response.body = "Missing browser type or user agent type.";
return response.end();
}
try {
createBrowserInstances(browserType, userAgentType);
response.status = 200;
response.body = "Browser instances started successfully.";
console.log(`Started instances for ${browserType} with ${userAgentType} user agent.`);
} catch (error) {
response.status = 500;
response.body = "Error starting browser instances.";
console.error(error);
}
return response.end();
} else if (request.method === "POST" && request.url === "/update") {
const formData = await request.formData();
const pageUrl = formData.get("page-url");
const refererUrls = formData.get("referer-url");
const visitDurationInMinutes = formData.get("visit-duration");
if (!pageUrl || !refererUrls || !visitDurationInMinutes) {
response.status = 400;
response.type = "text/plain";
response.body = "Invalid request parameters";
return response.end();
}
// Update control panel variables
process.env.PAGE_URL = pageUrl;
process.env.REFERER_URLS = refererUrls;
process.env.VISIT_DURATION_IN_MINUTES = visitDurationInMinutes;
response.status = 200;
response.type = "text/plain";
response.body = "Control panel updated successfully";
return response.end();
} else {
response.status = 405;
response.body = "Method not allowed";
return response.end();
}
},
});
console.log(`Control panel listening on port 3001`);"
playwright.ts
"import { ServerRequest } from "bun";
import { chromium, firefox, webkit, BrowserContext, Page, Cookie, StorageState } from "playwright";
import StealthPlugin from "puppeteer-extra-plugin-stealth";
import UserAgent from "user-agents";
import * as fs from "fs";
import dotenv from "dotenv";
import { add } from "date-fns";
import { cpus } from "os";
import { v4 as uuidv4 } from "uuid";
// Load environment variables from .env file
dotenv.config();
// Page URL, Referer URL variables, and visit duration
export const pageUrl = process.env.PAGE_URL || "https://zerox.digital/?utm_source=facebook&utm_medium=referral&utm_campaign=zerox";
export const refererUrls = process.env.REFERER_URLS ? process.env.REFERER_URLS.split(",").map((url) => url.trim()) : ["https://facebook.com/"];
export const browsers = process.env.BROWSERS ? process.env.BROWSERS.split(",").map((browser) => browser.trim().toLowerCase()) : ["chromium", "firefox", "webkit"];
export const userAgents = process.env.USER_AGENTS ? process.env.USER_AGENTS.split(",").map((userAgent) => userAgent.trim().toLowerCase()) : ["desktop", "mobile", "emulated"];
export const instancesPerBatch = parseInt(process.env.INSTANCES_PER_BATCH) || 25;
export const totalInstances = parseInt(process.env.TOTAL_INSTANCES) || 50;
export const cacheStorageFolder = process.env.CACHE_STORAGE_FOLDER || "cache";
export const visitDurationInMinutes = parseInt(process.env.VISIT_DURATION_IN_MINUTES) || 60;
// Create a new instance of Stealth Plugin
const stealthPlugin = StealthPlugin();
// Use the Stealth Plugin with Chromium, Firefox, and WebKit browsers
const stealthOptions = { plugins: [stealthPlugin] };
const geoLocation = { latitude: 44.7866, longitude: 20.4489 }; // Serbia
// Generate a random user agent for the specified browser type and user agent type
function generateUserAgent(browserType: string, userAgentType: string): string {
const deviceCategory = (userAgentType === "mobile") ? "mobile" : "desktop";
const userAgent = new UserAgent({ deviceCategory });
return userAgent.toString();
}
// Simulate scrolling
async function simulateScrolling(page: Page): Promise<void> {
await page.evaluate(() => {
const maxScroll = document.body.scrollHeight - document.body.clientHeight;
window.scrollTo(0, Math.floor(Math.random() * maxScroll));
});
}
// Simulate clicking on elements
async function simulateClickingOnElements(page: Page): Promise<void> {
const elements = await page.$$eval("*", (nodes) =>
nodes
.filter((node) => node.offsetWidth > 0 && node.offsetHeight > 0)
.map((node) => ({
x: node.offsetLeft,
y: node.offsetTop,
width: node.offsetWidth,
height: node.offsetHeight,
})),
);
if (elements.length > 0) {
const { x, y } = elements[Math.floor(Math.random() * elements.length)];
await page.mouse.click(x, y);
}
}
// Simulate text input
async function simulateTextInput(page: Page): Promise<void> {
const inputElements = await page.$$eval("input", (nodes) =>
nodes.filter((node) => node.offsetWidth > 0 && node.offsetHeight > 0),
);
if (inputElements.length > 0) {
const randomIndex = Math.floor(Math.random() * inputElements.length);
const inputElement = inputElements[randomIndex];
const randomText = Math.random().toString(36).substring(2, 8);
await inputElement.type(randomText);
}
}
// Simulate realistic interactions
async function simulateRealisticInteractions(page: Page): Promise<void> {
await simulateScrolling(page);
await simulateClickingOnElements(page);
await simulateTextInput(page);
// Add more interaction simulation logic here
}
// Check for browser type, user agent, and geolocation in request headers
function checkRequestHeaders(request: ServerRequest): void {
const headers = request.headers;
if (headers["user-agent"] !== generateUserAgent(request.browser.name, request.browser.userAgentType)) {
throw new Error(`Incorrect user agent: ${headers["user-agent"]}`);
}
if (!refererUrls.includes(headers["referer"])) {
throw new Error(`Incorrect referer: ${headers["referer"]}`);
}
if (headers["x-forwarded-proto"] !== "https" ||
headers["x-forwarded-host"] !== "zerox.rs" ||
!headers["x-forwarded-for"]) {
throw new Error(`Incorrect request headers: ${JSON.stringify(headers)}`);
}
}
// Persist session data
async function persistSessionData(context: BrowserContext, uuid: string): Promise<void> {
const cookies = await context.cookies();
const storageState = await context.storageState();
const sessionData = { cookies, storageState };
fs.writeFileSync(`session-${uuid}.json`, JSON.stringify(sessionData));
}
// Load session data
async function loadSessionData(uuid: string): Promise<{ cookies: Cookie[]; storageState: StorageState }> {
try {
const sessionData = JSON.parse(fs.readFileSync(`session-${uuid}.json`));
const cookies = sessionData.cookies;
const storageState = sessionData.storageState;
return { cookies, storageState };
} catch (error) {
return {};
}
}
// Retry mechanism for page.goto and page.waitForTimeout
async function retryPageAction(page: Page, action: (page: Page) => Promise<void>, actionName: string, retryCount = 0, maxRetries = 3): Promise<void> {
try {
await action(page);
} catch (error) {
if (retryCount < maxRetries) {
console.error(`Error in ${actionName}: ${error.message}`);
await retryPageAction(page, action, actionName, retryCount + 1, maxRetries);
} else {
throw error;
}
}
}
// Create a new browser instance
async function createBrowserInstance(browserType: string, userAgentType: string, uuid: string, cookies: Cookie[] = [], storageState: StorageState = {}): Promise<void> {
try {
let browser;
let browserName;
if (browserType === "chromium") {
browser = chromium;
browserName = "Chromium";
} else if (browserType === "firefox") {
browser = firefox;
browserName = "Firefox";
} else if (browserType === "webkit") {
browser = webkit;
browserName = "WebKit";
} else {
throw new Error(`Invalid browser type: ${browserType}`);
}
const userAgent = generateUserAgent(browserType, userAgentType);
const context = await browser.launchPersistentContext(null, {
headless: true,
...stealthOptions,
permissions: ["geolocation"],
javaScriptEnabled: true,
ignoreHTTPSErrors: true,
cacheStorage: true,
cacheStorageFolder,
storageState,
});
const page = await context.newPage();
// Set headers to mimic a user from Serbia
await page.setExtraHTTPHeaders({
"User-Agent": userAgent,
Referer: refererUrls[Math.floor(Math.random() * refererUrls.length)],
"X-Forwarded-For": "93.87.191.13", // Serbia IP address
"X-Forwarded-Proto": "https",
"X-Forwarded-Host": "zerox.rs",
});
// Set geolocation to Serbia
await context.setGeolocation(geoLocation);
// Intercept requests and handle errors
await page.route("**/*", (route) => {
try {
checkRequestHeaders(route.request());
route.continue();
} catch (error) {
console.error(`Error in page request: ${error.message}`);
route.abort();
}
});
// Simulate page navigation and interactions to keep the browser alive
const startTime = new Date();
const endTime = add(startTime, { minutes: visitDurationInMinutes });
while (new Date() < endTime) {
try {
await retryPageAction(page, () => page.goto(pageUrl, { waitUntil: "networkidle", timeout: 60000 }), "page.goto"); // Playwright uses "networkidle", not Puppeteer's "networkidle0"
await simulateRealisticInteractions(page);
await page.waitForTimeout(Math.floor(Math.random() * 30000) + 30000); // Wait 30-60 seconds
await persistSessionData(context, uuid);
} catch (error) {
console.error(`Error in page navigation: ${error.message}`);
await page.waitForTimeout(30000); // Wait 30 seconds before retrying
}
}
await context.close();
console.log(`[${browserName}] Browser instance completed for ${browserType} with ${userAgentType} user agent.`);
} catch (error) {
console.error(
`Error in createBrowserInstance for ${browserType} and user agent type ${userAgentType}: ${error.message}`,
);
} finally {
// Ensure resource cleanup even in case of errors
}
}
// Concurrent execution using Event Loop I/O
function createBrowserInstances(browserType: string, userAgentType: string) {
const browserQueue = [...browsers].slice(); // Copy browsers array
let instancesRemaining = totalInstances;
const batchSize = Math.min(instancesPerBatch, instancesRemaining);
while (instancesRemaining > 0) {
const batchPromises = [];
for (let i = 0; i < batchSize; i++) {
if (instancesRemaining <= 0) {
break;
}
const currentBrowserType = browserQueue.shift();
const uuid = uuidv4();
// loadSessionData is async, so resolve the session before launching the instance
batchPromises.push(loadSessionData(uuid).then(({ cookies, storageState }) =>
createBrowserInstance(currentBrowserType || browserType, userAgentType, uuid, cookies, storageState)));
browserQueue.push(currentBrowserType); // Move the processed browser to the end of the queue
instancesRemaining--;
}
Promise.all(batchPromises).then(() => {
if (instancesRemaining > 0) {
createBrowserInstances(browserType, userAgentType);
}
});
}
}
export { createBrowserInstances, visitDurationInMinutes };"
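The `retryPageAction` wrapper above (recursive, bounded by `maxRetries`) is a language-agnostic pattern; a minimal Python sketch of the same idea, with nothing assumed about Playwright (the `flaky` action below is a stand-in, not from the original code):

```python
import time

def retry_action(action, action_name, max_retries=3, delay=0.0):
    """Run action(); on failure, retry up to max_retries times before re-raising."""
    for attempt in range(max_retries + 1):
        try:
            return action()
        except Exception as exc:
            if attempt == max_retries:
                raise
            print(f"Error in {action_name}: {exc}")
            time.sleep(delay)

# Example: an action that fails twice, then succeeds on the third call
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry_action(flaky, "flaky", max_retries=3)
```

The TypeScript version recurses instead of looping; a loop avoids growing the call stack when retries are frequent.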
|
de1621f94896a625b6b7515c57b9bb83
|
{
"intermediate": 0.5667238831520081,
"beginner": 0.3039938509464264,
"expert": 0.12928231060504913
}
|
48,220
|
hi
|
dfcfe89e24435309187fa4ffe7d172cb
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
48,221
|
I need to modify the code below, since I want to open the file (permanent6.accdr) with Microsoft Access 2010 specifically, not another Access install, because several Access applications are installed on the system.
#If Win64 Then
Private Declare PtrSafe Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr
Private Declare PtrSafe Function SetForegroundWindow Lib "user32" (ByVal hwnd As LongPtr) As Long
Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As LongPtr)
#Else
Private Declare Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As Long
Private Declare Function SetForegroundWindow Lib "user32" (ByVal hwnd As Long) As Long
Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If
Sub OpenAccessAndInputCode()
Dim strCmd As String, st_path_OtherDB As String
Dim hwnd As LongPtr
Dim Start As Double, Elapsed As Double
' Path of the database you want to open
st_path_OtherDB = Application.CurrentProject.Path & "\permanent6.accdr"
' Build the command to open Access with the specified database
strCmd = """" & SysCmd(acSysCmdAccessDir) & "MSACCESS.exe"" /runtime """ & st_path_OtherDB & """"
Shell strCmd, vbMaximizedFocus
Start = Timer
Do
' Replace "Password Required" with the actual title of the dialog window you expect
hwnd = FindWindow(vbNullString, "Password Required")
If hwnd <> 0 Then
' If the window is found, bring it to the foreground
SetForegroundWindow hwnd
' Short delay before sending keys, to make sure the window is ready
Sleep 500
' Type the code and press Enter
SendKeys " Bw2H9yem0tr9P4JrPmfxQ670f3s5a6qM©®§k™®dj3H9Z cl8b1 xG ", True
Sleep 100
SendKeys "{ENTER}", True
Exit Do
End If
Sleep 1000 ' Check once per second
' Failure timeout: stop after 60 seconds to avoid an infinite loop
Elapsed = Timer - Start
If Elapsed > 60 Then Exit Do
Loop
End Sub
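The `Do` loop above is a poll-until-timeout: check for the window once per second, give up after 60 seconds. The same pattern, sketched in Python with a stubbed window finder (the real `FindWindow` call is Windows-specific and only assumed here):

```python
import time

def poll_until(find, timeout_s=60.0, interval_s=1.0, clock=time.monotonic, sleep=time.sleep):
    """Call find() every interval_s seconds until it returns a truthy handle
    or timeout_s elapses; return the handle, or None on timeout."""
    start = clock()
    while clock() - start <= timeout_s:
        handle = find()
        if handle:
            return handle
        sleep(interval_s)
    return None

# Stub: the "window" appears on the third poll
polls = {"n": 0}
def fake_find_window():
    polls["n"] += 1
    return 1234 if polls["n"] >= 3 else 0

hwnd = poll_until(fake_find_window, timeout_s=60.0, interval_s=0.0, sleep=lambda s: None)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is the main weakness of the VBA version.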
|
7deb98c0d16c650337d32771c8f88683
|
{
"intermediate": 0.3593409061431885,
"beginner": 0.4539048969745636,
"expert": 0.18675415217876434
}
|
48,222
|
Write a FastAPI web service
that accepts JSON input of the following form:
[
{
"lesson_id": "332108",
"role": 0,
"text": "здравствуйте извините за опоздание",
"date": "2024-03-31 10:06:33",
"categories": [
{
"name": "Все отлично",
"score": 0.032388877123594284
},
{
"name": "Технические неполадки",
"score": 0.004607675597071648
},
{
"name": "Сложности в понимании",
"score": 0.0044245049357414246
},
{
"name": "Ругательство",
"score": 0.0043534464202821255
}
]
}
]
From it, the service should pick the categories with the highest score, and apply the following function to them:
def checking_importance_words(texts, labels):
    unique_label = set(labels)
    result = []
    for lb in unique_label:
        new_label = [1 if item == lb else 0 for item in labels]
        pipeline = Pipeline([('tfidf', TfidfVectorizer()), ('clf', LogisticRegression())])
        pipeline.fit(texts, new_label)
        # Pull the fitted vectorizer and classifier back out of the pipeline
        tfidf = pipeline.named_steps['tfidf']
        feature_names = tfidf.get_feature_names_out()
        coefs = pipeline.named_steps['clf'].coef_[0]
        for text in texts:
            text_tfidf = tfidf.transform([text])
            words_weights = text_tfidf.toarray().flatten()
            important_words_weights = {}
            for word, weight in zip(feature_names, words_weights):
                if weight > 0:
                    importance = weight * coefs[tfidf.vocabulary_.get(word)]
                    important_words_weights[word] = importance
            sorted_important_words = sorted(important_words_weights.items(), key=lambda x: x[1], reverse=True)
            result.append({
                "text": text,
                "label": lb,
                "important_words": sorted_important_words
            })
    return result
The output should be the following JSON, containing the three words with the highest importance:
[
{
"labelName": "Ругательство",
"keyWords": [
"ппц", "жопа", "очко"
]
}
]
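Setting the FastAPI and sklearn parts aside, the core transformation the request describes is: for each message pick the top-scoring category, run the importance function over (texts, labels), and keep the three most important words per label. A sketch of just that step, with a stub standing in for the sklearn-based `checking_importance_words` (the stub and its word weights are illustrative only):

```python
def top_keywords(messages, important_words_for):
    """Pick each message's highest-scoring category as its label, then report
    the three most important words per label via important_words_for(texts, labels)."""
    texts = [m["text"] for m in messages]
    labels = [max(m["categories"], key=lambda c: c["score"])["name"] for m in messages]
    out = {}
    for item in important_words_for(texts, labels):
        words = [w for w, _ in item["important_words"][:3]]
        out.setdefault(item["label"], words)  # keep the first ranking seen per label
    return [{"labelName": lb, "keyWords": words} for lb, words in out.items()]

# Stub in place of checking_importance_words (the real one is sklearn-based)
def fake_importance(texts, labels):
    return [{"text": t, "label": lb,
             "important_words": [("ппц", 3.0), ("жопа", 2.0), ("очко", 1.0), ("и", 0.1)]}
            for t, lb in zip(texts, labels)]

sample = [{"lesson_id": "332108", "role": 0, "text": "здравствуйте",
           "date": "2024-03-31 10:06:33",
           "categories": [{"name": "Все отлично", "score": 0.03},
                          {"name": "Ругательство", "score": 0.9}]}]
result = top_keywords(sample, fake_importance)
```

In the actual service this function would sit inside a FastAPI POST handler, with `checking_importance_words` passed in place of the stub.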
|
88ccb2d66f08d627e0008656115db301
|
{
"intermediate": 0.29758748412132263,
"beginner": 0.4681079685688019,
"expert": 0.23430456221103668
}
|
48,223
|
use holdout with 5 random seeds:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Adjusting the function to accept pre-split data
def compare_classifiers(X_train, X_test, y_train, y_test):
# List of classifiers to compare
classifiers = [
('Logistic Regression', LogisticRegression(random_state=42)),
('Naive Bayes', MultinomialNB()),
('Random Forest', RandomForestClassifier(n_estimators=100, random_state=42)),
('KNN', KNeighborsClassifier(n_neighbors=5))
]
# Iterate over classifiers, train, predict, and display metrics
for name, clf in classifiers:
print(f'----- {name} -----')
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
# Classification Report
print('Classification Report:')
print(classification_report(y_test, y_pred))
# Confusion Matrix
print('Confusion Matrix:')
cm = confusion_matrix(y_test, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm,
display_labels=clf.classes_)
disp.plot()
plt.show()
print('\n')
# The function now expects four parameters:
# X_train: The training data (text)
# X_test: The test data (text)
# y_train: The training labels
# y_test: The test labels
# Example usage (assuming pre-split data):
# compare_classifiers(X_train, X_test, y_train, y_test)
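The "holdout with 5 random seeds" the prompt asks for wraps the split in a loop over seeds and averages the metric. A dependency-free sketch of that evaluation loop, assuming nothing about sklearn (a majority-class stub stands in for the classifiers, and `seed_split` plays the role of `train_test_split`):

```python
import random

def seed_split(items, test_frac=0.2, seed=0):
    """Shuffle indices with the given seed and return (train, test) index lists."""
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    return idx[:cut], idx[cut:]

def holdout_over_seeds(X, y, fit_predict, seeds=(0, 1, 2, 3, 4)):
    """Average test accuracy over one holdout split per seed."""
    accs = []
    for seed in seeds:
        train, test = seed_split(X, seed=seed)
        preds = fit_predict([X[i] for i in train], [y[i] for i in train],
                            [X[i] for i in test])
        correct = sum(p == y[i] for p, i in zip(preds, test))
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Stub classifier: predicts the majority training label for every test item
def majority_clf(X_train, y_train, X_test):
    majority = max(set(y_train), key=y_train.count)
    return [majority] * len(X_test)

X = [f"doc{i}" for i in range(10)]
y = [0] * 8 + [1] * 2
avg_acc = holdout_over_seeds(X, y, majority_clf)
```

With sklearn, `seed_split` would become `train_test_split(..., random_state=seed)` and `fit_predict` would fit each classifier from the list above inside the seed loop before averaging.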
|
0f011db6f13ac7fd57cb49dbdc7b45ce
|
{
"intermediate": 0.3933906555175781,
"beginner": 0.3427845537662506,
"expert": 0.26382482051849365
}
|
48,224
|
With this HTML code:
<!DOCTYPE html>
<html lang="es">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Formulario de Contacto</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="container">
<h2>Formulario de Contacto</h2>
<form method="POST">
<div id="contacto">
<div>
<label for="nombre">Nombre:</label>
<input type="text" id="nombre" name="nombre">
</div>
<div>
<label for="edad">Edad:</label>
<input type="number" id="edad" name="edad">
</div>
<div>
<label for="email">Email:</label>
<input type="email" id="email" name="email">
</div>
<button type="button" onclick="agregarContacto()" id="btnReg">Añadir</button>
<button type="button" onclick="borrarContacto()">Borrar Último</button>
</div>
</form>
</div>
<script src="script.js"></script>
</body>
</html>
I want to make a POST request to http://localhost:9090/usuarios/crearUsuario
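The POST to that endpoint needs the three form fields serialized as JSON. A standard-library sketch of the payload the backend would receive — the endpoint URL comes from the question, the field names from the form, and nothing is actually sent here (in the page itself this would be a `fetch()` call in script.js):

```python
import json
import urllib.request

def build_create_user_request(nombre, edad, email,
                              url="http://localhost:9090/usuarios/crearUsuario"):
    """Build (but do not send) a JSON POST request for the contact-form fields."""
    payload = json.dumps({"nombre": nombre, "edad": edad, "email": email}).encode("utf-8")
    return urllib.request.Request(url, data=payload,
                                  headers={"Content-Type": "application/json"},
                                  method="POST")

req = build_create_user_request("Ana", 30, "[email protected]")
```

Sending it (`urllib.request.urlopen(req)`) requires the backend on port 9090 to be running.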
|
d876407a0ddf3c6db534f998cfa91367
|
{
"intermediate": 0.4243190586566925,
"beginner": 0.3397141993045807,
"expert": 0.23596669733524323
}
|
48,225
|
I need to modify the code below because I want to open the file (permanent6.accdr) via Microsoft Access Runtime 2010 specifically, not another Access install, since several Access applications are installed on the system. For reference, here is the version information for Microsoft Access Runtime 2010 in both the 64-bit and 32-bit cases:
64-bit:
Name: Microsoft Access Runtime 2010
Publisher: Microsoft Corporation
Version: 14.0.7015.1000
32-bit:
Name: Microsoft Access Runtime 2010
Publisher: Microsoft Corporation
Version: 14.0.4763.1000
The code:
#If Win64 Then
Private Declare PtrSafe Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr
Private Declare PtrSafe Function SetForegroundWindow Lib "user32" (ByVal hwnd As LongPtr) As Long
Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As LongPtr)
#Else
Private Declare Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As Long
Private Declare Function SetForegroundWindow Lib "user32" (ByVal hwnd As Long) As Long
Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If
Sub OpenAccessAndInputCode()
Dim strCmd As String, st_path_OtherDB As String
Dim hwnd As LongPtr
Dim Start As Double, Elapsed As Double
' Path of the database you want to open
st_path_OtherDB = Application.CurrentProject.Path & "\permanent6.accdr"
' Build the command to open Access with the specified database
strCmd = """" & SysCmd(acSysCmdAccessDir) & "MSACCESS.exe"" /runtime """ & st_path_OtherDB & """"
Shell strCmd, vbMaximizedFocus
Start = Timer
Do
' Replace "Password Required" with the actual title of the dialog window you expect
hwnd = FindWindow(vbNullString, "Password Required")
If hwnd <> 0 Then
' If the window is found, bring it to the foreground
SetForegroundWindow hwnd
' Short delay before sending keys, to make sure the window is ready
Sleep 500
' Type the code and press Enter
SendKeys " Bw2H9yem0tr9P4JrPmfxQ670f3s5a6qM©®§k™®dj3H9Z cl8b1 xG ", True
Sleep 100
SendKeys "{ENTER}", True
Exit Do
End If
Sleep 1000 ' Check once per second
' Failure timeout: stop after 60 seconds to avoid an infinite loop
Elapsed = Timer - Start
If Elapsed > 60 Then Exit Do
Loop
End Sub
|
6fc574ee1f08f1516e9299080df66452
|
{
"intermediate": 0.29377150535583496,
"beginner": 0.5346179604530334,
"expert": 0.1716105341911316
}
|
48,226
|
I want to merge the following code (
#If Win64 Then
Private Declare PtrSafe Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr
Private Declare PtrSafe Function SetForegroundWindow Lib "user32" (ByVal hwnd As LongPtr) As Long
Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As LongPtr)
#Else
Private Declare Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As Long
Private Declare Function SetForegroundWindow Lib "user32" (ByVal hwnd As Long) As Long
Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If)
with the code below, on the condition that they do not conflict and without duplication:
#If VBA7 Then
Private Declare PtrSafe Function apiShowWindow Lib "User32" _
Alias "ShowWindow" (ByVal hwnd As LongPtr, _
ByVal nCmdShow As Long) As Long
#Else
Private Declare Function apiShowWindow Lib "User32" _
Alias "ShowWindow" (ByVal hwnd As Long, _
ByVal nCmdShow As Long) As Long
#End If
#If Mac Then
' ignore
#Else
#If VBA7 Then
Declare PtrSafe Function GlobalUnlock Lib "kernel32" (ByVal hMem As LongPtr) As LongPtr
Declare PtrSafe Function GlobalLock Lib "kernel32" (ByVal hMem As LongPtr) As LongPtr
Declare PtrSafe Function GlobalAlloc Lib "kernel32" (ByVal wFlags As Long, _
ByVal dwBytes As LongPtr) As LongPtr
Declare PtrSafe Function CloseClipboard Lib "User32" () As Long
Declare PtrSafe Function OpenClipboard Lib "User32" (ByVal hwnd As LongPtr) As LongPtr
Declare PtrSafe Function EmptyClipboard Lib "User32" () As Long
Declare PtrSafe Function lstrcpy Lib "kernel32" (ByVal lpString1 As Any, _
ByVal lpString2 As Any) As LongPtr
Declare PtrSafe Function SetClipboardData Lib "User32" (ByVal wFormat _
As Long, ByVal hMem As LongPtr) As LongPtr
#Else
Declare Function GlobalUnlock Lib "kernel32" (ByVal hMem As Long) As Long
Declare Function GlobalLock Lib "kernel32" (ByVal hMem As Long) As Long
Declare Function GlobalAlloc Lib "kernel32" (ByVal wFlags As Long, _
ByVal dwBytes As Long) As Long
Declare Function CloseClipboard Lib "User32" () As Long
Declare Function OpenClipboard Lib "User32" (ByVal hwnd As Long) As Long
Declare Function EmptyClipboard Lib "User32" () As Long
Declare Function lstrcpy Lib "kernel32" (ByVal lpString1 As Any, _
ByVal lpString2 As Any) As Long
Declare Function SetClipboardData Lib "User32" (ByVal wFormat _
As Long, ByVal hMem As Long) As Long
#End If
|
2727425fb79fb2e1f8a499350cf6e8b8
|
{
"intermediate": 0.3301025331020355,
"beginner": 0.46139582991600037,
"expert": 0.20850159227848053
}
|
48,227
|
from PyQt5.QtWidgets import QApplication, QMainWindow, QLabel, QLineEdit, QPushButton, QVBoxLayout, QWidget, QMessageBox, QGroupBox, QCheckBox, QFileDialog, QRadioButton
from PyQt5.QtCore import Qt
import paramiko
import re
import networkx as nx
from pyvis.network import Network
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import sys
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.passthrough_ips = ["10.100.100.113", "172.28.160.1", "172.28.20.1", "172.28.192.1"] # IP addresses to pass over
self.setWindowTitle("Network Topoloji Çizici")
self.setGeometry(100, 100, 400, 300) # Height increased
self.setup_ui()
def setup_ui(self):
layout = QVBoxLayout()
# IP input
ip_group_box = QGroupBox("Cihaz IP:")
ip_layout = QVBoxLayout()
self.ip_input = QLineEdit(self)
ip_layout.addWidget(self.ip_input)
ip_group_box.setLayout(ip_layout)
layout.addWidget(ip_group_box)
# Username input
username_group_box = QGroupBox("Kullanıcı Adı:")
username_layout = QVBoxLayout()
self.username_input = QLineEdit(self)
username_layout.addWidget(self.username_input)
username_group_box.setLayout(username_layout)
layout.addWidget(username_group_box)
# Password input
password_group_box = QGroupBox("Şifre:")
password_layout = QVBoxLayout()
self.password_input = QLineEdit(self)
self.password_input.setEchoMode(QLineEdit.Password)
password_layout.addWidget(self.password_input)
password_group_box.setLayout(password_layout)
layout.addWidget(password_group_box)
# Option to show switch names
self.show_switch_names_checkbox = QCheckBox("Switch İsimlerini Göster (IP Gösterimi için tik'i kaldır)")
layout.addWidget(self.show_switch_names_checkbox)
# Radio buttons for the new options
self.non_nexus_radio = QRadioButton("Cisco Catalyst Model_SW_Topolojisi")
self.nexus_only_radio = QRadioButton("Cisco Nexus Model_SW_Topolojisi")
#self.all_devices_radio = QRadioButton("Nexus ve Diğerleri")
# Group the radio buttons
radio_group_box = QGroupBox("Cihaz Türü Seçimi:")
radio_layout = QVBoxLayout()
radio_layout.addWidget(self.non_nexus_radio)
radio_layout.addWidget(self.nexus_only_radio)
#radio_layout.addWidget(self.all_devices_radio)
radio_group_box.setLayout(radio_layout)
layout.addWidget(radio_group_box)
# Create button
self.submit_button = QPushButton("Oluştur", self)
self.submit_button.clicked.connect(self.generate_topology)
layout.addWidget(self.submit_button)
container = QWidget()
container.setLayout(layout)
self.setCentralWidget(container)
# Style tweaks
self.setStyleSheet("QGroupBox { padding-top: 10px; margin-top: 5px; }")
# Button size
self.submit_button.setFixedHeight(40)
# Input field widths
self.ip_input.setMinimumWidth(200)
self.username_input.setMinimumWidth(200)
self.password_input.setMinimumWidth(200)
# Set the radio buttons' default selection state
self.non_nexus_radio.setChecked(True)
self.show_switch_names_checkbox.setChecked(True)
def ssh_connect(self, ip, username, password, timeout=2):
try:
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ip, username=username, password=password, timeout=timeout)
return client
except paramiko.AuthenticationException:
print(f"SSH authentication failed for {ip}")
except paramiko.SSHException as e:
print(f"SSH connection failed for {ip}: {str(e)}")
except Exception as e:
print(f"Error connecting to {ip}: {str(e)}")
return None
def get_neighbors_info(self, ssh_client, option):
try:
stdin, stdout, stderr = ssh_client.exec_command("sh cdp ne detail")
output = stdout.read().decode('utf-8')
if option == "Nexus_only":
# Filter only Nexus devices
neighbor_info = re.findall(r"Device ID:(.+?)\n\s+IPv4 Address:\s+(\d+\.\d+\.\d+\.\d+)", output, re.DOTALL)
elif option == "Non_nexus":
# Filter out Nexus devices (non-Nexus only)
neighbor_info = re.findall(r"Device ID: (.+?)\n\s+IP address: (\d+\.\d+\.\d+\.\d+)", output, re.DOTALL)
elif option == "All_devices":
# Use all devices
neighbor_info = re.findall(r"Device ID:(.+?)\n\s+IP(?:v4)? Address:\s+(\d+\.\d+\.\d+\.\d+)", output, re.DOTALL)
else:
print("Invalid option")
return []
return neighbor_info
except Exception as e:
print(f"Error getting neighbors' info: {str(e)}")
return []
def get_switch_name(self, ssh_client):
try:
stdin, stdout, stderr = ssh_client.exec_command("sh run | inc hostname")
output = stdout.read().decode('utf-8')
match = re.search(r'hostname\s+(.*)', output)
if match:
return match.group(1)
else:
return None
except Exception as e:
print(f"Error getting switch name: {str(e)}")
return None
def get_switch_vlans(self, output):
vlans = []
vlan_pattern = re.compile(r"(\d+)\s+(\S+)\s+(?:active|suspend)\s+\S+")
vlan_matches = vlan_pattern.findall(output)
for number, _ in vlan_matches:
vlans.append(f"VLAN {number}") # Append only the VLAN number
if not vlans:
print("No VLAN numbers found in the output:", output)
return vlans
def explore_neighbors(self, device_ip, username, password, G, depth, visited_devices, switch_names, option):
if device_ip in self.passthrough_ips: # If this is one of the IPs to pass over
return # Skip it
if depth <= 0:
return
with ThreadPoolExecutor(max_workers=5) as executor:
futures = []
device_client = self.ssh_connect(device_ip, username, password)
if device_client:
print(f"Successfully connected to device {device_ip}.")
neighbor_info = self.get_neighbors_info(device_client, option)
print(f"Neighbor info of {device_ip}:", neighbor_info)
for name, ip in neighbor_info:
if ip not in self.passthrough_ips: # If the neighbor IP is not one of the pass-over IPs
G.add_node(ip, label=name)
G.add_edge(device_ip, ip)
if ip not in visited_devices:
visited_devices.add(ip)
switch_names[ip] = self.get_switch_name(self.ssh_connect(ip, username, password))
futures.append(executor.submit(self.explore_neighbors, ip, username, password, G, depth - 1, visited_devices, switch_names, option))
for future in futures:
future.result()
device_client.close()
else:
print(f"Unable to connect to device {device_ip}.")
def show_message_box(self, title, message, icon=QMessageBox.Information):
msg_box = QMessageBox()
msg_box.setIcon(icon)
msg_box.setWindowTitle(title)
msg_box.setText(message)
msg_box.exec_()
def generate_topology(self):
self.submit_button.setEnabled(False)
primary_device_ip = self.ip_input.text()
username = self.username_input.text()
password = self.password_input.text()
show_switch_names = self.show_switch_names_checkbox.isChecked()
if not primary_device_ip or not username or not password:
self.show_message_box("Hata", "Lütfen tüm alanları doldurun.", QMessageBox.Warning)
self.submit_button.setEnabled(True)
return
self.show_message_box("Bilgi", "Topoloji oluşturma işlemi başladı.")
G = nx.Graph()
visited_devices = set()
switch_names = {}
visited_devices.add(primary_device_ip)
switch_names[primary_device_ip] = self.get_switch_name(self.ssh_connect(primary_device_ip, username, password))
option = ""
if self.nexus_only_radio.isChecked():
option = "Nexus_only"
elif self.non_nexus_radio.isChecked():
option = "Non_nexus"
# all_devices_radio is commented out in setup_ui, so guard the attribute access
elif hasattr(self, "all_devices_radio") and self.all_devices_radio.isChecked():
option = "All_devices"
self.explore_neighbors(primary_device_ip, username, password, G, 3, visited_devices, switch_names, option)
all_vlans = set()
net = Network()
for node in G.nodes():
try:
device_client = self.ssh_connect(node, username, password, timeout=3)
if device_client:
stdin, stdout, stderr = device_client.exec_command("sh vlan | inc Gi0|Gi1|Gi2|Gi3|Gi4|Gi5|Gi6|Gi7|Gi8|Te0|Te1|Te2|Te3|Te4|Te5|Te6|Te7|Eth1|Eth2|Eth3|Eth4|Eth5|Eth6|Eth7|Eth8|Po1")
output = stdout.read().decode()
vlans = self.get_switch_vlans(output)
vlan_str = "\n".join(vlans)
if show_switch_names and node in switch_names:
net.add_node(node, label=switch_names[node] + "\n" + vlan_str)
else:
net.add_node(node, label=node + "\n" + vlan_str)
device_client.close()
all_vlans.update(vlans)
else:
if show_switch_names and node in switch_names:
net.add_node(node, label=switch_names[node], color="red")
else:
net.add_node(node, label=node, color="red")
except TimeoutError:
print(f"Connection to device {node} timed out.")
if show_switch_names and node in switch_names:
net.add_node(node, label=switch_names[node], color="red")
else:
net.add_node(node, label=node, color="red")
print("All VLANs used across devices:", all_vlans)
for edge in G.edges():
net.add_edge(edge[0], edge[1])
net.show_buttons(filter_=['physics', 'layout','manipulation'])
options = QFileDialog.Options()
options |= QFileDialog.DontUseNativeDialog
file_name, _ = QFileDialog.getSaveFileName(self, "HTML Dosyasını Kaydet", "", "HTML Dosyası (*.html)", options=options)
if file_name:
if not file_name.endswith(".html"):
file_name += ".html"
net.write_html(file_name)
self.show_message_box("Bilgi", "Topoloji oluşturma işlemi tamamlandı.", QMessageBox.Information)
self.submit_button.setEnabled(True)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec_())
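The topology builder's core text-processing step is pulling (device name, IP) pairs out of `show cdp neighbors detail` output with the regexes in `get_neighbors_info`. That parsing can be exercised in isolation; the sample output below is illustrative, not from a real device, and the combined "All_devices" variant here also accepts a lowercase "address" (a small change from the original):

```python
import re

def parse_cdp_neighbors(output, option="All_devices"):
    """Extract (device_id, ip) pairs from 'show cdp neighbors detail' text,
    mirroring the three regex variants used in get_neighbors_info."""
    patterns = {
        "Nexus_only": r"Device ID:(.+?)\n\s+IPv4 Address:\s+(\d+\.\d+\.\d+\.\d+)",
        "Non_nexus": r"Device ID: (.+?)\n\s+IP address: (\d+\.\d+\.\d+\.\d+)",
        "All_devices": r"Device ID:(.+?)\n\s+IP(?:v4)? [Aa]ddress:\s+(\d+\.\d+\.\d+\.\d+)",
    }
    return re.findall(patterns[option], output, re.DOTALL)

# Illustrative Catalyst-style CDP output fragment
sample = (
    "Device ID: SW-EDGE-01\n"
    "  IP address: 10.0.0.2\n"
)
pairs = parse_cdp_neighbors(sample, option="Non_nexus")
```

Testing the regexes against captured output like this, rather than only over live SSH, makes it much easier to see why the Nexus and Catalyst variants differ (capital "IPv4 Address:" vs. "IP address:").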
|
36f51fbf8591c4dcaebb7eec51914d71
|
{
"intermediate": 0.3094700574874878,
"beginner": 0.4666910767555237,
"expert": 0.2238388955593109
}
|
48,228
|
I have this C# program that I want to turn into a Python program. Transform this code into Python, converting all C#-specific functions to Python equivalents so there are no errors and everything works perfectly. Don't transform/implement the functions that aren't related to the HWID spoofer; I just want the HWID spoofer.
using Microsoft.Win32;
using System;
using System.IO;
using System.Net.NetworkInformation;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Timers;
namespace HWID_Changer {
class Program {
public static void CheckRegistryKeys() {
try {
CheckRegistryKey("SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", "InstallationID");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ComputerName", "ComputerName");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ComputerName", "ActiveComputerName");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ComputerNamePhysicalDnsDomain", "");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ActiveComputerName", "ComputerName");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ActiveComputerName", "ActiveComputerName");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ActiveComputerName", "ComputerNamePhysicalDnsDomain");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters", "Hostname");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters", "NV Hostname");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters\\Interfaces", "Hostname");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters\\Interfaces", "NV Hostname");
CheckRegistryKey("HARDWARE\\DEVICEMAP\\Scsi", ""); // ScsiPorts
CheckRegistryKey("HARDWARE\\DEVICEMAP\\Scsi\\{port}", ""); // ScsiBuses
CheckRegistryKey("HARDWARE\\DEVICEMAP\\Scsi\\{port}\\{bus}\\Target Id 0\\Logical Unit Id 0", "DeviceIdentifierPage");
CheckRegistryKey("HARDWARE\\DEVICEMAP\\Scsi\\{port}\\{bus}\\Target Id 0\\Logical Unit Id 0", "Identifier");
CheckRegistryKey("HARDWARE\\DEVICEMAP\\Scsi\\{port}\\{bus}\\Target Id 0\\Logical Unit Id 0", "InquiryData");
CheckRegistryKey("HARDWARE\\DEVICEMAP\\Scsi\\{port}\\{bus}\\Target Id 0\\Logical Unit Id 0", "SerialNumber");
CheckRegistryKey("HARDWARE\\DESCRIPTION\\System\\MultifunctionAdapter\\0\\DiskController\\0\\DiskPeripheral", ""); // DiskPeripherals
CheckRegistryKey("HARDWARE\\DESCRIPTION\\System\\MultifunctionAdapter\\0\\DiskController\\0\\DiskPeripheral\\{disk}", "Identifier");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\IDConfigDB\\Hardware Profiles\\0001", "HwProfileGuid");
CheckRegistryKey("SOFTWARE\\Microsoft\\Cryptography", "MachineGuid");
CheckRegistryKey("SOFTWARE\\Microsoft\\SQMClient", "MachineId");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "BIOSReleaseDate");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "BIOSVersion");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "ComputerHardwareId");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "ComputerHardwareIds");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "ComputerManufacturer");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "ComputerModel");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "InstallDate");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemBiosMajorVersion");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemBiosMinorVersion");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemBiosVersion");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemManufacturer");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemProductName");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemSku");
CheckRegistryKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", "SystemVersion");
} catch (Exception ex) {
Console.WriteLine("Error checking the registry key: " + ex.Message);
}
}
public static void CheckRegistryKey(string keyPath, string valueName) {
RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath);
if (key != null) {
if (!string.IsNullOrEmpty(valueName)) {
if (key.GetValue(valueName) == null) {
Console.WriteLine("Registry-Key not found: " + keyPath + "\\" + valueName);
}
} else {
if (key.SubKeyCount == 0) {
Console.WriteLine("Registry-Key not found: " + keyPath);
}
}
} else {
Console.WriteLine("Registry-Key not found: " + keyPath);
}
}
public static void SpoofInstallationID() {
using(RegistryKey key = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", true)) {
if (key != null) {
string newInstallationID = Guid.NewGuid().ToString();
key.SetValue("InstallationID", newInstallationID);
key.Close();
}
}
}
public static void SpoofPCName() {
string randomName = RandomId(8); // Generate a random PC name
using RegistryKey computerName = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ComputerName", true);
computerName.SetValue("ComputerName", randomName);
computerName.SetValue("ActiveComputerName", randomName);
computerName.SetValue("ComputerNamePhysicalDnsDomain", "");
using RegistryKey activeComputerName = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ActiveComputerName", true);
activeComputerName.SetValue("ComputerName", randomName);
activeComputerName.SetValue("ActiveComputerName", randomName);
activeComputerName.SetValue("ComputerNamePhysicalDnsDomain", "");
using RegistryKey tcpipParams = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters", true);
tcpipParams.SetValue("Hostname", randomName);
tcpipParams.SetValue("NV Hostname", randomName);
using RegistryKey tcpipInterfaces = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters\\Interfaces", true);
foreach(string interfaceKey in tcpipInterfaces.GetSubKeyNames()) {
using RegistryKey interfaceSubKey = tcpipInterfaces.OpenSubKey(interfaceKey, true);
interfaceSubKey.SetValue("Hostname", randomName);
interfaceSubKey.SetValue("NV Hostname", randomName);
}
}
public static string RandomId(int length) {
string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
string result = "";
Random random = new Random();
for (int i = 0; i < length; i++) {
result += chars[random.Next(chars.Length)];
}
return result;
}
public static string RandomMac() {
string chars = "ABCDEF0123456789";
string windows = "26AE";
string result = "";
Random random = new Random();
result += chars[random.Next(chars.Length)];
result += windows[random.Next(windows.Length)];
for (int i = 0; i < 5; i++) {
result += "-";
result += chars[random.Next(chars.Length)];
result += chars[random.Next(chars.Length)];
}
return result;
}
public static void Enable_LocalAreaConection(string adapterId, bool enable = true) {
string interfaceName = "Ethernet";
foreach(NetworkInterface i in NetworkInterface.GetAllNetworkInterfaces()) {
if (i.Id == adapterId) {
interfaceName = i.Name;
break;
}
}
string control;
if (enable)
control = "enable";
else
control = "disable";
System.Diagnostics.ProcessStartInfo psi = new System.Diagnostics.ProcessStartInfo("netsh", $"interface set interface \"{interfaceName}\" {control}");
System.Diagnostics.Process p = new System.Diagnostics.Process();
p.StartInfo = psi;
p.Start();
p.WaitForExit();
}
public static void SpoofDisks() {
using RegistryKey ScsiPorts = Registry.LocalMachine.OpenSubKey("HARDWARE\\DEVICEMAP\\Scsi");
foreach(string port in ScsiPorts.GetSubKeyNames()) {
using RegistryKey ScsiBuses = Registry.LocalMachine.OpenSubKey($"HARDWARE\\DEVICEMAP\\Scsi\\{port}");
foreach(string bus in ScsiBuses.GetSubKeyNames()) {
using RegistryKey ScsuiBus = Registry.LocalMachine.OpenSubKey($"HARDWARE\\DEVICEMAP\\Scsi\\{port}\\{bus}\\Target Id 0\\Logical Unit Id 0", true);
if (ScsuiBus != null) {
if (ScsuiBus.GetValue("DeviceType").ToString() == "DiskPeripheral") {
string identifier = RandomId(14);
string serialNumber = RandomId(14);
ScsuiBus.SetValue("DeviceIdentifierPage", Encoding.UTF8.GetBytes(serialNumber));
ScsuiBus.SetValue("Identifier", identifier);
ScsuiBus.SetValue("InquiryData", Encoding.UTF8.GetBytes(identifier));
ScsuiBus.SetValue("SerialNumber", serialNumber);
}
}
}
}
using RegistryKey DiskPeripherals = Registry.LocalMachine.OpenSubKey("HARDWARE\\DESCRIPTION\\System\\MultifunctionAdapter\\0\\DiskController\\0\\DiskPeripheral");
foreach(string disk in DiskPeripherals.GetSubKeyNames()) {
using RegistryKey DiskPeripheral = Registry.LocalMachine.OpenSubKey($"HARDWARE\\DESCRIPTION\\System\\MultifunctionAdapter\\0\\DiskController\\0\\DiskPeripheral\\{disk}", true);
DiskPeripheral.SetValue("Identifier", $ "{RandomId(8)}-{RandomId(8)}-A");
}
}
public static void SpoofGUIDs() {
using RegistryKey HardwareGUID = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\IDConfigDB\\Hardware Profiles\\0001", true);
HardwareGUID.SetValue("HwProfileGuid", $ "{{{Guid.NewGuid()}}}");
using RegistryKey MachineGUID = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Cryptography", true);
MachineGUID.SetValue("MachineGuid", Guid.NewGuid().ToString());
using RegistryKey MachineId = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\SQMClient", true);
MachineId.SetValue("MachineId", $ "{{{Guid.NewGuid()}}}");
using RegistryKey SystemInfo = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\SystemInformation", true);
Random rnd = new Random();
int day = rnd.Next(1, 31);
string dayStr = "";
if (day < 10) dayStr = $ "0{day}";
else dayStr = day.ToString();
int month = rnd.Next(1, 13);
string monthStr = "";
if (month < 10) monthStr = $ "0{month}";
else monthStr = month.ToString();
SystemInfo.SetValue("BIOSReleaseDate", $ "{dayStr}/{monthStr}/{rnd.Next(2000, 2023)}");
SystemInfo.SetValue("BIOSVersion", RandomId(10));
SystemInfo.SetValue("ComputerHardwareId", $ "{{{Guid.NewGuid()}}}");
SystemInfo.SetValue("ComputerHardwareIds", $ "{{{Guid.NewGuid()}}}\n{{{Guid.NewGuid()}}}\n{{{Guid.NewGuid()}}}\n");
SystemInfo.SetValue("SystemManufacturer", RandomId(15));
SystemInfo.SetValue("SystemProductName", RandomId(6));
using RegistryKey Update = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\WindowsUpdate", true);
Update.SetValue("SusClientId", Guid.NewGuid().ToString());
Update.SetValue("SusClientIdValidation", Encoding.UTF8.GetBytes(RandomId(25)));
}
public static void UbisoftCache() {
string appDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
string ubisoftPath = Path.Combine("Ubisoft Game Launcher", "cache");
string ubisoftLogsPath = Path.Combine("Ubisoft Game Launcher", "logs");
string ubisoftSavegamesPath = Path.Combine("Ubisoft Game Launcher", "savegames");
string ubisoftSpoolPath = Path.Combine("Ubisoft Game Launcher", "spool");
DirectoryInfo di = new DirectoryInfo(Path.Combine("C:", "Program Files (x86)", "Ubisoft", ubisoftPath));
DirectoryInfo di2 = new DirectoryInfo(Path.Combine("C:", "Program Files (x86)", "Ubisoft", ubisoftLogsPath));
DirectoryInfo di3 = new DirectoryInfo(Path.Combine("C:", "Program Files (x86)", "Ubisoft", ubisoftSavegamesPath));
DirectoryInfo di4 = new DirectoryInfo(Path.Combine(appDataPath, "Ubisoft Game Launcher", ubisoftSpoolPath));
foreach(FileInfo file in di.GetFiles()) {
file.Delete();
}
foreach(DirectoryInfo dir in di.GetDirectories()) {
dir.Delete(true);
}
foreach(FileInfo file in di2.GetFiles()) {
file.Delete();
}
foreach(DirectoryInfo dir in di2.GetDirectories()) {
dir.Delete(true);
}
foreach(FileInfo file in di3.GetFiles()) {
file.Delete();
}
foreach(DirectoryInfo dir in di3.GetDirectories()) {
dir.Delete(true);
}
foreach(FileInfo file in di4.GetFiles()) {
file.Delete();
}
foreach(DirectoryInfo dir in di4.GetDirectories()) {
dir.Delete(true);
}
}
public static void DeleteValorantCache() {
string valorantPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData) + "\\VALORANT\\saved";
if (Directory.Exists(valorantPath)) {
DirectoryInfo di = new DirectoryInfo(valorantPath);
foreach(FileInfo file in di.GetFiles()) {
file.Delete();
}
foreach(DirectoryInfo dir in di.GetDirectories()) {
dir.Delete(true);
}
}
}
public static bool SpoofMAC() //SpoofMacREAL
{
bool err = false;
using RegistryKey NetworkAdapters = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Class\\{4d36e972-e325-11ce-bfc1-08002be10318}");
foreach(string adapter in NetworkAdapters.GetSubKeyNames()) {
if (adapter != "Properties") {
try {
using RegistryKey NetworkAdapter = Registry.LocalMachine.OpenSubKey($"SYSTEM\\CurrentControlSet\\Control\\Class\\{{4d36e972-e325-11ce-bfc1-08002be10318}}\\{adapter}", true);
if (NetworkAdapter.GetValue("BusType") != null) {
NetworkAdapter.SetValue("NetworkAddress", RandomMac());
string adapterId = NetworkAdapter.GetValue("NetCfgInstanceId").ToString();
Enable_LocalAreaConection(adapterId, false);
Enable_LocalAreaConection(adapterId, true);
}
} catch (System.Security.SecurityException ex) {
Console.WriteLine("\n[X] Start the spoofer in admin mode to spoof your MAC address!");
err = true;
break;
}
}
}
return err;
}
public static void SpoofGPU() {
string keyName = @ "SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_0DE1&SUBSYS_37621462&REV_A1";
using(RegistryKey key = Registry.LocalMachine.OpenSubKey(keyName, true)) {
if (key != null) {
string newHardwareID = "PCIVEN_8086&DEV_1234&SUBSYS_5678ABCD&REV_01";
string oldHardwareID = key.GetValue("HardwareID") as string;
key.SetValue("HardwareID", newHardwareID);
key.SetValue("CompatibleIDs", new string[] {
newHardwareID
});
key.SetValue("Driver", "pci.sys");
key.SetValue("ConfigFlags", 0x00000000, RegistryValueKind.DWord);
key.SetValue("ClassGUID", "{4d36e968-e325-11ce-bfc1-08002be10318}");
key.SetValue("Class", "Display");
key.Close();
}
}
}
public static void SpoofEFIVariableId() {
try {
RegistryKey efiVariables = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Nsi\\{eb004a03-9b1a-11d4-9123-0050047759bc}\\26", true);
if (efiVariables != null) {
string efiVariableId = Guid.NewGuid().ToString();
efiVariables.SetValue("VariableId", efiVariableId);
efiVariables.Close();
}
} catch (Exception) {
Console.WriteLine("\n[X] Start the spoofer in admin mode to spoof your MAC address!");
}
}
public static void SpoofSMBIOSSerialNumber() {
try {
RegistryKey smbiosData = Registry.LocalMachine.OpenSubKey("HARDWARE\\DESCRIPTION\\System\\BIOS", true);
if (smbiosData != null) {
string serialNumber = RandomId(10);
smbiosData.SetValue("SystemSerialNumber", serialNumber);
smbiosData.Close();
} else {
Console.WriteLine("\n[X] Cant find the SMBIOS");
}
} catch (Exception) {
Console.WriteLine("\n[X] Start the spoofer in admin mode to spoof your MAC address!");
}
}
public static void DisplaySystemData() {
Console.WriteLine("System Data:");
Console.WriteLine("------------------------------------------------");
try {
// Display HWID
using(RegistryKey key = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", true)) {
string installationID = key.GetValue("InstallationID") as string;
Console.WriteLine("HWID: " + installationID);
}
} catch (Exception ex) {
Console.WriteLine("Error retrieving HWID: " + ex.Message);
}
try {
// Display GUIDs
using(RegistryKey machineGuidKey = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Cryptography")) {
string machineGuid = machineGuidKey.GetValue("MachineGuid") as string;
Console.WriteLine("Machine GUID: " + machineGuid);
}
} catch (Exception ex) {
Console.WriteLine("Error retrieving Machine GUID: " + ex.Message);
}
try {
// Display MAC ID
foreach(NetworkInterface networkInterface in NetworkInterface.GetAllNetworkInterfaces()) {
PhysicalAddress physicalAddress = networkInterface.GetPhysicalAddress();
Console.WriteLine("MAC ID (" + networkInterface.Name + "): " + physicalAddress.ToString());
}
} catch (Exception ex) {
Console.WriteLine("Error retrieving MAC ID: " + ex.Message);
}
try {
// Display Installation ID
using(RegistryKey key = Registry.LocalMachine.OpenSubKey("SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", true)) {
string installationID = key.GetValue("InstallationID") as string;
Console.WriteLine("Installation ID: " + installationID);
}
} catch (Exception ex) {
Console.WriteLine("Error retrieving Installation ID: " + ex.Message);
}
try {
// Display PC Name
using(RegistryKey computerName = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\ComputerName\\ComputerName")) {
string pcName = computerName.GetValue("ComputerName") as string;
Console.WriteLine("PC Name: " + pcName);
}
} catch (Exception ex) {
Console.WriteLine("Error retrieving PC Name: " + ex.Message);
}
try {
// Display GPU ID
using(RegistryKey gpuKey = Registry.LocalMachine.OpenSubKey(@ "SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_0DE1&SUBSYS_37621462&REV_A1")) {
string hardwareID = gpuKey.GetValue("HardwareID") as string;
Console.WriteLine("GPU ID: " + hardwareID);
}
} catch (Exception ex) {
Console.WriteLine("Error retrieving GPU ID: " + ex.Message);
}
try {
// Display CPU Information
string cpuInfo = string.Empty;
using(StreamReader reader = new StreamReader(@ "C:\proc\cpuinfo")) {
cpuInfo = reader.ReadToEnd();
}
Console.WriteLine("CPU Information: " + cpuInfo);
} catch (Exception ex) {
Console.WriteLine("Error retrieving CPU Information: " + ex.Message);
}
try {
// Display Memory Information
using(StreamReader reader = new StreamReader("/proc/meminfo")) {
string line;
while ((line = reader.ReadLine()) != null) {
Console.WriteLine("Memory Information: " + line);
}
}
} catch (Exception ex)
{
Console.WriteLine("Error retrieving Memory Information: " + ex.Message);
}
}
public static void Menu() {
Console.WriteLine("\n SecHex");
Console.Write(" Select an option: ");
string input = Console.ReadLine();
switch (input) {
case "1":
// Spoof disks
SpoofDisks();
Console.WriteLine("\n [+] Disks spoofed");
ClearConsoleAfterDelay();
Menu();
break;
case "2":
// Spoof GUIDs
SpoofGUIDs();
Console.WriteLine("\n [+] GUIDs spoofed");
ClearConsoleAfterDelay();
Menu();
break;
case "3":
// Spoof MAC address
SpoofMAC();
Console.WriteLine(" [+] MAC address spoofed");
ClearConsoleAfterDelay();
Menu();
break;
case "4":
// Delete Ubisoft cache
UbisoftCache();
Console.WriteLine("\n [+] Ubisoft Cache deleted");
ClearConsoleAfterDelay();
Menu();
break;
case "5":
// Delete Valorant cache
DeleteValorantCache();
Console.WriteLine("\n [+] Valorant Cache deleted");
ClearConsoleAfterDelay();
Menu();
break;
case "6":
// Spoof GPU ID
SpoofGPU();
Console.WriteLine("\n [+] GPU ID Spoofed");
ClearConsoleAfterDelay();
Menu();
break;
case "7":
// Spoof PC Name
SpoofPCName();
Console.WriteLine("\n [+] PC name spoofed");
ClearConsoleAfterDelay();
Menu();
break;
case "8":
// Spoof Installation ID
SpoofInstallationID();
Console.WriteLine("\n [+] Installation ID spoofed");
ClearConsoleAfterDelay();
Menu();
break;
case "9":
// Spoof EFI
SpoofEFIVariableId();
ClearConsoleAfterDelay();
Menu();
break;
case "10":
// Spoof smbios
SpoofSMBIOSSerialNumber();
ClearConsoleAfterDelay();
Menu();
break;
case "11":
// Check registry
CheckRegistryKeys();
ClearConsoleAfterDelay2();
Menu();
break;
case "12":
// get sys data
DisplaySystemData();
ClearConsoleAfterDelay();
Menu();
break;
case "13":
// Spoof all
SpoofDisks();
SpoofGUIDs();
SpoofMAC();
UbisoftCache();
DeleteValorantCache();
SpoofGPU();
SpoofPCName();
SpoofEFIVariableId();
SpoofInstallationID();
Console.WriteLine("\n [+] All commands executed");
ClearConsoleAfterDelay();
Menu();
break;
case "exit":
Environment.Exit(0);
break;
default:
Console.WriteLine("\n [X] Invalid option!");
ClearConsoleAfterDelay();
Menu();
break;
}
}
private static void ClearConsoleAfterDelay() {
Thread.Sleep(3000);
Console.Clear();
Main();
}
private static void ClearConsoleAfterDelay2() {
Thread.Sleep(6000);
Console.Clear();
Main();
}
static void Main() {
Console.Title = "SecHex | V1.3 | Open Source | github/SecHex";
Console.ForegroundColor = ConsoleColor.Magenta;
Console.Clear();
Console.WriteLine("███████╗██████╗ ██████╗ ██████╗ ███████╗██╗ ██╗ ");
Console.WriteLine("██╔════╝██╔══██╗██╔═══██╗██╔═══██╗██╔════╝╚██╗ ██╔╝ ");
Console.WriteLine("███████╗██████╔╝██║ ██║██║ ██║█████╗ ╚████╔╝ ");
Console.WriteLine("╚════██║██╔═══╝ ██║ ██║██║ ██║██╔══╝ ╚██╔╝ ");
Console.WriteLine("███████║██║ ╚██████╔╝╚██████╔╝██║ ██║ ");
Console.WriteLine("╚══════╝╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ");
Console.WriteLine("https://github.com/SecHex ");
Console.WriteLine(" ");
Console.WriteLine("[1] Spoof HWID [7] Spoof PC Name ");
Console.WriteLine("[2] Spoof GUID [8] Spoof Installation ID ");
Console.WriteLine("[3] Spoof MAC ID [9] Spoof EFI ");
Console.WriteLine("[4] Delete UBI Cache [10] Spoof SMBIOS ");
Console.WriteLine("[5] Delete Valorant Cache ");
Console.WriteLine("[6] Spoof GPU ID ");
Console.WriteLine(" ");
Console.WriteLine("[11] Check Registry Paths ");
Console.WriteLine("[12] Get System informations ");
Console.WriteLine("[13] Spoof all ");
Console.WriteLine("[exit] Exit ");
Console.WriteLine(" ");
Console.ForegroundColor = ConsoleColor.Green;
Menu();
}
}
}
|
f05ac356f22887ea2d8398b8c7e5db97
|
{
"intermediate": 0.3346724510192871,
"beginner": 0.316365510225296,
"expert": 0.34896209836006165
}
|
48,229
|
print("Veuillez entrer un nombre valide.")
Voici l'erreur obtenue :
line 438, in main
allowable_duration = video_duration - (starting_offset_seconds + ending_offset_seconds)
UnboundLocalError: local variable 'video_duration' referenced before assignment
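A minimal sketch of the failure mode behind this traceback, assuming the real code assigns `video_duration` inside a branch that can be skipped (the `clip` object and its `duration` attribute below are hypothetical stand-ins). The fix is to give the name a value on every path before it is used:

```python
def allowable(starting_offset_seconds, ending_offset_seconds, clip=None):
    if clip is not None:
        video_duration = clip.duration  # only assigned on this branch
    # If clip is None, the next line raises UnboundLocalError.
    return video_duration - (starting_offset_seconds + ending_offset_seconds)


def allowable_fixed(starting_offset_seconds, ending_offset_seconds, clip=None):
    video_duration = 0.0  # default before any branch, so the name always exists
    if clip is not None:
        video_duration = clip.duration
    return video_duration - (starting_offset_seconds + ending_offset_seconds)
```

In the original script, the equivalent fix is to initialize `video_duration` (or raise a clear error) before the branch that computes it can be skipped.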
|
1ce59a9b742c388dd0e4c23bd784db01
|
{
"intermediate": 0.16799010336399078,
"beginner": 0.6865079998970032,
"expert": 0.14550191164016724
}
|
48,231
|
Consider the following
1. 10 letter words which are composed of only the letters A and B are mutated by removing a single letter from a fixed position.
2. Three examples of these words and their mutated words will be given.
Write a python script that does the following
1. It should ask for all three words and their mutated words. They are given as pairs of a word and its corresponding mutated word, separated by a space. Each pair must be asked for individually.
2. Then from the given examples identify the position of the removed letter within the 10 letter words and show the position.
|
d62d74eb7b3a902ef36661a556420f0b
|
{
"intermediate": 0.40879806876182556,
"beginner": 0.21827247738838196,
"expert": 0.37292951345443726
}
|
48,232
|
hi
|
6df5a42d0485935d1445faede6209c62
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
48,233
|
hi
|
c2bc0088b8918a7235329b4b212d5e8e
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
48,234
|
This is my df:
refseq_acc accs path
0 GCF_000146045.2 GCF_000146045 https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/0...
1 GCF_000002985.3 GCF_000002985 https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/0...
2 GCF_000001735.2 GCF_000001735 https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/0...
the "path" columns has paths that lead to something like this:
Name Last modified Size
Parent Directory -
GCF_000002285.1_CanFam2.0/ 2024-04-27 01:04 -
GCF_000002285.2_CanFam2.0/ 2024-04-27 01:38 -
GCF_000002285.3_CanFam3.1/ 2024-04-27 01:34 -
GCF_000002285.5_Dog10K_Boxer_Tasha/ 2024-04-27 15:23 -
How can I loop through the whole initial dataframe efficiently and get all the directories?
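A sketch of one approach, with the HTML parsing separated from the HTTP call so it can be tested offline. The regex-based parser and the `requests.Session` reuse (one TCP connection across all rows) are assumptions, not part of the original question; the column name `path` matches the DataFrame shown above:

```python
import re


def parse_directories(html):
    """Pull hrefs that end with '/' out of an NCBI-style index page,
    skipping absolute links such as the parent-directory entry."""
    return [h for h in re.findall(r'href="([^"]+)"', html)
            if h.endswith("/") and not h.startswith("/")]


def fetch_directories(url, session):
    # A shared Session reuses connections, which matters across many rows.
    resp = session.get(url, timeout=30)
    resp.raise_for_status()
    return parse_directories(resp.text)

# Usage sketch (assumes `df` is the DataFrame above):
# import requests
# with requests.Session() as s:
#     df["dirs"] = [fetch_directories(u, s) for u in df["path"]]
```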
|
c470b3f3403c20f30e8b026702aa58f8
|
{
"intermediate": 0.4400245249271393,
"beginner": 0.21353554725646973,
"expert": 0.3464398980140686
}
|
48,235
|
import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
mt5.shutdown()
symbol = 'XAUUSDm'
timeframe = mt5.TIMEFRAME_M5
rates = mt5.copy_rates_from_pos(symbol, timeframe, 0, 5000)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s')
df.set_index('time', inplace=True)
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
df['macd'] = 0
df['trf'] = 0
df['engulfing'] = 0
print(df)
def macd(df):
df = df.copy()
macd, signal, hist = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9) # Corrected
if macd.iloc[-1] > signal.iloc[-1]:
return 1
elif macd.iloc[-1] < signal.iloc[-1]:
return 2
else:
return 0
def twin_range_filter(df):
df = df.copy()
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df['TRF'] = TRF
df['long_signal'] = np.append([False], long_signal)
df['short_signal'] = np.append([False], short_signal)
if df.iloc[-1]['long_signal']:
return 1
elif df.iloc[-1]['short_signal']:
return 2
def detect_engulfing(df):
df = df.copy()
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return 1
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return 2
else:
return 0
df['macd'] = df.apply(lambda x: macd(df.loc[:x.name]), axis=1)
df['trf'] = df.apply(lambda x: twin_range_filter(df.loc[:x.name]), axis=1)
df['engulfing'] = df.apply(lambda x: detect_engulfing(df.loc[:x.name]), axis=1)
df['signal'] = np.where((df['macd'] == 2) & (df['trf'] == 1) & (df['engulfing'] == 1), 1,
np.where((df['macd'] == 1) & (df['trf'] == 2) & (df['engulfing'] == 2), 2, 0))
count_sell_signals = df[df['signal'] == 2].shape[0]
print("Jumlah sinyal sell:", count_sell_signals)
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df = df.iloc[100:200]
# print(df)
# def SIGNAL():
# return df.signal
class ChubbStrategy(Strategy):
def init(self):
super().init()
self.signal1 = self.I(lambda: df['signal'])
def next(self):
super().next()
if self.signal1==2:
sl1 = self.data.Close[-1] - 750e-4
tp1 = self.data.Close[-1] + 600e-4
self.buy(sl=sl1, tp=tp1)
elif self.signal1==1:
sl1 = self.data.Close[-1] + 750e-4
tp1 = self.data.Close[-1] - 600e-4
self.sell(sl=sl1, tp=tp1)
bt = Backtest(df, ChubbStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
|
7cf11adc6806894c3efacde35588b9a7
|
{
"intermediate": 0.4325147569179535,
"beginner": 0.2997351884841919,
"expert": 0.2677500247955322
}
|
48,236
|
Consider this problem
1. There are a given number of slots and each slot can stack at most 4 blocks in it.
2. Each slot may contain any number of blocks stacked in it, up to the maximum, or it may even be empty.
3. Each block can only be one of two possible types, R and B.
4. There are only 4 R blocks but the number of B blocks can vary
5. The problem is to find the moves necessary to obtain all the 4 R blocks in any single slot.
6. The only permitted move is to move any block from the top of any slot which has blocks onto any non fully filled slot.
Write a python script that does the following
1. Asks for the total number of slots.
2. Asks for the number of non empty slots.
3. Then asks for the blocks in each non-empty slot, given as a string formed by joining the block types together from top to bottom.
4. Show the moves that can solve the above problem.
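A brute-force breadth-first search is one way to sketch point 4 (it finds a shortest move sequence, but is exponential in general, so it only suits small inputs). A state is a tuple of strings, each listing a slot's blocks from top to bottom; the goal is any slot equal to "RRRR":

```python
from collections import deque


def solve(slots, capacity=4):
    """Return a shortest list of (from_slot, to_slot) 1-based moves,
    [] if already solved, or None if unsolvable."""
    start = tuple(slots)
    goal = lambda st: any(s == "RRRR" for s in st)
    if goal(start):
        return []
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, moves = queue.popleft()
        for i, src in enumerate(state):
            if not src:
                continue
            block, rest = src[0], src[1:]  # top block and the remainder
            for j, dst in enumerate(state):
                if j == i or len(dst) >= capacity:
                    continue
                nxt = list(state)
                nxt[i] = rest
                nxt[j] = block + dst  # moved block becomes the new top
                nxt = tuple(nxt)
                if nxt in seen:
                    continue
                new_moves = moves + [(i + 1, j + 1)]
                if goal(nxt):
                    return new_moves
                seen.add(nxt)
                queue.append((nxt, new_moves))
    return None
```

For example, with slots ["RB", "RR", "R", ""], two moves suffice: the top R of slot 1 onto slot 2, then the R of slot 3 onto slot 2.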
|
1ee085ad3bfd1e305f20ab94d7b6e0b4
|
{
"intermediate": 0.3041556477546692,
"beginner": 0.20918036997318268,
"expert": 0.48666396737098694
}
|
48,237
|
import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
mt5.shutdown()
symbol = 'XAUUSDm'
timeframe = mt5.TIMEFRAME_M5
rates = mt5.copy_rates_from_pos(symbol, timeframe, 0, 5000)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s')
df.set_index('time', inplace=True)
df.drop(['spread', 'real_volume'], axis=1, inplace=True)
df['macd'] = 0
df['trf'] = 0
df['engulfing'] = 0
print(df)
def macd(df):
df = df.copy()
macd, signal, hist = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9) # Corrected
if macd.iloc[-1] > signal.iloc[-1]:
return 1
elif macd.iloc[-1] < signal.iloc[-1]:
return 2
else:
return 0
def twin_range_filter(df):
df = df.copy()
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df['TRF'] = TRF
df['long_signal'] = np.append([False], long_signal)
df['short_signal'] = np.append([False], short_signal)
if df.iloc[-1]['long_signal']:
return 1
elif df.iloc[-1]['short_signal']:
return 2
def detect_engulfing(df):
df = df.copy()
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return 1
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return 2
else:
return 0
df['macd'] = df.apply(lambda x: macd(df.loc[:x.name]), axis=1)
df['trf'] = df.apply(lambda x: twin_range_filter(df.loc[:x.name]), axis=1)
df['engulfing'] = df.apply(lambda x: detect_engulfing(df.loc[:x.name]), axis=1)
df['signal'] = np.where((df['macd'] == 2) & (df['trf'] == 1) & (df['engulfing'] == 1), 1,
np.where((df['macd'] == 1) & (df['trf'] == 2) & (df['engulfing'] == 2), 2, 0))
count_sell_signals = df[df['signal'] == 2].shape[0]
print("Jumlah sinyal sell:", count_sell_signals)
df.columns = ['Local time', 'Open', 'High', 'Low', 'Close', 'Volume', 'signal']
df = df.iloc[100:200]
# print(df)
# def SIGNAL():
# return df.signal
class ChubbStrategy(Strategy):
def init(self):
super().init()
self.signal1 = self.I(lambda: df['signal'])
def next(self):
super().next()
if self.signal1==2:
sl1 = self.data.Close[-1] - 750e-4
tp1 = self.data.Close[-1] + 600e-4
self.buy(sl=sl1, tp=tp1)
elif self.signal1==1:
sl1 = self.data.Close[-1] + 750e-4
tp1 = self.data.Close[-1] - 600e-4
self.sell(sl=sl1, tp=tp1)
bt = Backtest(df, ChubbStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
|
c00b4a02ddf2b26acb2cb4268d7df430
|
{
"intermediate": 0.4325147569179535,
"beginner": 0.2997351884841919,
"expert": 0.2677500247955322
}
|
48,238
|
this is my function:
def get_directories(url):
    """
    Retrieves a list of directories from the given URL.
    """
    directories = []
    try:
        response = requests.get(url)
        soup = BeautifulSoup(response.text, 'html.parser')
        directories = [a.get_text() for a in soup.find_all('a') if a.get_text().endswith('/')]
    except Exception as e:
        print(f"Error fetching directories for {url}: {str(e)}")
    return directories
and I will use like this:
[get_directories(x) for x in rf["path"]]
where rf["path] has ~30k strings. how can I make this process to run the fastest way possible?
|
2dffe08541ca4a4b4683ecf086cd0835
|
{
"intermediate": 0.4417959153652191,
"beginner": 0.3881739377975464,
"expert": 0.17003017663955688
}
|
48,239
|
What's the difference between GPT-4 Turbo and the free version of ChatGPT on their site (GPT-3.5)?
|
9a85d3e1f6a8da8207b1b814828b8147
|
{
"intermediate": 0.30103275179862976,
"beginner": 0.24327252805233002,
"expert": 0.4556947648525238
}
|
48,240
|
What is a boundary in the Content-Type header?
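For context, a sketch of what the boundary does: in a multipart Content-Type (e.g. `multipart/form-data; boundary=XyZ123`), the boundary is the marker string separating the parts of the body. Each part starts with `--boundary` and the body ends with `--boundary--`. The helper below is illustrative, not from any specific library:

```python
def build_multipart(fields, boundary="XyZ123"):
    """Assemble a minimal multipart/form-data body from a dict of fields."""
    lines = []
    for name, value in fields.items():
        lines.append(f"--{boundary}")  # delimiter opening each part
        lines.append(f'Content-Disposition: form-data; name="{name}"')
        lines.append("")               # blank line separates headers from value
        lines.append(value)
    lines.append(f"--{boundary}--")    # closing delimiter ends the body
    return "\r\n".join(lines)

# Matching header:  Content-Type: multipart/form-data; boundary=XyZ123
```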
|
026243b03261c69ac4b0723d7e23106f
|
{
"intermediate": 0.303324431180954,
"beginner": 0.371018648147583,
"expert": 0.3256569504737854
}
|
48,241
|
Identify three literary devices O. Henry uses in his short story "The Last Leaf," and describe how they help create the tone of the story.
Remember:
-The tone is the way the author expresses their attitude through writing.
-An author's tone can remain the same or change throughout the story.
-Tone can be somber, happy, depressing, angry, exuberant, comic, serious, ironic, light, playful, etc.
Common Literary Devices:
-Metaphor: comparison of two unlike things
-Simile: comparison of two unlike things using "as" or "like"
-Personification: giving human qualities to inanimate objects
-Hyperbole: using overstatement for special effect
-Paradox: something that seems nonsensical or self-contradictory on the surface but on close examination expresses an underlying truth
Write at least 300 words.
Excerpt from "The Last Leaf" by O. Henry
Toward winter a cold stranger entered Greenwich Village. No one could see him. He walked around touching one person here and another there with his icy fingers. He was a bad sickness. Doctors called him Pneumonia. On the east side of the city he hurried, touching many people; but in the
narrow streets of Greenwich
Village he did not move so quickly.
Mr. Pneumonia was not a nice old gentleman. A nice old gentleman would not hurt a weak little woman from
California. But Mr. Pneumonia
touched Johnsy with his cold
fingers. She lay on her bed almost without moving, and she looked through the window at the wall of the house next to hers.
One morning the busy doctor spoke to Sue alone in the hall, where Johnsy could not hear.
"She has a very small chance," he said. "She has a chance, if she wants to live. If people don't want to live, I can't do much for them. Your little lady has decided that she is not going to get well. Is there something that is troubling her?"
"Paint! Not paint. Is there anything worth being troubled about? A man?"
"A man?" said Sue. "Is a man worth-No, doctor. There is not a man."
"It is weakness," said the doctor. "I will do all I know how to do. But when a sick person begins to feel that he's going to die, half my work is useless. Talk to her about new winter clothes. If she were interested in the future, her chances would be better."
After the doctor had gone, Sue went into the workroom to cry. Then she walked into Johnsy's room. She carried some of her painting materials, and she was singing.
Johnsy lay there, very thin and very quiet. Her face was
turned toward the window. Sue stopped singing, thinking that Johnsy was asleep. Sue began to work. As she
worked she heard a low
sound, again and again. She went quickly to the bedside. Johnsy's eyes were open wide. She was looking out the window and counting- counting back.
"Twelve," she said; and a little later, "Eleven"; and then, "Ten," and, "Nine"; and then, "Eight," and, "Seven," almost together.
Sue looked out the window. What was there to count? There was only the side wall of the next house, a short distance away. The wall had no window. An old, old tree
grew against the wall. The cold breath
of winter had already touched it. Almost all its leaves had fallen from its dark branches.
"What is it, dear?" asked Sue.
"Six," said Johnsy, in a voice still lower. "They're falling faster now. Three days ago there were almost a hundred. It hurt my head to count them. But now it's easy. There goes another one. There are only five now."
"Five what, dear? Tell your Sue."
"Leaves. On the tree. When the last one falls, I must go, too. I've known that for three days. Didn't the doctor tell you?"
"Oh, I never heard of such a
thing," said Sue. "It doesn't have any sense in it. What
does an old tree have to do with you? Or with your getting well? And you used to love that tree so much. Don't be a little fool. The doctor told me your chances for getting well. He told me this morning. He said you had very good chances! Try to eat a little now. And then I'll go back to work. And then I can sell my picture, and then I can buy something more for you to eat to make you strong."
"You don't have to buy anything for me," said Johnsy. She still looked out the window. "There goes another. No, I don't want anything to eat. Now there are four. I want to see the last one fall before night. Then I'll go, too."
"Johnsy, dear," said Sue, "will you promise me to close your eyes and keep them closed? Will you promise not to look out the window until I finish working? I must have this picture ready tomorrow. I
need the light; I can't cover the
window."
"Couldn't you work in the
other room?" asked Johnsy
coldly.
"I'd rather be here by you," said Sue. "And I don't want you to look at those leaves."
"Tell me as soon as you have finished," said Johnsy. She closed her eyes and lay white and still. "Because I want to see the last leaf fall. I have done enough waiting. I have done enough thinking. I want to go sailing down, down, like one of those leaves."
"Try to sleep," said Sue. "I must call Behrman to come up here. I want to paint a man in this picture, and I'll make him look like Behrman. I won't be gone a minute. Don't try to move till I come back."
Old Behrman was a painter who lived on the first floor of their house. He was past sixty. He had had no success as a painter. For forty years he had painted, without ever painting a good picture. He had always talked of painting a great picture, a masterpiece, but he had never yet started it.
He got a little money by letting others paint pictures of him. He drank too much. He still talked of his great masterpiece. And he believed that it was his special duty to do everything possible to help Sue and Johnsy.
Sue found him in his dark room, and she knew that he had been drinking. She could smell it. She told him about Johnsy and the leaves on the vine. She said that she was afraid that Johnsy would indeed sail down, down like the leaf. Her hold on the world was growing weaker.
| 59aac0728ee79d9ea5b43a6076c72749 | {"intermediate": 0.3937443494796753, "beginner": 0.4008992314338684, "expert": 0.2053564488887787} | 48,242 |
after installing payload cms with mongodb, i clicked seed my database and the seed successfully added a test db in my mongo db but i am getting the following error when i load the front end: PS C:\xampp\htdocs\LEMILL\lemill> npm run dev
> lemill@1.0.0 dev
> cross-env PAYLOAD_CONFIG_PATH=src/payload/payload.config.ts nodemon
[nodemon] 2.0.22
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): server.ts
[nodemon] watching extensions: js,ts
[nodemon] starting `ts-node --project tsconfig.server.json src/server.ts -- -I`
[04:44:49] INFO (payload): Connected to MongoDB server successfully!
[04:44:49] INFO (payload): Starting Payload...
[04:44:50] INFO (payload): Payload Admin URL: http://localhost:3000/admin
[04:44:58] INFO (payload): Starting Next.js...
[04:44:58] INFO (payload): Next.js App URL: http://localhost:3000
<i> [webpack-dev-middleware] wait until bundle finished: /__webpack_hmr
○ compiling /page ...
webpack built d58cae1f82f115f20e72 in 29837ms
webpack compiled successfully
✓ Compiled /page in 20.1s (786 modules)
⨯ src\app\_components\Categories\CategoryCard\index.tsx (22:45) @ url
⨯ TypeError: Cannot read properties of null (reading 'url')
at CategoryCard (./src/app/_components/Categories/CategoryCard/index.tsx:26:43)
20 | href="/products"
21 | className={classes.card}
> 22 | style={{ backgroundImage: `url(${media.url})` }}
| ^
23 | onClick={() => setCategoryFilters([category.id])}
24 | >
25 | <p className={classes.title}>{category.title}</p>
<w> [webpack.cache.PackFileCacheStrategy] Caching failed for pack: Error: No serializer registered for ConcatSource
<w> while serializing webpack/lib/util/registerExternalSerializer.webpack-sources/ConcatSource -> Array { 2 items } -> ConcatSource

File code for index.tsx:
'use client'
import React from 'react'
import Link from 'next/link'
import { Category } from '../../../../payload/payload-types'
import { useFilter } from '../../../_providers/Filter'
import classes from './index.module.scss'
type CategoryCardProps = {
category: Category
}
const CategoryCard = ({ category }: CategoryCardProps) => {
const media = category.media as Media
const { setCategoryFilters } = useFilter()
return (
<Link
href="/products"
className={classes.card}
style={{ backgroundImage: `url(${media.url})` }}
onClick={() => setCategoryFilters([category.id])}
>
<p className={classes.title}>{category.title}</p>
</Link>
)
}
export default CategoryCard
| e2e2c145b0da2fc9673e461e8f75d213 | {"intermediate": 0.39386358857154846, "beginner": 0.4371315538883209, "expert": 0.16900482773780823} | 48,243 |
import requests
import random,string
from requests_toolbelt import MultipartEncoder
fields = {
'file': ('test.png', your_data, "image/png"),
'file_id': "0"
}
boundary = '----WebKitFormBoundary' \
+ ''.join(random.sample(string.ascii_letters + string.digits, 16))
m = MultipartEncoder(fields=fields, boundary=boundary)
headers = {
"Host": "xxxx",
"Connection": "keep-alive",
"Content-Type": m.content_type
}
req = requests.post('https://xxxx/api/upload', headers=headers, data=m)
but i also need this
payload = {
"task": "post",
"board": board,
"thread": thread,
"usercode": '',
'code':'',
"captcha_type": "2chcaptcha",
"email": '',
"comment": comment,
"oekaki_image": '',
"oekaki_metadata": '',
"2chcaptcha_value": captcha_answer,
"2chcaptcha_id": captcha_id,
'makaka_id': '',
'makaka_answer': '',
}
| f6d8eb28057b9901467543ad2b7704e4 | {"intermediate": 0.40082359313964844, "beginner": 0.33882299065589905, "expert": 0.2603534162044525} | 48,244 |
Hi there, please be an expert SAPUI5 developer and answer my question with working code examples
| 2d7fe72a2d74c85e79e6e947dcd86bc8 | {"intermediate": 0.42310085892677307, "beginner": 0.40123802423477173, "expert": 0.1756611317396164} | 48,245 |
In this javascript I am displaying an image and text taken from an array called 'stationInfo'. Is there a way to ensure that the image is displayed but the text is hidden, and only displays on mouseover of the div called 'ghostinfo'? - 'var Esri_WorldStreetMap = L.tileLayer('https://server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer/tile/{z}/{y}/{x}', {
attribution: 'Tiles © Esri'
});
var Esri_WorldImagery = L.tileLayer('https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}', {
attribution: 'Tiles © Esri'
});
var map = L.map('map', {
center: [25, 0],
zoom: 2,
layers: [Esri_WorldStreetMap]
});
var baseMaps = {
"Road Map": Esri_WorldStreetMap,
"Satellite": Esri_WorldImagery
};
var layerControl = L.control.layers(baseMaps).addTo(map);
var strand = L.latLng(45.431019, 12.334823);
var down = L.latLng(48.8549502, 2.3468266);
var museum = L.latLng(41.8902102, 12.4922309);
var york = L.latLng(51.4994794, -0.1248092);
var brompton = L.latLng(51.0649702, -1.7971855);
var william = L.latLng(48.9402087, 2.2528139);
var mark = L.latLng(43.078697, -79.0763802);
var marys = L.latLng(40.8422517, -73.9304704);
var kent = L.latLng(51.506298, -0.120496);
var marl = L.latLng(50.9190331, 14.0572688);
var molle = L.latLng(41.935556, 12.466944);
var jatte = L.latLng(48.8948193, 2.2664345);
var fuji = L.latLng(35.3628299, 138.7307859);
var five = L.latLng(40.7143643, -74.0005577);
var granada = L.latLng(37.1768988, -3.5897378);
var bruton = L.latLng(37.2713798, -76.7025813);
var warwick = L.latLng(52.2794138, -1.5845975);
var huis = L.latLng(52.0930774, 4.3438645);
var washington = L.latLng(40.7307828, -73.9973284);
var molo = L.latLng(45.4334887, 12.3405578);
var porte = L.latLng(49.2439852, 1.2544189);
var matterhorn = L.latLng(45.976576, 7.6584719);
var center = L.latLng(39.9524397, -75.1637497);
var reine = L.latLng(43.5648263, 4.1942591);
var stage = L.latLng(42.6052657, -70.6781733);
var berlin = L.latLng(52.5170764, 13.4115749);
var rialto = L.latLng(45.4379897, 12.3359142);
var lady = L.latLng(51.2046739, 3.2244486);
var ely = L.latLng(52.3986137, 0.2638014);
var italiens = L.latLng(48.8713205, 2.3364162);
var etretat = L.latLng(49.7073109, 0.1935979);
var parth = L.latLng(37.9715219, 23.7266424);
var padua = L.latLng(45.4098414, 11.8927044);
var munich = L.latLng(48.1582611, 11.5033879);
var forum = L.latLng(40.7492886, 14.4847282);
var teatro = L.latLng(37.8523123, 15.292199);
var taj = L.latLng(27.1749757, 78.0421602);
var capitol = L.latLng(38.8898012, -77.0090292);
var marly = L.latLng(48.8804217, 2.1108704);
var egmond = L.latLng(52.6216328, 4.6545006);
var erupt = L.latLng(40.8213076, 14.4263522);
var hart = L.latLng(51.5589021, 13.0089165);
var tour = L.latLng(52.3720105, 4.905642);
var rouen = L.latLng(49.437191, 1.0913427);
var pirna = L.latLng(50.9615451, 13.9440107);
var sitka = L.latLng(57.0500595, -135.3350632);
var coto = L.latLng(-0.684064, -78.4367661);
var antwerp = L.latLng(51.2203574, 4.4015973);
var windsor = L.latLng(51.483987, -0.60431);
var rock = L.latLng(36.1441334, -5.3421144);
var florence = L.latLng(43.767922, 11.2531284);
var custom = L.latLng(55.9480583, -4.7509405);
var brooklyn = L.latLng(40.6730128, -73.9845876);
var alkmaar = L.latLng(52.632614, 4.743932);
var ghent = L.latLng(51.21264302630933, 3.2404485616302026);
var golden = L.latLng(55.94950293449235, -3.190887963219886);
var ruins = L.latLng(49.3469489361878, 2.9801644318251);
var ranger = L.latLng(40.70613429891641, -73.99669263862219);
var santa = L.latLng(43.76901934731335, 11.250309358895594);
var old = L.latLng(38.97806598531773, -76.48972283733242);
var founding = L.latLng(-33.859206443817655, 151.2115517335217);
var compton = L.latLng(50.472513544702124, -3.6002784478031242);
var eichhorn = L.latLng(49.25689, 16.462614);
var saint = L.latLng(43.92801546810641, 2.1447380095308684);
var notre = L.latLng(48.85302281283721, 2.349916971036055);
var manhattan = L.latLng(40.70717484316477, -73.99054307782178);
var karls = L.latLng(48.198283497151444, 16.37189996806785);
var persen = L.latLng(48.19003718618461, 15.075604058841773);
var wells = L.latLng(51.21035620175577, -2.64347990529991);
var bright = L.latLng(42.04637925090316, -74.11910363873254);
var oxford = L.latLng(51.75279324881308, -1.2536998938233277);
var prague = L.latLng(50.08653902704181, 14.411286392729664);
var bayisland = L.latLng(-35.26174362306738, 174.12125725381634);
var rathaus = L.latLng(47.918124059145576, 13.799400670121077);
var damplatz = L.latLng(52.373127508047965, 4.892463792419649);
var segovia = L.latLng(40.948025956381244, -4.117859686583211);
var vyborg = L.latLng(60.71570989449037, 28.72858365963885);
var cape = L.latLng(-33.925436093239625, 18.42390456546646);
var vilnius = L.latLng(54.678154606053894, 25.286912831678286);
var princes = L.latLng(55.95201455069297, -3.1963202185542468);
var trakai = L.latLng(54.652374680648784, 24.933651103827362);
var palace = L.latLng(52.21505051523167, 21.03580040216378);
var alma = L.latLng(48.86350018067251, 2.3017561919601035);
var spui = L.latLng(52.36827355674235, 4.888875532887837);
var caer = L.latLng(53.13933477642847, -4.276869915968849);
var bow = L.latLng(51.52878304978209, -0.016678761487461734);
var quila = L.latLng(28.608841973461026, 77.2422407564747);
var concorde = L.latLng(37.2896610051625, 13.59205892142171);
var chase = L.latLng(52.58979668480128, -8.870988391690116);
var slottet = L.latLng(50.352737788358674, 7.1798328365536825);
var chaff = L.latLng(49.97604207718476, 9.141663219805931);
var lilles = L.latLng(52.72720592344428, -2.3737330414142708);
var scharf = L.latLng(51.12521481341898, 13.527553439015197);
var marys = L.latLng(50.061604148424856, 19.939603944429326);
var spinner = L.latLng(48.17101144093542, 16.350696456061456);
var grote = L.latLng(52.48589679530798, 4.658556305286563);
var harf = L.latLng(49.50711206547648, 0.19991185150812835);
var oude = L.latLng(52.37438453232728, 4.898123738466199);
var christ = L.latLng(51.279307787801635, 1.0811634973173927);
var pope = L.latLng(51.442106701050115, -0.3313032507828993);
var bavo = L.latLng(51.05298992949669, 3.727187284689531);
var bogor = L.latLng(-6.598023458543918, 106.79732164413238);
var harm = L.latLng(52.0912830939025, 4.96456573829366);
var doge = L.latLng(45.434024629263035, 12.340314304893045);
var durham = L.latLng(54.773398131519045, -1.5762442807490802);
var august = L.latLng(51.055302583979135, 13.73955816351706);
var memlook = L.latLng(30.023514425961885, 31.25934967133141);
var oxford = L.latLng(51.75280785454802, -1.2510336604787498);
var lincoln = L.latLng(53.23428392889462, -0.5361550782144268);
var pantheon = L.latLng(41.89858514019062, 12.476838604931554);
var singel = L.latLng(52.37220071425841, 4.88848065222617);
var moret = L.latLng(48.372607507427574, 2.8190656931110123);
var briar = L.latLng(47.6405648765318, 2.7408497437601294);
var rotund = L.latLng(48.88346086629913, 2.3695714251083335);
var bocca = L.latLng(41.88873461210782, 12.480747969615766);
var port = L.latLng(40.8097656213272, 14.33455477753072);
var rat = L.latLng(47.918123436054714, 13.79941299512479);
var rue = L.latLng(48.86593251830277, 2.3278987624296614);
var cambridge = L.latLng(52.20514181176664, 0.11736769816618857);
var rainy = L.latLng(44.974793223471416, -93.2803792073738);
var haarlem = L.latLng(52.38143591618554, 4.635011559887189);
var brew = L.latLng(52.382282448242506, 4.6403116918953184);
var grapes = L.latLng(51.45021853757792, -0.8565816764646562);
var lviv = L.latLng(49.839268362799345, 24.034193266624985);
var plac = L.latLng(52.24889324994805, 21.007019103549943);
var myse = L.latLng(52.215511753494795, 21.03829430859862);
var amalfi = L.latLng(40.634329093808205, 14.602595196584474);
var llyon = L.latLng(45.75685716685581, 4.838035099192827);
var navona = L.latLng(41.89893660267209, 12.473066386221028);
var asch = L.latLng(49.97605645207857, 9.141627661863916);
var binn = L.latLng(52.07960016844697, 4.313337430369342);
var bello = L.latLng(52.23870761431316, 21.016797327746676);
var column = L.latLng(52.247247395461706, 21.01340040663948);
var prou = L.latLng(48.86243121097536, 2.344599039832925);
var admiral = L.latLng(-34.1898457213882, 18.426505741689887);
var geyser = L.latLng(44.463674562161295, -110.8364805492947);
var kastel = L.latLng(55.691550370464846, 12.594609947234545);
var akard = L.latLng(32.77909835936005, -96.79927957199654);
var roros = L.latLng(62.57729544765129, 11.387894825786224);
var monet = L.latLng(48.85952977780359, 2.341199888871341);
var delft = L.latLng(52.012606093238105, 4.35586516407528);
const station = document.getElementById("station");
const myDiv = document.getElementById("my-div");
const userMarkers = []; // Array to store user-added markers
const nextButton = document.createElement("button");
nextButton.innerText = "Next Painting";
nextButton.id = "buttonsdiv";
nextButton.disabled = true;
nextButton.className = "my-button";
const submitButton = document.createElement("subbutton");
submitButton.innerText = "Submit";
submitButton.id = "buttonsdiv";
submitButton.disabled = true;
submitButton.className = "my-button";
let totalDistance = 0; // Keep track of accumulated distance
let roundDistances = []; // Array to store distance for each round
// Custom user marker icon
const LeafIcon = L.Icon.extend({
options: {
iconSize: [30, 41],
iconAnchor: [15, 40],
},
});
const greenIcon = new LeafIcon({
iconUrl:
"https://cdn.glitch.global/f4cb3da3-e38f-4c57-8538-cd16160b85b3/_265a5cce-4ba0-4c1c-a76f-a7d5f00d8ea0-removebg-preview%20(1).png?v=1705859478915",
});
// Function to run the game with remaining reference points
function generateAndPlay(remainingPoints) {
if (remainingPoints.length === 0) {
// Fit the map to the user-added markers
const bounds = new L.LatLngBounds();
//fitmapbounds ro userMarker
userMarkers.forEach(function (markerLatLng) {
bounds.extend(markerLatLng);
});
map.fitBounds(bounds);
//remove round 5 picture
ghostinfo.innerHTML = "";
// Add the "Play Again" button
const playAgainButton = document.createElement("button");
playAgainButton.id = "playAgain";
playAgainButton.innerText = "Play Again";
playAgainButton.className = "my-button";
// Add click event listener to the button
playAgainButton.addEventListener("click", function () {
//cheating by reloading the browser. Instead this function should reset all variables and remove markers from map
location.reload();
});
document.getElementById("playagain").appendChild(playAgainButton);
// save personal best scores
const personalBest = localStorage.getItem("personalBest");
if (personalBest === null || totalDistance < parseFloat(personalBest)) {
// If personalBest is not set or the current totalDistance is less than the stored personalBest
localStorage.setItem("personalBest", totalDistance.toFixed(2));
}
//display game score
loop(); // ready for some fireworks!
station.innerHTML = `Well done!<br><br> You've found all the paintings! <br><br>
${roundDistances
.map((distance, index) => `Round ${index + 1}: ${distance.toFixed(2)} miles`)
.join("<br>")}<br>
<br>Total distance: ${totalDistance.toFixed(2)} miles.<br>
Personal Best: ${localStorage.getItem("personalBest")} miles.`;
document
.getElementById("station")
.animate(
[
{ transform: "rotate(-10deg)" },
{ transform: "rotate(10deg)" },
{ transform: "rotate(-10deg)" },
{ transform: "rotate(10deg)" },
{ transform: "rotate(-10deg)" },
{ transform: "rotate(10deg)" },
],
{
duration: 1000,
iterations: 1,
}
);
return;
}
const randomIndex = Math.floor(Math.random() * remainingPoints.length);
const referencePoint = remainingPoints[randomIndex];
const roundNumber = Math.ceil(5 - remainingPoints.length + 1); // Calculate round number
station.innerHTML = `Round ${roundNumber}: Drop a marker on the location of ${locationNames[referencePoint]}.<br>`;
ghostinfo.innerHTML = `${stationInfo[referencePoint]}<br>`;
map.off("click"); // Remove previous click event listener
// Function to create the midpoint variable
function createMidpoint(markerLatLng, referencePointLatLng) {
const markerLat = markerLatLng.lat;
const markerLng = markerLatLng.lng;
const referencePointLat = referencePointLatLng.lat;
const referencePointLng = referencePointLatLng.lng;
// Calculate the midpoint's latitude and longitude
const midpointLat = (markerLat + referencePointLat) / 2;
const midpointLng = (markerLng + referencePointLng) / 2;
// Create the midpoint L.latLng object
const midpoint = L.latLng(midpointLat, midpointLng);
return midpoint;
}
var userMarker;
map.on("click", function (e) {
myDiv.innerHTML =
"Click again to change location & click Submit when you are happy";
// Add user marker to the array
if (userMarker) {
map.removeLayer(userMarker); // Remove the previous marker
}
userMarker = L.marker(e.latlng).addTo(map); // Add the new marker
userMarkers.push(userMarker.getLatLng());
//add submitbutton
document.getElementById("buttonsdiv").appendChild(submitButton);
submitButton.onclick = function () {
const marker = L.marker(e.latlng).addTo(map);
const distance = L.latLng(e.latlng).distanceTo(referencePoint);
map.off("click");
// Create a bounds object encompassing both markers
const bounds = L.latLngBounds([e.latlng, referencePoint]);
// Zoom the map to fit those bounds
map.fitBounds(bounds);
//remove submit button and add next painting button
document.getElementById("buttonsdiv").appendChild(nextButton);
document.getElementById("buttonsdiv").removeChild(submitButton);
// Convert meters to miles:
const distanceInMiles = distance * 0.000621371;
myDiv.innerHTML = `You clicked ${distanceInMiles.toFixed(2)} miles from ${
locationNames[referencePoint]
}`;
// Create the midpoint variable and display message
const midpoint = createMidpoint(e.latlng, referencePoint);
const popup = L.popup()
.setLatLng(midpoint)
.setContent(
distanceInMiles < 0.5
? "Perfect"
: distanceInMiles < 2
? "Very Good"
: distanceInMiles < 10
? "At least you got the right city"
: distanceInMiles < 100
? "Close - ish"
: "Way off!" // Default message for distances 100 miles or more
)
.openOn(map);
// Update total distance with clicked marker's distance
totalDistance += distanceInMiles;
roundDistances.push(distanceInMiles); // Add distance to the roundDistances array
// connect user marker to correct location
const polyline = L.polyline([e.latlng, referencePoint], {
color: "black",
}).addTo(map);
// Put marker on correct location
const stationMarker = L.marker(referencePoint, { icon: greenIcon }).addTo(
map
);
// Remove the used reference point from the remaining pool
remainingPoints.splice(randomIndex, 1);
};
});
// Enable next button when a new game round starts
nextButton.disabled = false;
// Handle next button click
nextButton.onclick = function () {
//remove popup message
map.closePopup();
// Change button text to "Results" on the fifth question
if (roundNumber === 4) {
nextButton.innerText = "Results";
}
//remove next button and add submit painting button
document.getElementById("buttonsdiv").removeChild(nextButton);
map.setView([25, 0], 2);
document
.getElementById("station")
.animate(
[
{ transform: "translateX(-3px)" },
{ transform: "translateX(3px)" },
{ transform: "translateX(-3px)" },
{ transform: "translateX(3px)" },
{ transform: "translateX(-3px)" },
{ transform: "translateX(3px)" },
],
{
duration: 1000,
iterations: 1,
}
);
generateAndPlay(remainingPoints);
myDiv.innerHTML = "Click on the map";
};
}
// Use map to determine location name according to the chosen reference point
const locationNames = {
[strand]: "View of the Grand Canal",
[down]: "A View of Paris with the Ile de la Cité",
};
const stationInfo = {
[strand]:
'<img src="https://cdn.glitch.global/f4cb3da3-e38f-4c57-8538-cd16160b85b3/ef1f442c-24bf-40c8-a82b-e89b71c66ecf_3000.jpg?v=1704458402437" onclick="this.requestFullscreen()" class="center" alt="View of the Grand Canal" width="400"> View of the Grand Canal: Santa Maria della Salute and the Dogana from Campo Santa Maria Zobenigo (c1743) by Bernardo Bellotto. <br><br> From the <a href=\'https://www.getty.edu/art/collection/object/103RJP#full-artwork-details\'>Getty\'s Collection Online</a>.',
[down]:
'<img src="https://cdn.glitch.global/f4cb3da3-e38f-4c57-8538-cd16160b85b3/ef390b6c-6021-4738-92fe-2338c38cc2af_3000.jpg?v=1704459300215" onclick="this.requestFullscreen()" class="center" alt="A View of Paris with the Ile de la Cité" width="400"> A View of Paris with the Ile de la Cité (c1763) by Jean-Baptiste Raguenet. <br><br> From the <a href=\'https://www.getty.edu/art/collection/object/103RBA\'>Getty\'s Collection Online</a>.',
};
// Start the game with all reference points
function toPlay() {
playbutton.remove();
const shuffledEntries = [
strand,
down,
]
//select 5 random pictures
.slice()
.sort(() => Math.random() - 0.5); // Shuffle using Fisher-Yates
const randomEntries = shuffledEntries.slice(0, 5);
generateAndPlay(randomEntries);
myDiv.innerHTML = "Click on the map";
}
function addMarkers(map) {
var markers = [
strand,
manhattan,
];
for (var i = 0; i < markers.length; i++) {
var marker = L.marker(markers[i], {
icon: greenIcon,
referencePoint: markers[i]
});
marker.addTo(map).on('click', function() {
var markerKey = this.options.referencePoint;
var correctContent = stationInfo[markerKey];
document.getElementById('ghostinfo').innerHTML = correctContent + '<br>';
});
}
}
var mapSequence = [];
document.addEventListener("keydown", function (event) {
mapSequence.push(event.key);
if (mapSequence.length === 3 && mapSequence.join("") === "map") {
event.preventDefault();
mapSequence = [];
addMarkers(map);
} else if (mapSequence.length > 3) {
mapSequence = [];
}
});
'
| 4d3a97c6f6f738ac7399a3653718e473 | {"intermediate": 0.2894408106803894, "beginner": 0.356118768453598, "expert": 0.3544404208660126} | 48,246 |
i need to download a file using aria2c in jupyter notebook. the download link = https://civitai.com/api/download/models/403131?type=Model&format=SafeTensor&size=full&fp=fp16
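One wrinkle worth flagging: the URL contains `&` and `?`, so in a notebook shell escape (`!aria2c …`) it must be wrapped in quotes or the shell will split it at `&`. A sketch; the flags `-x`/`-s` (parallel connections) and the output name `model.safetensors` are my own choices:

```python
import shlex

url = ("https://civitai.com/api/download/models/403131"
       "?type=Model&format=SafeTensor&size=full&fp=fp16")

# In a Jupyter cell this can be run as a shell escape, e.g.:
#   !aria2c -x 16 -s 16 -o model.safetensors "{url}"
# Passing the command as a list to subprocess.run avoids shell quoting entirely:
cmd = ["aria2c", "-x", "16", "-s", "16", "-o", "model.safetensors", url]

# shlex.join shows the safely quoted single-string form of the same command.
quoted = shlex.join(cmd)
```

In IPython, `{url}` inside a `!` line is expanded from the Python variable, which is why the quotes around it matter there too.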
| 2bcdc50236756d01536d8a22fde74860 | {"intermediate": 0.29082682728767395, "beginner": 0.14985540509223938, "expert": 0.5593177676200867} | 48,247 |
you are a crypto trader expert. what are the most profitable coins in the next year?
| dd386f38416a4ac94146ff99e4286878 | {"intermediate": 0.3755163848400116, "beginner": 0.2706845998764038, "expert": 0.35379907488822937} | 48,248 |
How to create step by step a real-time charting and analysis of cryptocurrency markets like trading view in python?
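Whichever plotting layer is chosen (Plotly, mplfinance, a websocket front end), the data side of a TradingView-style live chart reduces to a bounded rolling window that each new tick is appended to before redrawing. A minimal sketch of that core, with synthetic prices standing in for a real exchange feed:

```python
from collections import deque

class RollingSeries:
    """Keeps the most recent `maxlen` (timestamp, price) ticks for redrawing."""

    def __init__(self, maxlen=500):
        self.ticks = deque(maxlen=maxlen)

    def append(self, ts, price):
        self.ticks.append((ts, price))

    def xy(self):
        # Split into the x (time) and y (price) lists a chart library expects.
        if not self.ticks:
            return [], []
        xs, ys = zip(*self.ticks)
        return list(xs), list(ys)

series = RollingSeries(maxlen=3)
for t, p in enumerate([100.0, 101.5, 99.8, 102.2]):
    series.append(t, p)

xs, ys = series.xy()  # only the 3 most recent ticks survive
```

A real loop would poll or subscribe to an exchange API, call `append`, and hand `xy()` to the chart's update function on a timer.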
| de4c80aa1baefdcd60eb0105c6a08d19 | {"intermediate": 0.5471646785736084, "beginner": 0.07663106173276901, "expert": 0.3762042820453644} | 48,249 |
Help me understand the following code, which is delimited by triple quotes and is as follows:
'''
import tkinter as tk
from tkinter import simpledialog
def add(a, b):
return(a + b)
root = tk.Tk()
root.withdraw() # Hide the main window
x = simpledialog.askinteger("Input", "Enter the first number", parent=root)
y = simpledialog.askinteger("Input", "Enter the second number", parent=root)
result = add(x, y)
tk.messagebox.showinfo("Result", f"The result is {result}")
root.mainloop() # Start the event loop
'''
Here is some additional context:
'''
-I am a beginner who is learning Python
-This is my first project
-I want to begin building my intuition for coding through this project
-My goal is to fully understand what is going on in that code, and the underlying principles and or rules which I will need to know as I progress forward and learn more to improve my skills with Python.
'''
Your response should be tailored to my particular situation, goals, and knowledge level.
| 6f20d5e1277e2ff21b2c4690d6b1c6d4 | {"intermediate": 0.39430978894233704, "beginner": 0.43555748462677, "expert": 0.17013263702392578} | 48,250 |
write a python code for a real-time charting and analysis of cryptocurrency markets like trading view
| f5bf41f7fde636eb1892a5e445459f2b | {"intermediate": 0.4430326819419861, "beginner": 0.10469849407672882, "expert": 0.4522688388824463} | 48,251 |
I have a site with the following structure:
db.php - code that connects to the database
index.php - logic for what is shown on the main page
login.php - code that checks whether the account exists in the database and performs the login logic
register.php - code that performs the registration logic and writes the user's data to the database
Here is the code of the pages:
db.php:
<?php // This code connects to the database
$servername = "localhost";
$username = "root";
$password = "";
$dbname = "registerUser";
$conn = mysqli_connect($servername, $username, $password, $dbname);
if(!$conn){
die("Connection to db failed" . mysqli_connect_error());
} else {
echo "Success";
} ?>
index.php:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<form action="register.php" method="post">
<input type="text" placeholder="login" name="login">
<input type="text" placeholder="password" name="pass">
<input type="text" placeholder="repeat password" name="repeatpass">
<input type="text" placeholder="email" name="email">
<button type="submit">Зарегестрироваться</button>
</form>
<form action="login.php" method="post">
<input type="text" placeholder="login" name="login">
<input type="text" placeholder="password" name="pass">
<button type="submit">Войти</button>
</form>
</body>
</html>
login.php:
<?php // This code checks whether the account exists in the database and performs the login logic
require_once('db.php');
$login = $_POST['login'];
$pass = $_POST['pass'];
if (empty($login) || empty($pass))
{
echo "Заполните все поля";
} else {
$sql = "SELECT * FROM `users` WHERE login = '$login' AND pass = '$pass'";
$result = $conn->query($sql);
if ($result->num_rows > 0)
{
while($row = $result->fetch_assoc()){
echo "Добро пожаловать " . $row['login'];
}
} else {
echo "Нет такого пользователя";
}
}
register.php:
<?php // This code performs the registration logic and writes the user's data to the database
require_once('db.php');
$login = $_POST ['login'];
$pass = $_POST ['pass'];
$repeatpass = $_POST ['repeatpass'];
$email = $_POST ['email'];
if (empty($login) || empty($pass) || empty($repeatpass) || empty($email)){
echo "Заполните все поля";
} else
{
if($pass != $repeatpass){
echo "Несоответствие паролей";
} else
{
$sql = "INSERT INTO `users` (login, pass, email) VALUES ('$login', '$pass', '$email')";
if($conn -> query($sql) === TRUE){
echo "Успешная регистрация";
}
else {
echo "Ошибка: " .$conn->error;
}
}
}
Help me develop profile.php, where the user can manage their profile details. Answer with code.
| 0fd94a1dd86cddc33fdf87ef166cf25b | {"intermediate": 0.2825857698917389, "beginner": 0.5682012438774109, "expert": 0.1492130160331726} | 48,252 |
I'm writing a backend using Rust to interact with a Solana on-chain program. The on-chain program is written with the Anchor framework, and each method in it has a ctx: Context parameter, so how can I call it from my backend? I'm stuck.
| 260b84ac023d9d2ee8eac2f9ff222e8c | {"intermediate": 0.7660755515098572, "beginner": 0.09804768115282059, "expert": 0.13587680459022522} | 48,253 |
I send a POST request using the python requests lib.
These are the headers that I sent:
headers = {
'Authority': '2ch.hk',
# 'Method': 'POST',
'path':'user/posting?nc=1',
'scheme':'https',
'Accept': self.headers.get_accept(),
'Accept-Encoding': self.headers.get_encoding(),
'Accept-Language': self.headers.get_accept_language(),
# 'Content-Type': content_type,
'Origin': 'https://2ch.hk',
'Referer': f'https://2ch.hk/{self.board}/res/{self.thread_id}.html',
'Sec-Ch-Ua': self.headers.get_sec_ch_ua(),
'Sec-Ch-Ua-Mobile': '?0',
'Sec-Ch-Ua-Platform': self.headers.get_sec_ch_ua_platform(),
'Sec-Fetch-dest': 'empty',
'Sec-Fetch-mode': 'cors',
'Sec-Fetch-site': 'same-origin',
'User-Agent': self.headers.get_user_agent(),
'X-Requested-With': 'XMLHttpRequest'}
but they are being sent as:
{
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36",
"Accept-Encoding": "gzip, deflate",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Connection": "keep-alive",
"Authority": "2ch.hk",
"path": "user/posting?nc=1",
"scheme": "https",
"Accept-Language": "en-US,ru;q=0.8",
"Origin": "https://2ch.hk",
"Referer": "https://2ch.hk/fag/res/22202501.html",
"Sec-Ch-Ua": "Google Chrome;v=\"122\", \"Chromium\";v=\"122\", \";Not A Brand\";v=\"24\"",
"Sec-Ch-Ua-Mobile": "?0",
"Sec-Ch-Ua-Platform": "Windows",
"Sec-Fetch-dest": "empty",
"Sec-Fetch-mode": "cors",
"Sec-Fetch-site": "same-origin",
"X-Requested-With": "XMLHttpRequest",
"Content-Type": "multipart/form-data; boundary=----WebKitFormBoundaryHeLs1G4S5blwXjYI",
"Cookie": "_ga=GA1.2.1960985071.1714299084; _ga_7NPYTX0FY3=GS1.2.1714299084.1.1.1714299095.0.0.0; _gat=1; _gid=GA1.2.1742735091.1714299084;
cf_clearance=VWownfE_LQ.N1qpLC3Z1i072XESRo2N3Mh6Cr1xc0S4-1714299084-1.0.1.1-B8cw7bN5LbrWubftmPj2VxTtpcE5Qb1i_.TH3tH1H2BvhWJgFpFKadEa5YkPHLkf42Lhx
OXBCFtg0bOYxMmBTg; gnezdo_uid=uZQlT2YuIMx8DzwvA9E4Ag==",
"Content-Length": "1809"
how do I send them in exactly the order that I set them?
| 95729c406479dd6df777235bac4c51a6 | {"intermediate": 0.30290254950523376, "beginner": 0.5067378878593445, "expert": 0.19035950303077698} | 48,254 |
The running time of Radix sort is effectively independent of whether the input is already sorted. True or false?
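True: LSD radix sort runs the same fixed sequence of stable counting passes (one per digit of the maximum key) no matter how the input is arranged, so a presorted input saves nothing. A minimal base-10 sketch that counts its passes:

```python
def radix_sort(nums):
    """LSD radix sort for non-negative ints; pass count is fixed by the max key."""
    if not nums:
        return [], 0
    out, exp, passes = list(nums), 1, 0
    while max(nums) // exp > 0:
        buckets = [[] for _ in range(10)]
        for n in out:                       # stable distribution by current digit
            buckets[(n // exp) % 10].append(n)
        out = [n for bucket in buckets for n in bucket]
        exp *= 10
        passes += 1
    return out, passes

sorted_out, p1 = radix_sort([3, 18, 271, 5, 42])   # scrambled input
_, p2 = radix_sort([3, 5, 18, 42, 271])            # already sorted input
# p1 == p2: both inputs cost the same three digit passes
```

Comparison sorts like insertion sort do speed up on sorted input; radix sort's work depends only on n and the number of digits.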
| aff026b61c22da6b72a015b0a5a0e784 | {"intermediate": 0.3060157001018524, "beginner": 0.14518052339553833, "expert": 0.5488037467002869} | 48,255 |
I am working with the unity game engine and I am wondering what is the default string key for Input middle mouse button
| d7018294ef1115c371c0c229f263a0b8 | {"intermediate": 0.356862872838974, "beginner": 0.3578234016895294, "expert": 0.2853137254714966} | 48,256 |
import { NavigationContainer } from "@react-navigation/native";
import { MaterialIcons } from '@expo/vector-icons';
import React from "react";
import MyTabs from "./screens/Tabs";
import TopBar from "./screens/TopBar";
import { createNativeStackNavigator } from '@react-navigation/native-stack';
import Account from "./screens/Account";
import Dashboard from "./screens/Dashboard";
const Stack = createNativeStackNavigator();
export default function App() {
return (
<NavigationContainer>
<TopBar />
<MyTabs />
</NavigationContainer>
);
}
| 50dd67558e00a5f269f4eead053f0607 | {"intermediate": 0.42508769035339355, "beginner": 0.30531415343284607, "expert": 0.2695981562137604} | 48,257 |
Hi there, please be a senior sapui5 developer and answer my question with working code examples.
| fcceb3897b962c714dc20b27cc5c9790 | {"intermediate": 0.4102480113506317, "beginner": 0.2819591760635376, "expert": 0.3077927827835083} | 48,258 |
how to read an excel file
| 9d8ea374a276d7ba2da4609acdadd1c5 | {"intermediate": 0.27674517035484314, "beginner": 0.3021402657032013, "expert": 0.42111459374427795} | 48,259 |
how to get a history of this data: url = f'https://api.coingecko.com/api/v3/simple/price?ids={crypto_id}&vs_currencies=usd'
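`/simple/price` only returns the current quote; historical series come from CoinGecko's `/coins/{id}/market_chart` endpoint, where `days` controls the lookback. A sketch that just builds the URL (the actual fetch would be `requests.get(url).json()`):

```python
def market_chart_url(crypto_id: str, days: int, vs_currency: str = "usd") -> str:
    """Build the URL for CoinGecko's historical-price endpoint."""
    return (
        f"https://api.coingecko.com/api/v3/coins/{crypto_id}/market_chart"
        f"?vs_currency={vs_currency}&days={days}"
    )

url = market_chart_url("bitcoin", 30)
# requests.get(url).json() returns {"prices": [[timestamp_ms, price], ...],
# "market_caps": [...], "total_volumes": [...]}
```

The `"prices"` list of `[timestamp_ms, price]` pairs is exactly the shape the DataFrame-plotting snippet later in this dump consumes.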
| e1a85976357737a69806f92abcf7645a | {"intermediate": 0.5590550899505615, "beginner": 0.20415377616882324, "expert": 0.23679107427597046} | 48,260 |
i have 1600 game states, each is 512 numbers. how do I make a model that will predict all cheaters in the game?
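With 1600 unlabeled 512-number states, a supervised cheat classifier is hard to justify before labels exist; a reasonable first baseline is outlier scoring: flag states whose features sit far from the population mean. A pure-Python z-score sketch (the 2.5-sigma threshold and the 0.5 fraction are assumptions to tune):

```python
import math

def zscore_outliers(states, z_thresh=2.5, frac=0.5):
    """Flag state indices where more than `frac` of features exceed z_thresh."""
    n, d = len(states), len(states[0])
    means = [sum(s[j] for s in states) / n for j in range(d)]
    # Population std per feature; `or 1.0` avoids dividing by zero on constants.
    stds = [
        math.sqrt(sum((s[j] - means[j]) ** 2 for s in states) / n) or 1.0
        for j in range(d)
    ]
    flagged = []
    for i, s in enumerate(states):
        extreme = sum(abs(s[j] - means[j]) / stds[j] > z_thresh for j in range(d))
        if extreme / d > frac:
            flagged.append(i)
    return flagged

# Synthetic demo: nine ordinary states plus one wildly different "cheater".
normal = [[0.0] * 4 for _ in range(9)]
cheat = [[100.0] * 4]
flagged = zscore_outliers(normal + cheat)
```

Once some flagged states are hand-labeled, the same 512-feature vectors can feed a proper classifier; the outlier pass mainly bootstraps the labels.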
| eff3dfa2bc653317c591da7c0f5c6e2b | {"intermediate": 0.1915033906698227, "beginner": 0.18270234763622284, "expert": 0.6257942318916321} | 48,261 |
Create a full blown rigid body physics simulation where different shapes fall down due to gravity and hit a platform , the shapes should be able to collide with each other as well
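A full engine (mixed shapes, shape-to-shape impulse resolution, rotation) is a lot of code; the load-bearing pieces are a fixed-timestep integrator and a collision-response step. A minimal sketch of just gravity plus one circle bouncing on a static platform, with the restitution constant as an assumption; shape-to-shape collisions would build on the same step loop:

```python
GRAVITY = -9.81      # m/s^2, downward
RESTITUTION = 0.6    # fraction of speed kept on bounce (assumed constant)
PLATFORM_Y = 0.0     # top surface of the static platform

class Ball:
    """One circle under gravity; bounces off a horizontal platform."""

    def __init__(self, y, radius):
        self.y, self.vy, self.r = y, 0.0, radius

    def step(self, dt):
        # Semi-implicit Euler: update velocity first, then position.
        self.vy += GRAVITY * dt
        self.y += self.vy * dt
        # Collision response: push the ball out of the platform, reflect velocity.
        if self.y - self.r < PLATFORM_Y:
            self.y = PLATFORM_Y + self.r
            self.vy = -self.vy * RESTITUTION

ball = Ball(y=5.0, radius=0.5)
for _ in range(600):          # simulate 10 seconds at 60 steps per second
    ball.step(1 / 60)
```

Each extra shape would carry its own position/velocity plus a pairwise overlap check (circle-circle is just a distance test); 2D libraries like pymunk run exactly this loop with impulse solving on top.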
| 1d91cd2b5694abb931546671ad92908d | {"intermediate": 0.40452155470848083, "beginner": 0.3178565204143524, "expert": 0.27762192487716675} | 48,262 |
this.http.get(environment.url+"orders?",{
params:{
pageNo:pageNo,
pageSize:pageSize
}
}
I have this API, but I want to show these params in the browser URL
|
c48e6123f467d9ff1eeaedf56907ed78
|
{
"intermediate": 0.6883649230003357,
"beginner": 0.174725741147995,
"expert": 0.1369093656539917
}
|
48,263
|
make this interactive : # Convert to DataFrame
df = pd.DataFrame(data["prices"], columns=["timestamp", "price"])
df['date'] = pd.to_datetime(df['timestamp'], unit='ms')
# Plot using Plotly
fig = go.Figure()
fig.add_trace(go.Scatter(x=df['date'], y=df['price'], mode='lines+markers', name='Price'))
fig.update_layout(title=f'{crypto_id.capitalize()} Price (Last {days} Days)', xaxis_title='Date', yaxis_title='Price in USD')
# Show plot
fig.show()
|
c8047da4603c6f8c7f5efeff5b33de46
|
{
"intermediate": 0.38546499609947205,
"beginner": 0.2602955400943756,
"expert": 0.35423940420150757
}
|
48,264
|
write a grid search function for these NNs,
from keras.layers import LSTM, Input, Dense
X_train = X_train.reshape(5000, 32, 24)
X_test = X_test.reshape(1250, 32, 24)
model = keras.Sequential([
    Input(shape=(32, 24)),
    LSTM(128, return_sequences=True, activation='relu'),
    Dropout(0.2),
    LSTM(64, activation='relu'),
    Dropout(0.2),
    Dense(20, activation='relu'),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test),
          callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=4)], verbose=1)

from keras.layers import Conv1D, Input, Dense, MaxPooling1D, Flatten, Dropout
X_train = X_train.reshape(5000, 32, 24)
X_test = X_test.reshape(1250, 32, 24)
model = keras.Sequential([
    Input(shape=(32, 24)),
    Conv1D(32, 5, activation='relu'),
    Dropout(0.2),
    MaxPooling1D(2),
    Conv1D(32, 5, activation='relu'),
    Dropout(0.2),
    MaxPooling1D(2),
    Flatten(),
])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test),
          callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)], verbose=1)
|
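Keras models built this way don't plug directly into scikit-learn's `GridSearchCV`, so a manual loop over the parameter grid is the simplest route. A stdlib-only sketch of that loop — `evaluate` is a placeholder to be replaced with code that builds, compiles, and fits the model for the given parameters and returns validation accuracy:

```python
# Sketch of a manual grid search loop (framework-agnostic).
from itertools import product

grid = {
    "lstm_units": [64, 128],
    "dropout": [0.2, 0.4],
    "batch_size": [32, 64],
}

def evaluate(params):
    # Placeholder scorer so the loop runs; swap in real model training
    # that returns a validation metric for these hyperparameters.
    return -params["dropout"]

best_score, best_params = float("-inf"), None
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    score = evaluate(params)
    if score > best_score:
        best_score, best_params = score, params

print("best:", best_params, best_score)
```

The same loop covers both the LSTM and Conv1D variants by adding a `"model_type"` key to the grid and branching inside `evaluate`; with 50-epoch fits per combination, keeping the grid small (and using early stopping, as the records already do) matters.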
a25930fb37dd4fbe1889e795c83dd6da
|
{
"intermediate": 0.3099301755428314,
"beginner": 0.19990359246730804,
"expert": 0.4901663064956665
}
|
48,265
|
hi
|
757d85eb40587253f071ed0f9677c1c5
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
48,266
|
Please explain the following code: import { DEFAULT_CHATBOTS } from '../consts'
import { GradioBot } from './gradio'
export type BotName = string
export function createBotInstance(botName: string) {
const bot = DEFAULT_CHATBOTS.find(bot => bot.name === botName)
if (!bot) {
console.error('use defalt model');
}
return new GradioBot(bot?.url!)
}
export type BotInstance = ReturnType<typeof createBotInstance>
|
4ff9b0618f5e83dc1c9c9b1f4a16f277
|
{
"intermediate": 0.37428438663482666,
"beginner": 0.3528155982494354,
"expert": 0.2728999853134155
}
|
48,267
|
pls modify my function to print a histogram comparing each metric:
from sklearn.metrics import precision_recall_fscore_support
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
import numpy as np
def compare_classifiers(X_train, X_test, y_train, y_test,method='tfidf'):
classifiers_tfid = [
('Logisti Regression', LogisticRegression(random_state=42)),
('Gaussian Naive Bayes', MultinomialNB()),
('Rando Forest', RandomForestClassifier(n_estimators=100, random_state=42)),
('KNN', KNeighborsClassifier(n_neighbors=5)),
#('Neural Network', model),
]
classifiers_embeddings =[
('RNN', modelRNN),
('CNN', modelCNN),
('NN', modelNN),
('Logisti Regression', LogisticRegression(random_state=42)),
('Gaussian Naive Bayes', GaussianNB()),
('Rando Forest', RandomForestClassifier(n_estimators=100, random_state=42)),
('KNN', KNeighborsClassifier(n_neighbors=5)),
]
if method == 'tfidf':
classifiers = classifiers_tfid
else:
classifiers = classifiers_embeddings
results = {}
for name, clf in classifiers:
if name == 'RNN' or name == 'CNN' or name == 'NN':
if method == 'tfidf':
continue
else:
y_pred = train_nn(clf,X_train, y_train, X_test, y_test)
else:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
precision, recall, fscore, _ = precision_recall_fscore_support(y_test, y_pred, average='weighted')
if name not in results:
results[name] = {'precision': [], 'recall': [], 'fscore': []}
results[name]['precision'].append(precision)
results[name]['recall'].append(recall)
results[name]['fscore'].append(fscore)
return results
# Initialize a dictionary to hold all results
def compare_seeds(train_test_sets, method='tfidf'):
all_seeds_results = {}
for i, (X_train, X_test, y_train, y_test) in enumerate(train_test_sets):
print(f"---- Random Seed: {i} ----")
if method == 'tfidf':
X_train, X_test = tfidf_features(X_train, X_test)
else:
X_train, X_test = embeddings_features(X_train, X_test)
seed_results = compare_classifiers(X_train, X_test, y_train, y_test,method)
for clf_name, metrics in seed_results.items():
if clf_name not in all_seeds_results:
all_seeds_results[clf_name] = {'precision': [], 'recall': [], 'fscore': []}
# Accumulate results
for metric_name, metric_values in metrics.items():
all_seeds_results[clf_name][metric_name] += metric_values
# Compute and print averages
for clf_name, metrics in all_seeds_results.items():
print(f'----- {clf_name} Average Metrics -----')
for metric_name, metric_values in metrics.items():
print(f'{metric_name.capitalize()} Avg: {np.mean(metric_values):.4f}')
print('\n')
|
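The `compare_classifiers` result dict above maps each classifier to lists of metric values, which is enough to drive a grouped bar chart. A sketch (matplotlib assumed available; the Agg backend keeps it headless; the demo dict mimics the result shape):

```python
# Sketch: grouped bar chart comparing averaged metrics per classifier.
# `results` has the shape compare_classifiers returns:
# {name: {'precision': [...], 'recall': [...], 'fscore': [...]}}.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_metric_histogram(results, path="metrics.png"):
    names = list(results)
    metrics = ["precision", "recall", "fscore"]
    means = np.array([[np.mean(results[n][m]) for m in metrics] for n in names])
    x = np.arange(len(names))
    width = 0.25
    fig, ax = plt.subplots(figsize=(8, 4))
    for i, m in enumerate(metrics):
        ax.bar(x + (i - 1) * width, means[:, i], width, label=m)  # one bar group per classifier
    ax.set_xticks(x)
    ax.set_xticklabels(names, rotation=20, ha="right")
    ax.set_ylim(0, 1)
    ax.legend()
    fig.tight_layout()
    fig.savefig(path)
    return means

demo = {"LogReg": {"precision": [0.9], "recall": [0.8], "fscore": [0.85]},
        "KNN":    {"precision": [0.7], "recall": [0.6], "fscore": [0.65]}}
print(plot_metric_histogram(demo))
```

Dropping this call at the end of `compare_seeds`, fed with `all_seeds_results`, would plot the per-seed averages the function currently only prints.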
278fbbf1855774463b94c3eb5c83e535
|
{
"intermediate": 0.38455167412757874,
"beginner": 0.4250328242778778,
"expert": 0.19041545689105988
}
|
48,268
|
Explain Iterator object in Python shortly
|
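In short: an iterator is any object implementing `__iter__` (returning itself) and `__next__` (returning the next value, raising `StopIteration` when exhausted); `for` loops and `list()` drive exactly this protocol. A minimal example:

```python
# Minimal iterator: __iter__ returns self, __next__ yields values
# until it raises StopIteration to signal exhaustion.
class CountDown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

print(list(CountDown(3)))  # → [3, 2, 1]
```

Note iterators are single-use: once exhausted, further `next()` calls keep raising `StopIteration`, which is why iterables (like lists) hand out a fresh iterator from each `__iter__` call.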
dc88100a93e5821eaa1b0056f0cc5324
|
{
"intermediate": 0.5745020508766174,
"beginner": 0.16799470782279968,
"expert": 0.2575032413005829
}
|
48,269
|
update invoices set TaxRate = (select (TotalDetails-extradiscountamount) *(SELECT TaxRate/100 FROM settings where id = 1 ) from invoices as inv where inv.id = invoices.id ) where invoices.Invoice1 in (8340,8518,8757,8259,8236,7783,8345,7447,7628,8608,8515,7742,8519,8029,8652,8622,8113,8327,8466,7640,8182,7495,8374,7811,7774,8403,8707,8122,7984,8372,7682,8254,8667,7500,7885,7886,8093,8261,8565,8572,8263,8121,8125,7525,8149,195,196,202,206,210,213,222,230,232,233,236,247,248,254,262,8620,284,286,287,8116,194,8089,8635,8730,212,288,7794,8732,8111,8764,8228,7874,8180,8440,8760,7613,8621,8112);
#1093 - You can't specify target table 'invoices' for update in FROM clause
What is this error and how can it be solved?
|
815752346380f55ad2c0a929fc8e5c7b
|
{
"intermediate": 0.31335416436195374,
"beginner": 0.36938655376434326,
"expert": 0.3172592520713806
}
|
48,270
|
we have a semester project
"As urban traffic congestion continues to pose significant challenges to efficient transportation systems, this abstract explores a dynamic approach to route planning that adapts to real-time traffic conditions. The research focuses on the development of an algorithm that dynamically computes the next shortest path for vehicles, taking into account current traffic conditions. Leveraging real-time data from traffic sensors, GPS devices, and historical traffic patterns, the algorithm continuously updates and optimizes route recommendations, ensuring drivers are directed along the most time-efficient paths. The study aims to enhance overall traffic flow, reduce travel time, and minimize congestion-related environmental impacts. The outcomes of this research offer a promising avenue for the integration of adaptive navigation systems, contributing to the optimization of urban mobility and providing a tangible solution for contemporary transportation challenges."
this is just the official text, do not use this as your actual prompt data, this is just to give you a baseline understanding
i would like to prompt engineer my way into completing this project
first i need a code that generates a bunch of random nodes with distances and finds the shortest path using dijkstras algorithm, this simulation needs to be done graphically in python, also tell me what all libraries i'd need along with the code
i need you to randomly generate a semi-connected graph (not all nodes have edges), don't need to number all nodes but do assign a source and destination, also print weight costs
what i need you to do is to print the graph and then print the path taken live/dynamically, use simpy or any simulation libraries you need
|
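Graph generation and live drawing aside, the core of the project above is Dijkstra's algorithm on a weighted graph, and that part needs nothing beyond the standard library. A heapq-based sketch (node names and the demo graph are illustrative; an unreachable target would raise `KeyError` here):

```python
# Stdlib sketch of Dijkstra's shortest path on a weighted adjacency dict.
import heapq

def dijkstra(graph, source, target):
    """graph: {node: {neighbor: weight}}. Returns (cost, path)."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]  # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter route was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:  # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], path[::-1]

g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
print(dijkstra(g, "A", "D"))  # → (6, ['A', 'B', 'C', 'D'])
```

For the graphical, dynamic part, a library like networkx (random graph generation, drawing via matplotlib) layers naturally on top: re-run this function whenever edge weights change to model "real-time traffic", then redraw the highlighted path.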
00a368a66612a012f2c611e1da659429
|
{
"intermediate": 0.3801163136959076,
"beginner": 0.05537626892328262,
"expert": 0.5645074248313904
}
|
48,271
|
callbacks=[LRA(model=model,patience=patience,stop_patience=stop_patience, threshold=threshold,
factor=factor,dwell=dwell,model_name= model_name, freeze=freeze, initial_epoch=0 )] ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[26], line 8
6 dwell=True # experimental, if True and monitored metric does not improve on current epoch set modelweights back to weights of previous epoch
7 freeze=False # if true free weights of the base model
----> 8 callbacks=[LRA(model=model,patience=patience,stop_patience=stop_patience, threshold=threshold,
9 factor=factor,dwell=dwell,model_name= model_name, freeze=freeze, initial_epoch=0 )]
10 LRA.tepochs=epochs # used to determine value of last epoch for printing
11 history=model.fit(x=train_gen, epochs=epochs, callbacks=callbacks, verbose=0, validation_data=valid_gen,
12 validation_steps=None, shuffle=False, initial_epoch=0)
Cell In[25], line 8, in LRA.__init__(self, model, patience, stop_patience, threshold, factor, dwell, model_name, freeze, initial_epoch)
6 def __init__(self,model, patience,stop_patience, threshold, factor, dwell, model_name, freeze, initial_epoch):
7 super(LRA, self).__init__()
----> 8 self.model=model
9 self.patience=patience # specifies how many epochs without improvement before learning rate is adjusted
10 self.stop_patience=stop_patience
AttributeError: can't set attribute 'model'
|
d59494c1d7db96fd5dbe3cf851fbe600
|
{
"intermediate": 0.437627375125885,
"beginner": 0.2951623797416687,
"expert": 0.2672102153301239
}
|
48,272
|
class LRA(keras.callbacks.Callback):
reset=False
count=0
stop_count=0
tepochs=0
def __init__(self,model, patience,stop_patience, threshold, factor, dwell, model_name, freeze, initial_epoch):
super(LRA, self).__init__()
self.model=model
self.patience=patience # specifies how many epochs without improvement before learning rate is adjusted
self.stop_patience=stop_patience
self.threshold=threshold # specifies training accuracy threshold when lr will be adjusted based on validation loss
self.factor=factor # factor by which to reduce the learning rate
self.dwell=dwell
self.lr=float(tf.keras.backend.get_value(model.optimizer.lr)) # get the initiallearning rate and save it in self.lr
self.highest_tracc=0.0 # set highest training accuracy to 0
self.lowest_vloss=np.inf # set lowest validation loss to infinity
#self.count=0 # initialize counter that counts epochs with no improvement
#self.stop_count=0 # initialize counter that counts how manytimes lr has been adjustd with no improvement
self.initial_epoch=initial_epoch
#self.epochs=epochs
best_weights=self.model.get_weights() # set a class vaiable so weights can be loaded after training is completed
msg=' '
if freeze==True:
msgs=f' Starting training using base model { model_name} with weights frozen to imagenet weights initializing LRA callback'
else:
msgs=f' Starting training using base model { model_name} training all layers '
print_in_color (msgs, (244, 252, 3), (55,65,80))
def on_epoch_begin(self,epoch, logs=None):
self.now= time.time()
def on_epoch_end(self, epoch, logs=None): # method runs on the end of each epoch
later=time.time()
duration=later-self.now
if epoch== self.initial_epoch or LRA.reset==True:
LRA.reset=False
msg='{0:^8s}{1:^10s}{2:^9s}{3:^9s}{4:^9s}{5:^9s}{6:^9s}{7:^11s}{8:^8s}'.format('Epoch', 'Loss', 'Accuracy','V_loss','V_acc', 'LR', 'Next LR', 'Monitor', 'Duration')
print_in_color(msg, (244,252,3), (5,165,80))
lr=float(tf.keras.backend.get_value(self.model.optimizer.lr)) # get the current learning rate
current_lr=lr
v_loss=logs.get('val_loss') # get the validation loss for this epoch
acc=logs.get('accuracy') # get training accuracy
v_acc=logs.get('val_accuracy')
loss=logs.get('loss')
#print ( '\n',v_loss, self.lowest_vloss, acc, self.highest_tracc)
if acc < self.threshold: # if training accuracy is below threshold adjust lr based on training accuracy
monitor='accuracy'
if acc>self.highest_tracc: # training accuracy improved in the epoch
self.highest_tracc=acc # set new highest training accuracy
LRA.best_weights=self.model.get_weights() # traing accuracy improved so save the weights
self.count=0 # set count to 0 since training accuracy improved
self.stop_count=0 # set stop counter to 0
if v_loss<self.lowest_vloss:
self.lowest_vloss=v_loss
color= (0,255,0)
self.lr=lr
else:
# training accuracy did not improve check if this has happened for patience number of epochs
# if so adjust learning rate
if self.count>=self.patience -1:
color=(245, 170, 66)
self.lr= lr* self.factor # adjust the learning by factor
tf.keras.backend.set_value(self.model.optimizer.lr, self.lr) # set the learning rate in the optimizer
self.count=0 # reset the count to 0
self.stop_count=self.stop_count + 1
if self.dwell:
self.model.set_weights(LRA.best_weights) # return to better point in N space
else:
if v_loss<self.lowest_vloss:
self.lowest_vloss=v_loss
else:
self.count=self.count +1 # increment patience counter
else: # training accuracy is above threshold so adjust learning rate based on validation loss
monitor='val_loss'
if v_loss< self.lowest_vloss: # check if the validation loss improved
self.lowest_vloss=v_loss # replace lowest validation loss with new validation loss
LRA.best_weights=self.model.get_weights() # validation loss improved so save the weights
self.count=0 # reset count since validation loss improved
self.stop_count=0
color=(0,255,0)
self.lr=lr
else: # validation loss did not improve
if self.count>=self.patience-1:
color=(245, 170, 66)
self.lr=self.lr * self.factor # adjust the learning rate
self.stop_count=self.stop_count + 1 # increment stop counter because lr was adjusted
self.count=0 # reset counter
tf.keras.backend.set_value(self.model.optimizer.lr, self.lr) # set the learning rate in the optimizer
if self.dwell:
self.model.set_weights(LRA.best_weights) # return to better point in N space
else:
self.count =self.count +1 # increment the patience counter
if acc>self.highest_tracc:
self.highest_tracc= acc
msg=f'{str(epoch+1):^3s}/{str(LRA.tepochs):4s} {loss:^9.3f}{acc*100:^9.3f}{v_loss:^9.5f}{v_acc*100:^9.3f}{current_lr:^9.5f}{self.lr:^9.5f}{monitor:^11s}{duration:^8.2f}'
print_in_color (msg,(244,252,3), (55,65,80))
if self.stop_count> self.stop_patience - 1: # check if learning rate has been adjusted stop_count times with no improvement
msg=f' training has been halted at epoch {epoch + 1} after {self.stop_patience} adjustments of learning rate with no improvement'
print_in_color(msg, (0,255,0), (55,65,80))
self.model.stop_training = True # stop training
add Codeadd Markdown
epochs =10
patience=9 # number of epochs to wait to adjust lr if monitored value does not improve
stop_patience =3 # number of epochs to wait before stopping training if monitored value does not improve
threshold=.9 # if train accuracy is < threshhold adjust monitor accuracy, else monitor validation loss
factor=.5 # factor to reduce lr by
dwell=True # experimental, if True and monitored metric does not improve on current epoch set modelweights back to weights of previous epoch
freeze=False # if true free weights of the base model
callbacks=[LRA(model=model,patience=patience,stop_patience=stop_patience, threshold=threshold,
factor=factor,dwell=dwell,model_name= model_name, freeze=freeze, initial_epoch=0 )]
LRA.tepochs=epochs # used to determine value of last epoch for printing
history=model.fit(x=train_gen, epochs=epochs, callbacks=callbacks, verbose=0, validation_data=valid_gen,
validation_steps=None, shuffle=False, initial_epoch=0)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[37], line 8
6 dwell=True # experimental, if True and monitored metric does not improve on current epoch set modelweights back to weights of previous epoch
7 freeze=False # if true free weights of the base model
----> 8 callbacks=[LRA(model=model,patience=patience,stop_patience=stop_patience, threshold=threshold,
9 factor=factor,dwell=dwell,model_name= model_name, freeze=freeze, initial_epoch=0 )]
10 LRA.tepochs=epochs # used to determine value of last epoch for printing
11 history=model.fit(x=train_gen, epochs=epochs, callbacks=callbacks, verbose=0, validation_data=valid_gen,
12 validation_steps=None, shuffle=False, initial_epoch=0)
Cell In[35], line 8, in LRA.__init__(self, model, patience, stop_patience, threshold, factor, dwell, model_name, freeze, initial_epoch)
6 def __init__(self,model, patience,stop_patience, threshold, factor, dwell, model_name, freeze, initial_epoch):
7 super(LRA, self).__init__()
----> 8 self.model=model
9 self.patience=patience # specifies how many epochs without improvement before learning rate is adjusted
10 self.stop_patience=stop_patience
AttributeError: can't set attribute 'model'
|
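The traceback above comes from `self.model = model`: in recent Keras releases `Callback.model` is a read-only property backed by `set_model()`, so assigning to it in `__init__` raises `AttributeError`. Storing the model under a different attribute name (and using that name throughout the callback) avoids it; Keras also calls `set_model` itself before training, so `self.model` remains readable inside the epoch hooks. A plain-Python illustration of the failure and the fix (no TensorFlow needed; class names are hypothetical):

```python
# Why `self.model = model` can raise AttributeError: a property with
# no setter, as on newer keras.callbacks.Callback, rejects assignment.
class CallbackLike:
    def set_model(self, model):
        self._model = model

    @property
    def model(self):
        return self._model  # read-only view, like Callback.model

class BadLRA(CallbackLike):
    def __init__(self, model):
        self.model = model  # AttributeError: property has no setter

class GoodLRA(CallbackLike):
    def __init__(self, model):
        self.my_model = model  # store under a different name instead

try:
    BadLRA("net")
except AttributeError as e:
    print("as in the traceback:", e)

print(GoodLRA("net").my_model)  # → net
```

Applied to the LRA class above: rename every `self.model` in `__init__` and the hooks to something like `self.my_model` (or simply drop the constructor argument and rely on `self.model` being populated by Keras during `fit`).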
51f60ad4aa4df07c76563f2c89702fff
|
{
"intermediate": 0.23157449066638947,
"beginner": 0.6338568329811096,
"expert": 0.13456861674785614
}
|
48,273
|
Compare: code ① import math
import logging
from functools import partial
from collections import OrderedDict
from copy import deepcopy
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import to_2tuple
from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise
from .utils import combine_tokens, recover_tokens
from .vit import VisionTransformer
from ..layers.attn_blocks import CEBlock
_logger = logging.getLogger(__name__)
class VisionTransformerCE(VisionTransformer):
""" Vision Transformer with candidate elimination (CE) module
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
- https://arxiv.org/abs/2012.12877
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
act_layer=None, weight_init='',
ce_loc=None, ce_keep_ratio=None):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
num_classes (int): number of classes for classification head
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
distilled (bool): model includes a distillation token and head as in DeiT models
drop_rate (float): dropout rate
attn_drop_rate (float): attention dropout rate
drop_path_rate (float): stochastic depth rate
embed_layer (nn.Module): patch embedding layer
norm_layer: (nn.Module): normalization layer
weight_init: (str): weight init scheme
"""
# super().__init__()
super().__init__()
if isinstance(img_size, tuple):
self.img_size = img_size
else:
self.img_size = to_2tuple(img_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 2 if distilled else 1
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.patch_embed = embed_layer(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event_z = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=3, stride=1)
# attn = CrossAttn(768, 4, 3072, 0.1, 'relu')
# self.cross_attn = Iter_attn(attn, 2)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
blocks = []
ce_index = 0
self.ce_loc = ce_loc
for i in range(depth):
ce_keep_ratio_i = 1.0
if ce_loc is not None and i in ce_loc:
ce_keep_ratio_i = ce_keep_ratio[ce_index]
ce_index += 1
blocks.append(
CEBlock(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer,
keep_ratio_search=ce_keep_ratio_i)
)
self.blocks = nn.Sequential(*blocks)
self.norm = norm_layer(embed_dim)
self.init_weights(weight_init)
def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False
):
B, H, W = x.shape[0], x.shape[2], x.shape[3]
event_z = self.pos_embed_event(event_z) # [:,:,:,:1000]
event_x = self.pos_embed_event(event_x) # B 768 1024
x = self.patch_embed(x)
z = self.patch_embed(z)
event_z += self.pos_embed_z
event_x += self.pos_embed_x
z += self.pos_embed_z
x += self.pos_embed_x
# attention mask handling # B, H, W
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
x = combine_tokens(z, event_z, x, event_x, mode=self.cat_mode) # 64+64+256+256=640
# x = combine_tokens(z, x, event_z, event_x, mode=self.cat_mode) # 64+64+256+256=640
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
for i, blk in enumerate(self.blocks):
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
x = self.norm(x)
lens_x_new = global_index_s.shape[1]
lens_z_new = global_index_t.shape[1]
z = x[:, :lens_z_new*2]
x = x[:, lens_z_new*2:]
if removed_indexes_s and removed_indexes_s[0] is not None:
removed_indexes_cat = torch.cat(removed_indexes_s, dim=1)
pruned_lens_x = lens_x - lens_x_new
pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device)
x = torch.cat([x, pad_x], dim=1)
index_all = torch.cat([global_index_s, removed_indexes_cat], dim=1)
# recover original token order
C = x.shape[-1]
x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x)
x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode)
x = x[:, :lens_x] # RGB head
x = torch.cat([event_x, x], dim=1)
# x = x[:, lens_x//2:] # event head
# x = torch.cat([z, x], dim=1)
# re-concatenate with the template, which may be further used by other modules
# x, event_x = x[:, :lens_x//2], x[:, lens_x//2:]
# x = x[:, -lens_x//2:]
aux_dict = {
"attn": attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict
def forward(self, z, x, event_z, event_x,
ce_template_mask=None, ce_keep_rate=None,
tnc_keep_rate=None,
return_last_attn=False):
x, aux_dict = self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,)
return x, aux_dict
def _create_vision_transformer(pretrained=False, **kwargs):
model = VisionTransformerCE(**kwargs)
if pretrained:
if 'npz' in pretrained:
model.load_pretrained(pretrained, prefix='')
else:
checkpoint = torch.load(pretrained, map_location="cpu")
missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False)
print('Load pretrained model from: ' + pretrained)
return model
def vit_base_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
def vit_large_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model and code ② import math
import logging
from functools import partial
from collections import OrderedDict
from copy import deepcopy
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import to_2tuple
from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise
from .utils import combine_tokens, recover_tokens
from .vit import VisionTransformer
from ..layers.attn_blocks import CEBlock
import random
import numpy as np
_logger = logging.getLogger(__name__)
class VisionTransformerCE(VisionTransformer):
""" Vision Transformer with candidate elimination (CE) module
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
- https://arxiv.org/abs/2012.12877
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
act_layer=None, weight_init='',
ce_loc=None, ce_keep_ratio=None):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
num_classes (int): number of classes for classification head
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
distilled (bool): model includes a distillation token and head as in DeiT models
drop_rate (float): dropout rate
attn_drop_rate (float): attention dropout rate
drop_path_rate (float): stochastic depth rate
embed_layer (nn.Module): patch embedding layer
norm_layer: (nn.Module): normalization layer
weight_init: (str): weight init scheme
"""
# super().__init__()
super().__init__()
if isinstance(img_size, tuple):
self.img_size = img_size
else:
self.img_size = to_2tuple(img_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 2 if distilled else 1
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.patch_embed = embed_layer(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event_z = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=3, stride=1)
# attn = CrossAttn(768, 4, 3072, 0.1, 'relu')
# self.cross_attn = Iter_attn(attn, 2)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
blocks = []
ce_index = 0
self.ce_loc = ce_loc
for i in range(depth):
ce_keep_ratio_i = 1.0
if ce_loc is not None and i in ce_loc:
ce_keep_ratio_i = ce_keep_ratio[ce_index]
ce_index += 1
blocks.append(
CEBlock(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer,
keep_ratio_search=ce_keep_ratio_i)
)
self.blocks = nn.Sequential(*blocks)
self.norm = norm_layer(embed_dim)
self.init_weights(weight_init)
def masking_fea(self,z, event_z, x, event_x, ratio=0.8 ):
b,nz,c = z.shape
b,nez,c = event_z.shape
b,nx,c = x.shape
b,nex,c = event_x.shape
assert(nz == nez)
assert(nx == nex)
lenz_out = int(nz*ratio)
lenx_out = int(nx*ratio)
mask_nz = torch.rand(b,nz).float()
mask_ez = torch.rand(b,nez).float()
mask_nx = torch.rand(b,nx).float()
mask_ex = torch.rand(b,nex).float()
mask_nz = mask_nz>0.4
mask_ez = mask_ez>0.4
mask_ez = ~mask_nz + mask_ez
mask_nz_idx = mask_nz.float().sort(1,descending=True)[-1].to(device = z.device)
mask_ez_idx = mask_ez.float().sort(1,descending=True)[-1].to(device = z.device)
mask_nx = mask_nx>0.4
mask_ex = mask_ex>0.4
mask_ex = ~mask_nx + mask_ex
mask_nx_idx = mask_nx.float().sort(1,descending=True)[-1].to(device = z.device)
mask_ex_idx = mask_ex.float().sort(1,descending=True)[-1].to(device = z.device)
masked_z = torch.gather(z, 1, mask_nz_idx[:,:lenz_out,None].repeat([1,1,c]))
masked_ez = torch.gather(event_z, 1, mask_ez_idx[:,:lenz_out,None].repeat([1,1,c]))
masked_x = torch.gather(x, 1, mask_nx_idx[:,:lenx_out,None].repeat([1,1,c]))
masked_ex = torch.gather(event_x, 1, mask_ex_idx[:,:lenx_out,None].repeat([1,1,c]))
return masked_z, masked_ez, masked_x, masked_ex,{'x1':mask_nx_idx[:,:lenx_out],'x0':mask_nx_idx[:,lenx_out:],
'ex1':mask_ex_idx[:,:lenx_out],'ex0':mask_ex_idx[:,lenx_out:], }
def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False,Track=False
):
B, H, W = x.shape[0], x.shape[2], x.shape[3]
# print('shape of event_z before projection:{}, event_x:{}'.format(event_z.shape, event_x.shape))
event_z = self.pos_embed_event(event_z) # [:,:,:,:1000]
event_x = self.pos_embed_event(event_x) # B 768 1024
x = self.patch_embed(x)
z = self.patch_embed(z)
# print('shape of event_z:{}, event_x:{}, x:{}, z:{}'.format(event_z.shape,event_x.shape,x.shape,z.shape ))
event_z += self.pos_embed_z
event_x += self.pos_embed_x
z += self.pos_embed_z
x += self.pos_embed_x
# attention mask handling # B, H, W
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
if Track == False:
z, event_z, x, event_x, token_idx = self.masking_fea(z, event_z, x, event_x, ratio=0.9)
x = combine_tokens(z, event_z, x, event_x, mode=self.cat_mode) # 64+64+256+256=640
# x = combine_tokens(z, x, event_z, event_x, mode=self.cat_mode) # 64+64+256+256=640
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
# lens_z = self.pos_embed_z.shape[1]
# lens_x = self.pos_embed_x.shape[1]
lens_z = z.shape[1]
lens_x = x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
out_attn = []
for i, blk in enumerate(self.blocks):
# out_global_s.append(global_index_s)
# out_global_t.append(global_index_t)
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
out_attn.append(attn)
# print('shape of attn:{}, lens_z:{}, lens_x:{}'.format(attn.shape, lens_z, lens_x))
out_attn_idx = random.choice(np.arange(len(out_attn)))
out_attn = out_attn[out_attn_idx]
x = self.norm(x)
lens_x_new = global_index_s.shape[1]
lens_z_new = global_index_t.shape[1]
z = x[:, :lens_z_new*2]
x = x[:, lens_z_new*2:]
if Track == False:
idx1 = token_idx['x1']
idx0 = token_idx['x0']
idex1 = token_idx['ex1']
idex0 = token_idx['ex0']
ex = x[:,idex1.shape[1]:]
x = x[:,:idex1.shape[1]]
# if removed_indexes_s and removed_indexes_s[0] is not None:
# removed_indexes_cat = torch.cat(removed_indexes_s, dim=1)
pruned_lens_x = idx0.shape[1]
pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device)
x = torch.cat([x, pad_x], dim=1)
index_all = torch.cat([idx1, idx0], dim=1)
# recover original token order
C = x.shape[-1]
x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x)
ex = torch.cat([ex, pad_x], dim=1)
index_all = torch.cat([idex1, idex0], dim=1)
# recover original token order
C = ex.shape[-1]
ex = torch.zeros_like(ex).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=ex)
x = torch.cat([x,ex],dim=1)
x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode)
event_x = x[:, lens_x:] # RGB head
x = x[:, :lens_x] # RGB head
x = torch.cat([event_x, x], dim=1)
aux_dict = {
# "attn": attn,
"attn": out_attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict
def forward(self, z, x, event_z, event_x,
ce_template_mask=None, ce_keep_rate=None,
tnc_keep_rate=None,
return_last_attn=False,Track=False):
x, aux_dict = self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,Track=Track)
return x, aux_dict
def _create_vision_transformer(pretrained=False, **kwargs):
model = VisionTransformerCE(**kwargs)
if pretrained:
if 'npz' in pretrained:
model.load_pretrained(pretrained, prefix='')
else:
checkpoint = torch.load(pretrained, map_location="cpu")
missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False)
print('Load pretrained model from: ' + pretrained)
return model
def vit_base_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
def vit_large_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
    return model

In the code above, compare this block:

if removed_indexes_s and removed_indexes_s[0] is not None:
removed_indexes_cat = torch.cat(removed_indexes_s, dim=1)
pruned_lens_x = lens_x - lens_x_new
pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device)
x = torch.cat([x, pad_x], dim=1)
index_all = torch.cat([global_index_s, removed_indexes_cat], dim=1)
# recover original token order
C = x.shape[-1]
x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x)
x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode)
x = x[:, :lens_x] # RGB head
x = torch.cat([event_x, x], dim=1)
# x = x[:, lens_x//2:] # event head
# x = torch.cat([z, x], dim=1)
# re-concatenate with the template, which may be further used by other modules
# x, event_x = x[:, :lens_x//2], x[:, lens_x//2:]
# x = x[:, -lens_x//2:]
aux_dict = {
"attn": attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict

with this block:

if Track == False:
idx1 = token_idx['x1']
idx0 = token_idx['x0']
idex1 = token_idx['ex1']
idex0 = token_idx['ex0']
ex = x[:,idex1.shape[1]:]
x = x[:,:idex1.shape[1]]
# if removed_indexes_s and removed_indexes_s[0] is not None:
# removed_indexes_cat = torch.cat(removed_indexes_s, dim=1)
pruned_lens_x = idx0.shape[1]
pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device)
x = torch.cat([x, pad_x], dim=1)
index_all = torch.cat([idx1, idx0], dim=1)
# recover original token order
C = x.shape[-1]
x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x)
ex = torch.cat([ex, pad_x], dim=1)
index_all = torch.cat([idex1, idex0], dim=1)
# recover original token order
C = ex.shape[-1]
ex = torch.zeros_like(ex).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=ex)
x = torch.cat([x,ex],dim=1)
x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode)
event_x = x[:, lens_x:] # RGB head
x = x[:, :lens_x] # RGB head
x = torch.cat([event_x, x], dim=1)
aux_dict = {
# "attn": attn,
"attn": out_attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict

How does the handling differ between the two?
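Both snippets rely on the same pad-and-scatter recovery: pruned positions are zero-padded, the kept and pruned index lists are concatenated, and `scatter_` writes every token back to its original slot. A small NumPy sketch of that step (my own names; illustrative):

```python
import numpy as np

def recover_token_order(kept, kept_idx, removed_idx):
    """kept: (k, C) surviving tokens; *_idx: their original positions.
    Returns an (n, C) array with pruned slots zero-filled, mirroring the
    pad_x + scatter_ pattern in the snippets above."""
    k, c = kept.shape
    n = k + len(removed_idx)
    src = np.concatenate([kept, np.zeros((len(removed_idx), c))], axis=0)
    index_all = np.concatenate([kept_idx, removed_idx])
    out = np.zeros((n, c))
    out[index_all] = src  # scatter each token back to its original index
    return out

kept = np.array([[10.0], [30.0]])  # tokens that survived, in pruned order
out = recover_token_order(kept, np.array([0, 2]), np.array([1, 3]))
```

Roughly, the first snippet runs this recovery once, driven by the CE-pruning indices (`global_index_s` / `removed_indexes_cat`), while the second runs it twice, once per stream, driven by the `x1`/`x0` and `ex1`/`ex0` lists produced by `masking_fea`.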
|
97009423d1137857276731b18985ac7a
|
{
"intermediate": 0.31905272603034973,
"beginner": 0.427822083234787,
"expert": 0.2531251311302185
}
|
48,274
|
data: {"id":"chatcmpl-9Iy4dVbKz0Pa4n5Qn3wwMnhL6mWF3","object":"chat.completion.chunk","created":1714307855,"model":"gpt-4","system_fingerprint":"fp_2f57f81c11","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

How do I write JavaScript code to strip the leading "data: " from the string above?
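The row above asks how to strip the leading `data: ` from a server-sent-events chunk before parsing it. In JS the one-liner would be `str.replace(/^data: /, '')` (or `str.slice(6)`); the same operation is sketched here in Python to keep the examples in one language, using a shortened sample ID:

```python
import json

def parse_sse_chunk(chunk: str):
    """Strip the 'data: ' prefix from an SSE line and parse the JSON payload."""
    prefix = "data: "
    payload = chunk[len(prefix):] if chunk.startswith(prefix) else chunk
    return json.loads(payload)

# shortened sample chunk in the shape of the one quoted above
chunk = 'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":""}}]}'
event = parse_sse_chunk(chunk)
```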
|
c514e122cda9e932fd0706f9c97420d1
|
{
"intermediate": 0.3480824828147888,
"beginner": 0.2883206009864807,
"expert": 0.36359694600105286
}
|
48,275
|
fix my code, the cost_mat isn't changing after calling the congestion function

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#define n 5
#define sparsity 50
void congestion(int (*cost_mat)[n]) {
srand(time(NULL));
int change = rand() % 2 - 1;
for (int i = 0; i < n; i++) {
for (int j = 0; j < i; j++) {
cost_mat[i][j] += (cost_mat[j][j] / 10) * change;
cost_mat[j][i] = cost_mat[i][j];
}
}
}
int main() {
// make a cost matrix
srand(time(NULL));
int cost_mat[n][n];
for (int i = 0; i < n; i++) {
for (int j = 0; j <= i; j++) {
if (i == j) {
cost_mat[i][j] = 0;
} else if (sparsity > rand() % 100) {
cost_mat[i][j] = rand() % 100;
cost_mat[j][i] = cost_mat[i][j];
} else {
cost_mat[i][j] = 0;
cost_mat[j][i] = cost_mat[i][j];
}
}
}
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
printf("%d ", cost_mat[i][j]);
}
printf("\n");
}
congestion(cost_mat);
printf("\n\n");
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
printf("%d ", cost_mat[i][j]);
}
printf("\n");
}
/* while(destination not reached1) {
sleep(1000);
congestion(cost_mat);
} */
}
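A plausible diagnosis for the row above, stated as an assumption rather than a verified fix: the update reads the diagonal entry `cost_mat[j][j]`, which is initialised to 0, so `(cost_mat[j][j] / 10) * change` is always 0; moreover `rand() % 2 - 1` can only yield -1 or 0, never +1. A Python mirror of that arithmetic:

```python
def congestion_delta(diag_value, change):
    # mirrors the C expression: (cost_mat[j][j] / 10) * change
    return (diag_value // 10) * change

# the diagonal of the cost matrix is always 0, so every delta is 0
deltas = [congestion_delta(0, change) for change in (-1, 0)]

# rand() % 2 - 1 can only produce -1 or 0 (never +1)
possible_changes = {r % 2 - 1 for r in range(100)}
```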
|
34d431ce01c80806a0ae34572bc9ff83
|
{
"intermediate": 0.3093498945236206,
"beginner": 0.5046064853668213,
"expert": 0.1860436201095581
}
|
48,276
|
In the file ReplaceVideosPage.tsx there is a button that says "Replace all playlist items"; when it is pressed an API is called. I want the text of the <pre> to show the message "`${videoIds.length - failedItems} video(s) added to playlist successfully.`" from the response of the file "src\services\addVideosToPlaylist.js"
// src\features\playlist-management\components\ReplaceVideosPage.tsx
import { useEffect, useState } from "react";
import extractIDsFromUrls from "../../../services/extractIDsFromUrls";
import CustomButton from "./CustomButton";
import CustomTextarea from "./CustomTextarea";
import CustomInput from "./CustomInput";
const ReplaceVideosPage = () => {
const [youtubeUrls, setYoutubeUrls] = useState<string>("");
const [videoIds, setVideoIds] = useState<string[]>([]);
const [customPlaylistID, setCustomPlaylistId] = useState<string>("");
const [isLoading, setIsLoading] = useState<boolean>(false);
const replacePlaylistData = async () => {
try {
setIsLoading(true);
const responseGet = await fetch(`/api/playlists/${customPlaylistID}`);
if (!responseGet.ok) {
throw new Error(
`Get Playlist API request failed: ${responseGet.status}`
);
}
const dataGet = await responseGet.json();
const allIDs = dataGet.allItems.map((item: { id: string }) => item.id);
const IDsToAdd = videoIds.filter((id) => !allIDs.includes(id));
const IDsToRemove = allIDs.filter((id: string) => !videoIds.includes(id));
const responseCreate = await fetch(
`/api/createPlaylist?newItems=${encodeURIComponent(
JSON.stringify(IDsToAdd)
)}&customPlaylistId=${customPlaylistID}`
);
if (!responseCreate.ok) {
throw new Error(
`Create Playlist API request failed: ${responseCreate.status}`
);
}
const responseDelete = await fetch(
`/api/deleteFromPlaylist?newItems=${encodeURIComponent(
JSON.stringify(IDsToRemove)
)}&customPlaylistId=${customPlaylistID}`
);
if (!responseDelete.ok) {
throw new Error(
`Delete Playlist API request failed: ${responseDelete.status}`
);
}
setIsLoading(false);
} catch (error) {
console.error("Error processing Replace Playlist API:", error);
}
};
useEffect(() => {
setVideoIds(extractIDsFromUrls(youtubeUrls));
}, [youtubeUrls]);
return (
<main className="w-full max-w-lg mx-auto my-8 flex flex-col gap-4">
<CustomInput
title="Required Playlist ID*"
onChangeHandler={setCustomPlaylistId}
inputValue={customPlaylistID}
/>
<CustomTextarea
onChangeHandler={setYoutubeUrls}
placeholder="Enter youtube links..."
textareaValue={youtubeUrls}
/>
{videoIds.length > 0 && (
<>
<p># ID: {videoIds.length}</p>
<CustomButton
buttonText={
isLoading ? "Cargando..." : "Replace all playlist items"
}
isLoading={isLoading}
onClickHandler={replacePlaylistData}
/>
</>
)}
<div className="bg-gray-300 p-4 rounded-md">
<pre className="text-sm font-mono">
<code className="block">Tu código aquí...</code>
</pre>
</div>
</main>
);
};
export default ReplaceVideosPage;
// src\pages\api\createPlaylist.ts
import { google } from "googleapis";
import { promises as fs } from "fs";
import {
CLIENT_ID,
CLIENT_SECRET,
REDIRECTION_URI,
} from "../../models/credentials";
import createPlaylist from "../../services/createPlaylist";
const oAuth2Client: any = new google.auth.OAuth2(
CLIENT_ID,
CLIENT_SECRET,
REDIRECTION_URI
);
export default async function handler(req: any, res: any) {
try {
const {
newItems,
customPlaylistId,
}: { newItems: string; customPlaylistId?: string } = req.query;
const token: any = await fs.readFile("token.json");
const parsedToken = JSON.parse(token);
oAuth2Client.setCredentials(parsedToken);
const newPlaylistId = await createPlaylist({
auth: oAuth2Client,
newItems,
customPlaylistId,
});
res
.status(200)
.json({ message: "Playlist created successfully", newPlaylistId });
} catch (error) {
console.error("Error:", error);
res.status(500).json({ error: "Internal Server Error" });
}
}
// src\services\createPlaylist.ts
import { google } from "googleapis";
import addVideosToPlaylist from "./addVideosToPlaylist";
interface CreatePlaylistData {
auth: any;
newItems: string;
customPlaylistId?: string;
}
async function createPlaylist({
auth,
newItems,
customPlaylistId,
}: CreatePlaylistData): Promise<string> {
const youtube: any = google.youtube({ version: "v3", auth });
try {
let playlistId: string;
let videoIds: string[] = JSON.parse(newItems);
if (customPlaylistId) {
playlistId = customPlaylistId;
console.log("Using custom playlist ID:", playlistId);
const getPlaylistData = async (): Promise<string[]> => {
try {
const response = await fetch(
`http://localhost:3000/api/playlists/${playlistId}`
);
if (!response.ok) {
throw new Error(
`API request failed with status ${response.status}`
);
}
const data = await response.json();
const allIDs = data.allItems.map((item: any) => item.id);
videoIds = videoIds.filter((videoId) => !allIDs.includes(videoId));
return videoIds;
} catch (error) {
console.error("Error fetching playlist data:", error);
return [];
}
};
videoIds = await getPlaylistData();
} else {
const res = await youtube.playlists.insert({
part: ["snippet,status"],
resource: {
snippet: {
title: "Música",
description: "Descripción",
tags: ["Music"],
defaultLanguage: "es_MX",
},
status: {
privacyStatus: "unlisted",
},
},
});
playlistId = res.data.id;
console.log("Playlist created:", playlistId);
}
await addVideosToPlaylist(auth, playlistId, videoIds);
return playlistId;
} catch (err) {
console.error("Error creating playlist:", err);
throw err;
}
}
export default createPlaylist;
// src\services\addVideosToPlaylist.js
const { google } = require("googleapis");
async function addVideosToPlaylist(auth, playlistId, videoIds) {
const youtube = google.youtube({
version: "v3",
auth,
});
console.log(`Adding ${videoIds.length} videos to playlist.`);
let failedItems = 0;
let countIds = 0;
for (const videoId of videoIds) {
countIds++;
try {
await youtube.playlistItems.insert({
part: ["snippet"],
resource: {
snippet: {
playlistId: playlistId,
resourceId: {
kind: "youtube#video",
videoId: videoId,
},
},
},
});
console.log(`Added ${countIds} of ${videoIds.length}: ${videoId}`);
} catch (err) {
console.error(`Error ${countIds} of ${videoIds.length}: ${videoId}`);
failedItems++;
}
}
console.log(
`${videoIds.length - failedItems} video(s) added to playlist successfully.`
);
failedItems && console.log(`${failedItems} video(s) failed.`);
}
module.exports = addVideosToPlaylist;
|
96cb353ca9628b3ed5c90c629f26c07d
|
{
"intermediate": 0.4238790273666382,
"beginner": 0.45426642894744873,
"expert": 0.12185458838939667
}
|
48,277
|
In the file ReplaceVideosPage.tsx there is a button that says "Replace all playlist items"; when it is pressed an API is called. I want the text of the PRE tag to show, for each video successfully added to the playlist, the message "Added ${countIds} of ${videoIds.length}: ${videoId}" from the response of the file "src\services\addVideosToPlaylist.js"
// src\features\playlist-management\components\ReplaceVideosPage.tsx
import { useEffect, useState } from "react";
import extractIDsFromUrls from "../../../services/extractIDsFromUrls";
import CustomButton from "./CustomButton";
import CustomTextarea from "./CustomTextarea";
import CustomInput from "./CustomInput";
const ReplaceVideosPage = () => {
  const [youtubeUrls, setYoutubeUrls] = useState<string>("");
  const [videoIds, setVideoIds] = useState<string[]>([]);
  const [customPlaylistID, setCustomPlaylistId] = useState<string>("");
  const [isLoading, setIsLoading] = useState<boolean>(false);
  const replacePlaylistData = async () => {
    try {
      setIsLoading(true);
      const responseGet = await fetch(`/api/playlists/${customPlaylistID}`);
      if (!responseGet.ok) {
        throw new Error(
          `Get Playlist API request failed: ${responseGet.status}`
        );
      }
      const dataGet = await responseGet.json();
      const allIDs = dataGet.allItems.map((item: { id: string }) => item.id);
      const IDsToAdd = videoIds.filter((id) => !allIDs.includes(id));
      const IDsToRemove = allIDs.filter((id: string) => !videoIds.includes(id));
      const responseCreate = await fetch(
        `/api/createPlaylist?newItems=${encodeURIComponent(
          JSON.stringify(IDsToAdd)
        )}&customPlaylistId=${customPlaylistID}`
      );
      if (!responseCreate.ok) {
        throw new Error(
          `Create Playlist API request failed: ${responseCreate.status}`
        );
      }
      const responseDelete = await fetch(
        `/api/deleteFromPlaylist?newItems=${encodeURIComponent(
          JSON.stringify(IDsToRemove)
        )}&customPlaylistId=${customPlaylistID}`
      );
      if (!responseDelete.ok) {
        throw new Error(
          `Delete Playlist API request failed: ${responseDelete.status}`
        );
      }
      setIsLoading(false);
    } catch (error) {
      console.error("Error processing Replace Playlist API:", error);
    }
  };
  useEffect(() => {
    setVideoIds(extractIDsFromUrls(youtubeUrls));
  }, [youtubeUrls]);
  return (
    <main className="w-full max-w-lg mx-auto my-8 flex flex-col gap-4">
      <CustomInput
        title="Required Playlist ID*"
        onChangeHandler={setCustomPlaylistId}
        inputValue={customPlaylistID}
      />
      <CustomTextarea
        onChangeHandler={setYoutubeUrls}
        placeholder="Enter youtube links..."
        textareaValue={youtubeUrls}
      />
      {videoIds.length > 0 && (
        <>
          <p># ID: {videoIds.length}</p>
          <CustomButton
            buttonText={
              isLoading ? "Cargando..." : "Replace all playlist items"
            }
            isLoading={isLoading}
            onClickHandler={replacePlaylistData}
          />
        </>
      )}
      <div className="bg-gray-300 p-4 rounded-md">
        <pre className="text-sm font-mono">
          <code className="block">Tu código aquí...</code>
        </pre>
      </div>
    </main>
  );
};
export default ReplaceVideosPage;
// src\pages\api\createPlaylist.ts
import { google } from "googleapis";
import { promises as fs } from "fs";
import {
  CLIENT_ID,
  CLIENT_SECRET,
  REDIRECTION_URI,
} from "../../models/credentials";
import createPlaylist from "../../services/createPlaylist";
const oAuth2Client: any = new google.auth.OAuth2(
  CLIENT_ID,
  CLIENT_SECRET,
  REDIRECTION_URI
);
export default async function handler(req: any, res: any) {
  try {
    const {
      newItems,
      customPlaylistId,
    }: { newItems: string; customPlaylistId?: string } = req.query;
    const token: any = await fs.readFile("token.json");
    const parsedToken = JSON.parse(token);
    oAuth2Client.setCredentials(parsedToken);
    const newPlaylistId = await createPlaylist({
      auth: oAuth2Client,
      newItems,
      customPlaylistId,
    });
    res
      .status(200)
      .json({ message: "Playlist created successfully", newPlaylistId });
  } catch (error) {
    console.error("Error:", error);
    res.status(500).json({ error: "Internal Server Error" });
  }
}
// src\services\createPlaylist.ts
import { google } from "googleapis";
import addVideosToPlaylist from "./addVideosToPlaylist";
interface CreatePlaylistData {
  auth: any;
  newItems: string;
  customPlaylistId?: string;
}
async function createPlaylist({
  auth,
  newItems,
  customPlaylistId,
}: CreatePlaylistData): Promise<string> {
  const youtube: any = google.youtube({ version: "v3", auth });
  try {
    let playlistId: string;
    let videoIds: string[] = JSON.parse(newItems);
    if (customPlaylistId) {
      playlistId = customPlaylistId;
      console.log("Using custom playlist ID:", playlistId);
      const getPlaylistData = async (): Promise<string[]> => {
        try {
          const response = await fetch(
            `http://localhost:3000/api/playlists/${playlistId}`
          );
          if (!response.ok) {
            throw new Error(
              `API request failed with status ${response.status}`
            );
          }
          const data = await response.json();
          const allIDs = data.allItems.map((item: any) => item.id);
          videoIds = videoIds.filter((videoId) => !allIDs.includes(videoId));
          return videoIds;
        } catch (error) {
          console.error("Error fetching playlist data:", error);
          return [];
        }
      };
      videoIds = await getPlaylistData();
    } else {
      const res = await youtube.playlists.insert({
        part: ["snippet,status"],
        resource: {
          snippet: {
            title: "Música",
            description: "Descripción",
            tags: ["Music"],
            defaultLanguage: "es_MX",
          },
          status: {
            privacyStatus: "unlisted",
          },
        },
      });
      playlistId = res.data.id;
      console.log("Playlist created:", playlistId);
    }
    await addVideosToPlaylist(auth, playlistId, videoIds);
    return playlistId;
  } catch (err) {
    console.error("Error creating playlist:", err);
    throw err;
  }
}
export default createPlaylist;
// src\services\addVideosToPlaylist.js
const { google } = require("googleapis");
async function addVideosToPlaylist(auth, playlistId, videoIds) {
  const youtube = google.youtube({
    version: "v3",
    auth,
  });
  console.log(`Adding ${videoIds.length} videos to playlist.`);
  let failedItems = 0;
  let countIds = 0;
  for (const videoId of videoIds) {
    countIds++;
    try {
      await youtube.playlistItems.insert({
        part: ["snippet"],
        resource: {
          snippet: {
            playlistId: playlistId,
            resourceId: {
              kind: "youtube#video",
              videoId: videoId,
            },
          },
        },
      });
      console.log(`Added ${countIds} of ${videoIds.length}: ${videoId}`);
    } catch (err) {
      console.error(`Error ${countIds} of ${videoIds.length}: ${videoId}`);
      failedItems++;
    }
  }
  console.log(
    `${videoIds.length - failedItems} video(s) added to playlist successfully.`
  );
  failedItems && console.log(`${failedItems} video(s) failed.`);
}
module.exports = addVideosToPlaylist;
|
50030f83df28e34be3f2f13a64239dea
|
{
"intermediate": 0.3536686301231384,
"beginner": 0.4179261326789856,
"expert": 0.22840522229671478
}
|
48,278
|
hi
|
b6429903fce929ca8407790f2b6a9fec
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
48,280
|
fix this code
import random
import sys
import networkx as nx
import matplotlib.pyplot as plt
n = 5
sparsity = 50
def congestion(cost_mat, G):
for i in range(n):
for j in range(i):
change = random.randint(-1, 1)
cost_mat[i][j] += (cost_mat[i][j] // 10) * change
if cost_mat[i][j] < 0:
cost_mat[i][j] = 0
cost_mat[j][i] = cost_mat[i][j]
G.edges[i, j]['weight'] = cost_mat[i][j]
G.edges[j, i]['weight'] = cost_mat[i][j]
draw_graph(G)
def dijkstra(cost_mat, src, G):
dist = [sys.maxsize] * n
sptSet = [False] * n
dist[src] = 0
for _ in range(n - 1):
u = min_distance(dist, sptSet)
sptSet[u] = True
congestion(cost_mat, G) # Update cost matrix after expanding a node
for v in range(n):
if not sptSet[v] and cost_mat[u][v] and dist[u] != sys.maxsize and dist[u] + cost_mat[u][v] < dist[v]:
dist[v] = dist[u] + cost_mat[u][v]
return dist
def min_distance(dist, sptSet):
min_val = sys.maxsize
min_index = -1
for v in range(n):
if not sptSet[v] and dist[v] <= min_val:
min_val = dist[v]
min_index = v
return min_index
def generate_cost_matrix():
cost_mat = [[0] * n for _ in range(n)]
for i in range(n):
for j in range(i + 1, n):
if random.randint(0, 100) < sparsity:
cost = random.randint(1, 100)
cost_mat[i][j] = cost
cost_mat[j][i] = cost
return cost_mat
def draw_graph(G):
pos = nx.spring_layout(G)
plt.figure()
nx.draw(G, pos, with_labels=True)
edge_labels = nx.get_edge_attributes(G, 'weight')
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
plt.show()
def main():
cost_mat = generate_cost_matrix()
G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
for j in range(i + 1, n):
if cost_mat[i][j] != 0:
G.add_edge(i, j, weight=cost_mat[i][j])
src = random.randint(0, n - 1)
dest = random.randint(0, n - 1)
while dest == src:
dest = random.randint(0, n - 1)
print(f"Source: {src}, Destination: {dest}")
dist = dijkstra(cost_mat, src, G)
print("Vertex\tDistance from Source")
for i in range(n):
print(f"{i}\t{dist[i]}")
if __name__ == "__main__":
main()
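Setting aside the congestion updates and the blocking `plt.show()` calls that the row above mixes into the search, the core shortest-path logic can be checked on its own. A self-contained Dijkstra over an adjacency matrix where 0 means "no edge" (names are mine, illustrative):

```python
import sys

def dijkstra(cost, src):
    """Shortest distances from src over a symmetric cost matrix (0 = no edge)."""
    n = len(cost)
    dist = [sys.maxsize] * n
    dist[src] = 0
    visited = [False] * n
    for _ in range(n):
        # pick the closest unvisited vertex
        u = min((v for v in range(n) if not visited[v]), key=lambda v: dist[v])
        visited[u] = True
        for v in range(n):
            if cost[u][v] and not visited[v] and dist[u] + cost[u][v] < dist[v]:
                dist[v] = dist[u] + cost[u][v]
    return dist

# small path graph: 0 -4- 1 -8- 2 -7- 3
cost = [
    [0, 4, 0, 0],
    [4, 0, 8, 0],
    [0, 8, 0, 7],
    [0, 0, 7, 0],
]
```

Note that mutating edge weights mid-search, as the quoted code does inside `dijkstra`, invalidates already-finalized distances; recomputing from scratch after each congestion update is the simpler correct behavior.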
|
1b86e433a209f821e69ae4fffc9af6b6
|
{
"intermediate": 0.33922049403190613,
"beginner": 0.3952609598636627,
"expert": 0.2655186057090759
}
|
48,281
|
Teach me how to deploy applications, or virtual environments such as Windows, in Docker
|
542127593a4ca335a9425578003735ae
|
{
"intermediate": 0.6440190672874451,
"beginner": 0.14560343325138092,
"expert": 0.21037746965885162
}
|
48,282
|
this is my code:
import numpy as np
def cost(X, y, theta, l2_lambda):
""" A cost function.
Parameters
----------
X: Training data of shape (n_samples, n_features)
y: Target values of shape (n_samples,)
theta: Parameters of shape (n_features,)
l2_lambda: L2 regularization parameter
Returns
-------
The value of the cost function
"""
m = len(y)
error = np.dot(X, theta) - y
regularization_term = (l2_lambda / (2 * m)) * np.sum(theta ** 2)
J = (1 / (2 * m)) * np.sum(error ** 2) + regularization_term
return J
def gradient(X, y, theta, l2_lambda):
""" Gradient of cost function.
Parameters
----------
X: Training data (n_samples, n_features)
y: Target values (n_samples,)
theta: Parameters of shape (n_features,)
l2_lambda: L2 regularization parameter
Returns
-------
Gradient of shape (n_features,)
"""
m = len(y)
error = np.dot(X, theta) - y
gradient = (1 / m) * np.dot(X.T, error)
#regularization_term = (l2_lambda / m) * theta
#regularization_term[0] = 0 # Exclude regularization for bias term
regularization_term = (l2_lambda / m) * theta
regularization_term[0] = 0 # Exclude regularization for bias term
return gradient + regularization_term
def gradient_descent(
X,
y,
l2_lambda,
lr=0.09,
tol=1e-13,
max_iter=10000000):
""" Implementation of gradient descent.
Parameters
----------
X: Training data of shape (n_samples, n_features)
y: Target values of shape (n_samples,)
l2_lambda: L2 regularization parameter
lr: The learning rate.
tol: The stopping criterion (tolerance).
max_iter: The maximum number of passes (aka epochs).
Returns
-------
The parameters theta of shape (n_features,)
"""
m, n = X.shape
theta = np.zeros(n)
for _ in range(max_iter):
gradient_theta = gradient(X, y, theta, l2_lambda)
theta -= lr * gradient_theta
if np.linalg.norm(gradient_theta) < tol:
break
return theta
class LinearRegression:
def __init__(self, l2_lambda = 0):
self.coefs = None
self.intercept = None
self.l2_lambda = l2_lambda
def fit(self, X, y):
"""
The fit method of LinearRegression accepts X and y
as input and save the coefficients of the linear model.
Parameters
----------
X: Training data of shape (n_samples, n_features)
y: Target values of shape (n_samples,)
Returns
-------
None
"""
m, n = X.shape
X = np.column_stack((np.ones(m), X)) # Add bias term
self.coefs = gradient_descent(X, y, self.l2_lambda)
self.intercept = self.coefs[0]
self.coefs = self.coefs[1:]
def predict(self, X: np.ndarray) -> np.ndarray:
"""Predict using the linear model.
Parameters
----------
X: Test data of shape (n_samples, n_features)
Returns
-------
Returns predicted values of shape (n_samples,)
"""
m = X.shape[0]
X = np.column_stack((np.ones(m), X)) # Add bias term
return np.dot(X, np.hstack((self.intercept, self.coefs)))
this is the test file:
import unittest
import ast
import inspect
import time
import numpy as np
def uses_loop(function):
for node in ast.walk(ast.parse(inspect.getsource(function))):
if isinstance(node, ast.Name) and node.id == "map":
return True
elif isinstance(node, (ast.For, ast.While, ast.ListComp)):
return True
# Throw error also if NotImplementedError is raised
elif isinstance(node, ast.Raise):
return True
class A_TestOutputValues(unittest.TestCase):
def setUp(self) -> None:
rng = np.random.RandomState(0)
self.y = rng.randn(1)
self.X = rng.randn(1, 1)
def test_010_fit_return(self):
from hw4 import LinearRegression
lr = LinearRegression()
fit_return = lr.fit(self.X, self.y)
self.assertIsNone(fit_return, f"fit() method should return None, but got: {type(fit_return)}")
self.assertIsNotNone(lr.coefs, "fit() method should set self.coefs")
self.assertIsNotNone(lr.intercept, "fit() method should set self.intercept")
def test_020_predict_return(self):
from hw4 import LinearRegression
lr = LinearRegression()
lr.fit(self.X, self.y)
predict_return = lr.predict(self.X)
self.assertIsNotNone(predict_return, f"predict() method should return predicted values, but got: {type(predict_return)}")
self.assertEqual(len(predict_return), len(self.X), f"predict() method should return predictions of length {len(self.X)}, but got: {len(predict_return)}")
class B_TestCostFunction(unittest.TestCase):
def test_010_cost_grad_lambda_0(self):
from hw4 import gradient, cost
rng = np.random.RandomState(0)
y = rng.randn(10)
X = rng.randn(10, 5)
_, cols = X.shape
theta0 = np.ones(cols)
grad = gradient(X, y, theta0, 0)
def cost_(theta):
return cost(X, y, theta, 0)
eps = 10 ** -4
theta0_ = theta0
grad_num = np.zeros(grad.shape)
for i in range(grad.size):
theta0_[i] += eps
h = cost_(theta0_)
theta0_[i] -= 2 * eps
l = cost_(theta0_)
theta0_[i] += eps
grad_num[i] = (h - l) / (2 * eps)
np.testing.assert_almost_equal(grad, grad_num, decimal=4)
def test_020_cost_grad_lambda_1(self):
from hw4 import gradient, cost
rng = np.random.RandomState(0)
y = rng.randn(10)
X = rng.randn(10, 5)
_, cols = X.shape
theta0 = np.ones(cols)
grad = gradient(X, y, theta0, 1)
def cost_(theta):
return cost(X, y, theta, 1)
eps = 10 ** -4
theta0_ = theta0
grad_num = np.zeros(grad.shape)
for i in range(grad.size):
theta0_[i] += eps
h = cost_(theta0_)
theta0_[i] -= 2 * eps
l = cost_(theta0_)
theta0_[i] += eps
grad_num[i] = (h - l) / (2 * eps)
np.testing.assert_almost_equal(grad, grad_num, decimal=4)
class C_TestLinearRegressoin(unittest.TestCase):
def setUp(self) -> None:
self.X = np.array(
[[0.7, 0.9],
[0.1, 0.7],
[0.2, 0.8],
[0.0, 0.1],
[0.5, 0.0],
[0.6, 0.6]]
)
self.y = np.array(
[7.5, 6.5, 6.8, 5.2, 5.5, 6.8]
)
self.intercept = 5.0
self.coefs = np.array([1.0, 2.0])
self.X_test = np.array([[0.8, 0.5], [0.3, 0.2], [0.9, 0.3], [0.4, 0.4]])
# without regularization
self.y_test = np.array([6.8, 5.7, 6.5, 6.2])
# with regularization
self.y_test_reg = {
1: np.array([6.54893014, 6.08570555, 6.41364697, 6.30108098]),
10: np.array([6.40968794, 6.33450745, 6.38725879, 6.36971714]),
}
self.coefs_reg = {
1: np.array([0.40046129, 0.87664647]),
10: np.array([0.06390273, 0.1440971]),
}
self.intercept_reg = {1: 5.790237870282883, 10: 6.286517209960804}
def test_010_regularized_intercept(self):
from hw4 import LinearRegression
lr = LinearRegression(1)
lr.fit(self.X, self.y)
if lr.intercept < self.intercept:
raise ValueError(
f"Check your implementation. Seems like your intercept is regularized. Think about how to remove it from regularization."
)
def test_020_GD_no_regularization_correct_fit(self):
from hw4 import LinearRegression
lr = LinearRegression(0)
lr.fit(self.X, self.y)
fit_coefs = lr.coefs
fit_intercept = lr.intercept
np.testing.assert_almost_equal(fit_coefs, self.coefs, decimal=4,
err_msg="Gradient seem to produce different results than expected If close, try adjusting the threshold for convergence.")
np.testing.assert_almost_equal(fit_intercept, self.intercept, decimal=4)
def test_021_GD_no_regularization_correct_predict(self):
from hw4 import LinearRegression
lr = LinearRegression(0)
lr.fit(self.X, self.y)
y_pred = lr.predict(self.X_test)
np.testing.assert_almost_equal(y_pred, self.y_test, decimal=4)
def test_030_regularization_1_correct_fit(self):
from hw4 import LinearRegression
lr = LinearRegression(1)
lr.fit(self.X, self.y)
fit_coefs = lr.coefs
fit_intercept = lr.intercept
np.testing.assert_almost_equal(fit_coefs, self.coefs_reg[1], decimal=4,
err_msg="Regularized Gradient seem to produce different results than expected. If close, try adjusting the threshold for convergence or check your gradient for errors.")
np.testing.assert_almost_equal(fit_intercept, self.intercept_reg[1], decimal=4)
def test_031_regularization_1_correct_prediction(self):
from hw4 import LinearRegression
lr = LinearRegression(1)
lr.fit(self.X, self.y)
y_pred = lr.predict(self.X_test)
np.testing.assert_almost_equal(y_pred, self.y_test_reg[1], decimal=4)
def test_040_regularization_10_correct_fit(self):
from hw4 import LinearRegression
lr = LinearRegression(10.0)
lr.fit(self.X, self.y)
fit_coefs = lr.coefs
fit_intercept = lr.intercept
np.testing.assert_almost_equal(fit_coefs, self.coefs_reg[10], decimal=4,
err_msg="Regularized Gradient seem to produce different results than expected. If close, try adjusting the threshold for convergence or check your gradient for errors.")
np.testing.assert_almost_equal(fit_intercept, self.intercept_reg[10], decimal=4)
def test_041_regularization_10_correct_prediction(self):
from hw4 import LinearRegression
lr = LinearRegression(10.0)
lr.fit(self.X, self.y)
y_pred = lr.predict(self.X_test)
np.testing.assert_almost_equal(y_pred, self.y_test_reg[10], decimal=4)
class D_TestVectorizedImplementation(unittest.TestCase):
def test_010_vectorized(self):
from hw4 import LinearRegression, cost, gradient
self.assertFalse(
uses_loop(cost), "Implementation of cost function is not vectorized."
)
self.assertFalse(
uses_loop(gradient), "Implementation of gradient is not vectorized."
)
self.assertFalse(
uses_loop(LinearRegression),
"Methods in LR class should not have loops.",
)
def test_020_runtime(self):
from hw4 import LinearRegression
rng = np.random.RandomState(0)
num_of_samples = 1_000
num_of_features = 500
y = rng.randn(num_of_samples)
X = rng.randn(num_of_samples, num_of_features)
timeout = 15
start = time.time()
lr = LinearRegression(0)
lr.fit(X, y)
end = time.time()
self.assertLess(end - start, timeout, "Time taken to fit the model is too long.")
if __name__ == "__main__":
unittest.main()
and this is the error i get:
FAIL: test_020_cost_grad_lambda_1 (__main__.B_TestCostFunction.test_020_cost_grad_lambda_1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\grabn\Mass storage\FAKS\Tretji letnik\UOZP\hw4-linear-regression-BGrabnar\test_hw4.py", line 105, in test_020_cost_grad_lambda_1
np.testing.assert_almost_equal(grad, grad_num, decimal=4)
File "C:\Users\grabn\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\grabn\AppData\Local\Programs\Python\Python311\Lib\site-packages\numpy\testing\_private\utils.py", line 588, in assert_almost_equal
return assert_array_almost_equal(actual, desired, decimal, err_msg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\grabn\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\grabn\AppData\Local\Programs\Python\Python311\Lib\site-packages\numpy\testing\_private\utils.py", line 1099, in assert_array_almost_equal
assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose,
File "C:\Users\grabn\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\grabn\AppData\Local\Programs\Python\Python311\Lib\site-packages\numpy\testing\_private\utils.py", line 862, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals
Mismatched elements: 1 / 5 (20%)
Max absolute difference: 0.1
Max relative difference: 0.09773273
x: array([0.9232, 1.278 , 1.4153, 0.8224, 0.2027])
y: array([1.0232, 1.278 , 1.4153, 0.8224, 0.2027])
|
f68f8a340841df122d70f01e87cbecad
|
{
"intermediate": 0.2740355432033539,
"beginner": 0.46847403049468994,
"expert": 0.25749051570892334
}
|
48,283
|
this is the test file i need to comply with: import unittest
import ast
import inspect
import time
import numpy as np
def uses_loop(function):
for node in ast.walk(ast.parse(inspect.getsource(function))):
if isinstance(node, ast.Name) and node.id == "map":
return True
elif isinstance(node, (ast.For, ast.While, ast.ListComp)):
return True
# Throw error also if NotImplementedError is raised
elif isinstance(node, ast.Raise):
return True
class A_TestOutputValues(unittest.TestCase):
def setUp(self) -> None:
rng = np.random.RandomState(0)
self.y = rng.randn(1)
self.X = rng.randn(1, 1)
def test_010_fit_return(self):
from hw4 import LinearRegression
lr = LinearRegression()
fit_return = lr.fit(self.X, self.y)
self.assertIsNone(fit_return, f"fit() method should return None, but got: {type(fit_return)}")
self.assertIsNotNone(lr.coefs, "fit() method should set self.coefs")
self.assertIsNotNone(lr.intercept, "fit() method should set self.intercept")
def test_020_predict_return(self):
from hw4 import LinearRegression
lr = LinearRegression()
lr.fit(self.X, self.y)
predict_return = lr.predict(self.X)
self.assertIsNotNone(predict_return, f"predict() method should return predicted values, but got: {type(predict_return)}")
self.assertEqual(len(predict_return), len(self.X), f"predict() method should return predictions of length {len(self.X)}, but got: {len(predict_return)}")
class B_TestCostFunction(unittest.TestCase):
def test_010_cost_grad_lambda_0(self):
from hw4 import gradient, cost
rng = np.random.RandomState(0)
y = rng.randn(10)
X = rng.randn(10, 5)
_, cols = X.shape
theta0 = np.ones(cols)
grad = gradient(X, y, theta0, 0)
def cost_(theta):
return cost(X, y, theta, 0)
eps = 10 ** -4
theta0_ = theta0
grad_num = np.zeros(grad.shape)
for i in range(grad.size):
theta0_[i] += eps
h = cost_(theta0_)
theta0_[i] -= 2 * eps
l = cost_(theta0_)
theta0_[i] += eps
grad_num[i] = (h - l) / (2 * eps)
np.testing.assert_almost_equal(grad, grad_num, decimal=4)
def test_020_cost_grad_lambda_1(self):
from hw4 import gradient, cost
rng = np.random.RandomState(0)
y = rng.randn(10)
X = rng.randn(10, 5)
_, cols = X.shape
theta0 = np.ones(cols)
grad = gradient(X, y, theta0, 1)
def cost_(theta):
return cost(X, y, theta, 1)
eps = 10 ** -4
theta0_ = theta0
grad_num = np.zeros(grad.shape)
for i in range(grad.size):
theta0_[i] += eps
h = cost_(theta0_)
theta0_[i] -= 2 * eps
l = cost_(theta0_)
theta0_[i] += eps
grad_num[i] = (h - l) / (2 * eps)
np.testing.assert_almost_equal(grad, grad_num, decimal=4)
class C_TestLinearRegressoin(unittest.TestCase):
def setUp(self) -> None:
self.X = np.array(
[[0.7, 0.9],
[0.1, 0.7],
[0.2, 0.8],
[0.0, 0.1],
[0.5, 0.0],
[0.6, 0.6]]
)
self.y = np.array(
[7.5, 6.5, 6.8, 5.2, 5.5, 6.8]
)
self.intercept = 5.0
self.coefs = np.array([1.0, 2.0])
self.X_test = np.array([[0.8, 0.5], [0.3, 0.2], [0.9, 0.3], [0.4, 0.4]])
# without regularization
self.y_test = np.array([6.8, 5.7, 6.5, 6.2])
# with regularization
self.y_test_reg = {
1: np.array([6.54893014, 6.08570555, 6.41364697, 6.30108098]),
10: np.array([6.40968794, 6.33450745, 6.38725879, 6.36971714]),
}
self.coefs_reg = {
1: np.array([0.40046129, 0.87664647]),
10: np.array([0.06390273, 0.1440971]),
}
self.intercept_reg = {1: 5.790237870282883, 10: 6.286517209960804}
def test_010_regularized_intercept(self):
from hw4 import LinearRegression
lr = LinearRegression(1)
lr.fit(self.X, self.y)
if lr.intercept < self.intercept:
raise ValueError(
f"Check your implementation. Seems like your intercept is regularized. Think about how to remove it from regularization."
)
def test_020_GD_no_regularization_correct_fit(self):
from hw4 import LinearRegression
lr = LinearRegression(0)
lr.fit(self.X, self.y)
fit_coefs = lr.coefs
fit_intercept = lr.intercept
np.testing.assert_almost_equal(fit_coefs, self.coefs, decimal=4,
err_msg="Gradient seem to produce different results than expected If close, try adjusting the threshold for convergence.")
np.testing.assert_almost_equal(fit_intercept, self.intercept, decimal=4)
def test_021_GD_no_regularization_correct_predict(self):
from hw4 import LinearRegression
lr = LinearRegression(0)
lr.fit(self.X, self.y)
y_pred = lr.predict(self.X_test)
np.testing.assert_almost_equal(y_pred, self.y_test, decimal=4)
def test_030_regularization_1_correct_fit(self):
from hw4 import LinearRegression
lr = LinearRegression(1)
lr.fit(self.X, self.y)
fit_coefs = lr.coefs
fit_intercept = lr.intercept
np.testing.assert_almost_equal(fit_coefs, self.coefs_reg[1], decimal=4,
err_msg="Regularized Gradient seem to produce different results than expected. If close, try adjusting the threshold for convergence or check your gradient for errors.")
np.testing.assert_almost_equal(fit_intercept, self.intercept_reg[1], decimal=4)
def test_031_regularization_1_correct_prediction(self):
from hw4 import LinearRegression
lr = LinearRegression(1)
lr.fit(self.X, self.y)
y_pred = lr.predict(self.X_test)
np.testing.assert_almost_equal(y_pred, self.y_test_reg[1], decimal=4)
def test_040_regularization_10_correct_fit(self):
from hw4 import LinearRegression
lr = LinearRegression(10.0)
lr.fit(self.X, self.y)
fit_coefs = lr.coefs
fit_intercept = lr.intercept
np.testing.assert_almost_equal(fit_coefs, self.coefs_reg[10], decimal=4,
err_msg="Regularized Gradient seem to produce different results than expected. If close, try adjusting the threshold for convergence or check your gradient for errors.")
np.testing.assert_almost_equal(fit_intercept, self.intercept_reg[10], decimal=4)
def test_041_regularization_10_correct_prediction(self):
from hw4 import LinearRegression
lr = LinearRegression(10.0)
lr.fit(self.X, self.y)
y_pred = lr.predict(self.X_test)
np.testing.assert_almost_equal(y_pred, self.y_test_reg[10], decimal=4)
class D_TestVectorizedImplementation(unittest.TestCase):
def test_010_vectorized(self):
from hw4 import LinearRegression, cost, gradient
self.assertFalse(
uses_loop(cost), "Implementation of cost function is not vectorized."
)
self.assertFalse(
uses_loop(gradient), "Implementation of gradient is not vectorized."
)
self.assertFalse(
uses_loop(LinearRegression),
"Methods in LR class should not have loops.",
)
def test_020_runtime(self):
from hw4 import LinearRegression
rng = np.random.RandomState(0)
num_of_samples = 1_000
num_of_features = 500
y = rng.randn(num_of_samples)
X = rng.randn(num_of_samples, num_of_features)
timeout = 15
start = time.time()
lr = LinearRegression(0)
lr.fit(X, y)
end = time.time()
self.assertLess(end - start, timeout, "Time taken to fit the model is too long.")
if __name__ == "__main__":
unittest.main() and this is the starting file i need to fill in:
import numpy as np
def cost(X, y, theta, l2_lambda):
""" A cost function.
Parameters
----------
X: Training data of shape (n_samples, n_features)
y: Target values of shape (n_samples,)
theta: Parameters of shape (n_features,)
l2_lambda: L2 regularization parameter
Returns
-------
The value of the cost function
"""
raise NotImplementedError
def gradient(X, y, theta, l2_lambda):
""" Gradient of cost function.
Parameters
----------
X: Training data (n_samples, n_features)
y: Target valuese (n_samples,)
theta: Parameters of shape (n_features,)
l2_lambda: L2 regularization parameter
Returns
-------
Gradient of shape (n_features,)
"""
raise NotImplementedError
def gradient_descent(
X,
y,
l2_lambda,
lr=0.01,
tol=1e-6,
max_iter=100_000):
""" Implementation of gradient descent.
Parameters
----------
X: Training data of shape (n_samples, n_features)
y: Target values of shape (n_samples,)
l2_lambda: L2 regularization parameter
lr: The learning rate.
tol: The stopping criterion (tolerance).
max_iter: The maximum number of passes (aka epochs).
Returns
-------
The parameters theta of shape (n_features,)
"""
raise NotImplementedError
class LinearRegression:
def __init__(self, l2_lambda = 0):
self.coefs = None
self.intercept = None
self.l2_lambda = l2_lambda
def fit(self, X, y):
"""
The fit method of LinearRegression accepts X and y
as input and save the coefficients of the linear model.
Parameters
----------
X: Training data of shape (n_samples, n_features)
y: Target values of shape (n_samples,)
Returns
-------
None
"""
raise NotImplementedError
def predict(self, X: np.ndarray) -> np.ndarray:
"""Predict using the linear model.
Parameters
----------
X: Test data of shape (n_samples, n_features)
Returns
-------
Returns predicted values of shape (n_samples,)
"""
raise NotImplementedError
|
725c6ddaffa574871ca55d6cf8786c09
|
{
"intermediate": 0.2814271152019501,
"beginner": 0.437794029712677,
"expert": 0.2807788848876953
}
|
48,284
|
Task 1: Basic understanding of the system
a) Obtain the unit step response of the given open-loop transfer function:
G(s) = (-0.0717s^3 + 1.684s^2 + 0.0853s + 0.0622) / (s^4 + 1.0604s^3 - 1.1154s^2 - 0.066s - 0.0512)
b) Find the poles and zeros of G(s) and check if it has any poles in the right-half plane (indicating instability).
Task 2: Making the system compatible for Bode plot based controller design
a) Use MATLAB to plot the Nyquist plot of G(s). From the Nyquist plot, determine a suitable value of Kf such that the effective inner loop transfer function G̃(s) = KfG(s)/(1+KfG(s)) is stable.
b) Plot the step response and pole-zero map of G̃(s) to verify its stability.
c) Check if the step response of G̃(s) meets any of the three design criteria given.
Task 3: Meeting the steady-state error criteria
a) Assuming C(s) = C1(s)C2(s), with C2(s) being a Type-0 system, find C1(s) such that the closed-loop system has zero steady-state error for a step input reference.
Task 4: Meeting the settling time and maximum overshoot criteria
a) Obtain the Bode plot of H̃(s) = -C1(s)G̃(s) and find its gain and phase margins.
b) Check if a simple proportional controller C2(s) = Kp can meet the settling time and overshoot specs.
c) If not, determine the structure of C2(s) using the Bode plot of H̃(s) such that it meets the specs with stability.
d) Tune the parameters of the selected C2(s) structure to meet the settling time (0.05 seconds) and maximum overshoot (20%) criteria.
Task 5: Simulate the closed-loop control system
a) Find the state-space model of the designed controller C(s) = -C1(s)C2(s).
b) Find the state-space model of the open-loop plant G(s) using tf2ss().
c) Combine the controller and plant models to get the closed-loop state-space model.
d) Write a MATLAB script simulator.m that simulates this closed-loop model for a given reference r(t) and disturbance ε(t) = A*sin(ωt) using ode45(). It should accept amplitude A, frequency ω, and initial conditions α(0), α_dot(0) as inputs and plot α(t).
Deliverables:
report.pdf - Document all steps, justifications, plots, derivations.
design_process.m - Relevant MATLAB codes for each task, well commented.
simulator.m - The simulation script as per Task 5(d) instructions.
|
290bce1ecd0f78671317699f6de94bfc
|
{
"intermediate": 0.3264450430870056,
"beginner": 0.24995459616184235,
"expert": 0.423600435256958
}
|
48,285
|
could you fix the following data point prediction code in python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
import sqlite3
import subprocess
# Connect to the SQLite database
conn = sqlite3.connect('curves.db')
cur = conn.cursor()
# Create table to store completed curves if not exists
cur.execute('''CREATE TABLE IF NOT EXISTS curves
(x REAL, y REAL)''')
conn.commit()
# Function to create and update the linear regression model
def update_model(x, y):
model = LinearRegression()
model.fit(x, y)
return model
# Function to plot the current line and the data points
def plot_line(x, y, model):
plt.scatter(x, y, color='blue')
plt.plot(x, model.predict(x), color='red')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Predictive Line')
plt.show()
# Function to add data points to the database
def add_to_database(x, y):
data = list(zip(x, y))
cur.executemany("INSERT INTO curves VALUES (?, ?)", data)
conn.commit()
print("Data points added to the database:", data) # Print added data points
# Function to retrieve completed curves from the database
def retrieve_curves():
cur.execute("SELECT * FROM curves")
rows = cur.fetchall()
x, y = zip(*rows)
return np.array(x).reshape(-1, 1), np.array(y)
# Main function
def main():
# Retrieve completed curves from the database
x_database, y_database = retrieve_curves()
# If there are completed curves, update the model with the existing data
if len(x_database) > 0:
model = update_model(x_database, y_database)
else:
model = None
# Start adding data points
while True:
if model is None:
# Call data_point.py to generate initial data point
x_new, y_new = subprocess.check_output(["python", "data_point.py"]).decode().strip().split(",")
x_new = float(x_new)
y_new = float(y_new)
x_database = np.array([[x_new]])
y_database = np.array([y_new])
model = LinearRegression()
model.fit(x_database, y_database)
plot_line(x_database, y_database, model)
add_to_database([x_new], [y_new])
else:
x_new, y_new = subprocess.check_output(["python", "data_point.py"]).decode().strip().split(",")
x_new = float(x_new)
y_new = float(y_new)
x_database = np.append(x_database, [[x_new]], axis=0)
y_database = np.append(y_database, y_new)
model = update_model(x_database, y_database)
plot_line(x_database, y_database, model)
add_to_database([x_new], [y_new])
# Close the connection to the database
conn.close()
if __name__ == "__main__":
main()
|
87338dea443774544289f7d9199d16a3
|
{
"intermediate": 0.4283621311187744,
"beginner": 0.40893805027008057,
"expert": 0.1626998484134674
}
|
48,286
|
Task 1: Basic understanding of the system
a) Obtain the unit step response of the given open-loop transfer function:
G(s) = (-0.0717s^3 + 1.684s^2 + 0.0853s + 0.0622) / (s^4 + 1.0604s^3 - 1.1154s^2 - 0.066s - 0.0512)
b) Find the poles and zeros of G(s) and check if it has any poles in the right-half plane (indicating instability).
Task 2: Making the system compatible for Bode plot based controller design
a) Use MATLAB to plot the Nyquist plot of G(s). From the Nyquist plot, determine a suitable value of Kf such that the effective inner loop transfer function G̃(s) = KfG(s)/(1+KfG(s)) is stable.
b) Plot the step response and pole-zero map of G̃(s) to verify its stability.
c) Check if the step response of G̃(s) meets any of the three design criteria given.
Task 3: Meeting the steady-state error criteria
a) Assuming C(s) = C1(s)C2(s), with C2(s) being a Type-0 system, find C1(s) such that the closed-loop system has zero steady-state error for a step input reference.
Task 4: Meeting the settling time and maximum overshoot criteria
a) Obtain the Bode plot of H̃(s) = -C1(s)G̃(s) and find its gain and phase margins.
b) Check if a simple proportional controller C2(s) = Kp can meet the settling time and overshoot specs.
c) If not, determine the structure of C2(s) using the Bode plot of H̃(s) such that it meets the specs with stability.
d) Tune the parameters of the selected C2(s) structure to meet the settling time (0.05 seconds) and maximum overshoot (20%) criteria.
Task 5: Simulate the closed-loop control system
a) Find the state-space model of the designed controller C(s) = -C1(s)C2(s).
b) Find the state-space model of the open-loop plant G(s) using tf2ss().
c) Combine the controller and plant models to get the closed-loop state-space model.
d) Write a MATLAB script simulator.m that simulates this closed-loop model for a given reference r(t) and disturbance ε(t) = A*sin(ωt) using ode45(). It should accept amplitude A, frequency ω, and initial conditions α(0), α_dot(0) as inputs and plot α(t).
Deliverables:
report.pdf - Document all steps, justifications, plots, derivations.
design_process.m - Relevant MATLAB codes for each task, well commented.
simulator.m - The simulation script as per Task 5(d) instructions.
|
c3dbd430e4cc5ac8afcfc75f6cc96c5f
|
{
"intermediate": 0.3264450430870056,
"beginner": 0.24995459616184235,
"expert": 0.423600435256958
}
|
48,287
|
What is the result of executing this code?
for (int lcv = 1; lcv <= 8; lcv)
{
}
System.out.println("lcv=" 1cv);
if (lcv == 8)
{
}
System.out.println("final value is 8");
else
{
}
System.out.println("final value is not 8");
The code compiles and displays nine lines of output with the last line of:
A. final value is not 8
B. final value is 8
C. The code contains compile time errors and no output is generated.
|
2da3987c3407d1abc7e3301a93a4b0a4
|
{
"intermediate": 0.33981242775917053,
"beginner": 0.46080467104911804,
"expert": 0.19938285648822784
}
|
48,288
|
A Person class is to be created that will include private instance
variables:
* first_name
* last_name
Which option correctly fills in the missing code?
A. Option 1
B. Option 3
Which of the following accessor methods fit this best?
C. Option 2
Option 1:
public double get_fname()
{
return first_name;
Option 2:
public String get_fname()
{
return first_name;
Option 3:
public void get_fname(String n)
{
return first_name;
|
9133f35f0b795eb2995a6409a0198e57
|
{
"intermediate": 0.3013378977775574,
"beginner": 0.5458221435546875,
"expert": 0.15283998847007751
}
|
48,289
|
Somehow, you did not realize your stack implementation uses Queue ADT. You did Queue using Stack as ADT (which internally uses Queue ADT). What is the complexity of Enqueue and Dequeue in this
Queue implementation?
|
78c10d2c81f658d4d91523d180892d11
|
{
"intermediate": 0.5532132387161255,
"beginner": 0.17598681151866913,
"expert": 0.2707999050617218
}
|
48,290
|
write me a github readme for application includes zookeeper, kafka broker, using tweets dataset, and data logging using flask
|
43dad46280d2223aaf60fea665a6ac14
|
{
"intermediate": 0.8624367713928223,
"beginner": 0.06534763425588608,
"expert": 0.07221561670303345
}
|
48,291
|
Show how AVL tree insertions will happen if I insert the following nodes in an empty AVL tree.
2, 12,4,1,7,8,5,3,6,11
|
d8e650772537452b8174cde2565fcab9
|
{
"intermediate": 0.28956761956214905,
"beginner": 0.1865370273590088,
"expert": 0.5238954424858093
}
|
48,292
|
in css is it possible to give a <p> a property which means the text will only display on mouseover
|
1bfb408026af1790cc0be81dab13e47f
|
{
"intermediate": 0.3752613365650177,
"beginner": 0.31199556589126587,
"expert": 0.3127431273460388
}
|