| row_id (int64, 0-48.4k) | init_message (string, 1-342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
13,562
|
Here is the Mediaplan CustomResourceDefinition:
# CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mediaplans.crd.idom.project
spec:
  group: crds.idom.project
  scope: Namespaced
  names:
    plural: mediaplans
    singular: mediaplan
    kind: Mediaplan
    shortNames:
      - mediap
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
              description: 'A unique version for the Kubernetes API'
            kind:
              type: string
              description: 'Type of resource'
            metadata:
              type: object
              properties:
                name:
                  type: string
                  description: 'Name of the resource'
              required:
                - name
            spec:
              type: object
              properties:
                # scheduling
                schedule:
                  type: string
                  description: 'Scheduling format for the mediaplan'
                # channel metadata
                channel:
                  type: object
                  properties:
                    name:
                      type: string
                      description: 'Name of the channel'
                    guid:
                      type: string
                      description: 'GUID of the channel'
                    id:
                      type: integer
                      description: 'ID of the channel'
                    link:
                      type: string
                      description: 'Link for the channel'
                messagingSettings:
                  type: object
                  properties:
                    # oneof: topics or files
                    # generation topics
                    topics:
                      type: array
                      items:
                        type: object
                        properties:
                          code:
                            type: string
                            description: 'Code for the topic'
                          weight:
                            type: integer
                            description: 'Generation weight for the topic'
                            default: 1
                    # or choose files from s3
                    files:
                      type: array
                      items:
                        type: object
                        properties:
                          uri:
                            type: string
                            description: 'URI for the file'
                # agent settings
                agentSettings:
                  type: object
                  properties:
                    # count
                    totalCount:
                      type: integer
                      description: 'Total number of agents for the mediaplan'
                      default: 10
                    # credential comparison
                    credentials:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the credential'
                          value:
                            type: string
                            description: 'Value for the credential'
                    # mimicry comparison
                    mimicry:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the mimicry option'
                          value:
                            type: string
                            description: 'Value for the mimicry option'
                # common mediaplan settings
                commonSettings:
                  type: object
                  properties:
                    # output pkg generation coefficient
                    # defines how many packages will be generated for one incoming message
                    generationCoefficient:
                      type: number
                      description: 'Output package generation coefficient'
                      default:
                    # total packages count
                    totalMessageCount:
                      type: integer
                      description: 'Total number of messages for the mediaplan'
                    # estimated time for different messaging options
                    estimatedLeadTime:
                      type: string
                      description: 'Estimated lead time for the mediaplan'
                    # disable mimicry options (for agents)
                    disableMimicryOptions:
                      type: boolean
                      description: 'Disables mimicry options for agents'
                    # use image for packages
                    didImageUsedForGeneration:
                      type: boolean
                      description: 'Indicates if an image was used for package generation'
please show a sample of a Python web application, based on FastAPI, Kubernetes, and OpenTelemetry with Jaeger support, with Pydantic models, which serves CRUD operations for the Mediaplan CustomResourceDefinition
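A minimal sketch of one building block such an application needs: mapping CRUD endpoints onto the Kubernetes custom-objects API paths for this CRD. The group, version, and plural come from the spec above; the namespace and resource names are illustrative, and the FastAPI routes, Pydantic models, and OpenTelemetry/Jaeger wiring are omitted here.

```python
from typing import Optional

# Group/version/plural taken from the CRD spec above.
GROUP = "crds.idom.project"
VERSION = "v1alpha1"
PLURAL = "mediaplans"

def mediaplan_path(namespace: str, name: Optional[str] = None) -> str:
    """API-server path for list/create (no name) or get/update/delete (with name)."""
    base = f"/apis/{GROUP}/{VERSION}/namespaces/{namespace}/{PLURAL}"
    return f"{base}/{name}" if name else base
```

In a real service, the Kubernetes Python client's CustomObjectsApi would issue requests against these paths.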
|
240c52e03076c32f2d0b09802ea613b2
|
{
"intermediate": 0.36629629135131836,
"beginner": 0.3955528140068054,
"expert": 0.23815090954303741
}
|
13,563
|
Here is the Mediaplan CustomResourceDefinition:
# CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mediaplans.crd.idom.project
spec:
  group: crds.idom.project
  scope: Namespaced
  names:
    plural: mediaplans
    singular: mediaplan
    kind: Mediaplan
    shortNames:
      - mediap
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
              description: 'A unique version for the Kubernetes API'
            kind:
              type: string
              description: 'Type of resource'
            metadata:
              type: object
              properties:
                name:
                  type: string
                  description: 'Name of the resource'
              required:
                - name
            spec:
              type: object
              properties:
                # scheduling
                schedule:
                  type: string
                  description: 'Scheduling format for the mediaplan'
                # channel metadata
                channel:
                  type: object
                  properties:
                    name:
                      type: string
                      description: 'Name of the channel'
                    guid:
                      type: string
                      description: 'GUID of the channel'
                    id:
                      type: integer
                      description: 'ID of the channel'
                    link:
                      type: string
                      description: 'Link for the channel'
                messagingSettings:
                  type: object
                  properties:
                    # oneof: topics or files
                    # generation topics
                    topics:
                      type: array
                      items:
                        type: object
                        properties:
                          code:
                            type: string
                            description: 'Code for the topic'
                          weight:
                            type: integer
                            description: 'Generation weight for the topic'
                            default: 1
                    # or choose files from s3
                    files:
                      type: array
                      items:
                        type: object
                        properties:
                          uri:
                            type: string
                            description: 'URI for the file'
                # agent settings
                agentSettings:
                  type: object
                  properties:
                    # count
                    totalCount:
                      type: integer
                      description: 'Total number of agents for the mediaplan'
                      default: 10
                    # credential comparison
                    credentials:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the credential'
                          value:
                            type: string
                            description: 'Value for the credential'
                    # mimicry comparison
                    mimicry:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the mimicry option'
                          value:
                            type: string
                            description: 'Value for the mimicry option'
                # common mediaplan settings
                commonSettings:
                  type: object
                  properties:
                    # output pkg generation coefficient
                    # defines how many packages will be generated for one incoming message
                    generationCoefficient:
                      type: number
                      description: 'Output package generation coefficient'
                      default:
                    # total packages count
                    totalMessageCount:
                      type: integer
                      description: 'Total number of messages for the mediaplan'
                    # estimated time for different messaging options
                    estimatedLeadTime:
                      type: string
                      description: 'Estimated lead time for the mediaplan'
                    # disable mimicry options (for agents)
                    disableMimicryOptions:
                      type: boolean
                      description: 'Disables mimicry options for agents'
                    # use image for packages
                    didImageUsedForGeneration:
                      type: boolean
                      description: 'Indicates if an image was used for package generation'
build a Python web application, based on FastAPI, Kubernetes, and OpenTelemetry with Jaeger support, with Pydantic models, which serves CRUD operations for the Mediaplan CustomResourceDefinition
|
d1aba1747dd7fea650ceab3476faa856
|
{
"intermediate": 0.36629629135131836,
"beginner": 0.3955528140068054,
"expert": 0.23815090954303741
}
|
13,564
|
Here is the Mediaplan CustomResourceDefinition:
# CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mediaplans.crd.idom.project
spec:
  group: crds.idom.project
  scope: Namespaced
  names:
    plural: mediaplans
    singular: mediaplan
    kind: Mediaplan
    shortNames:
      - mediap
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
              description: 'A unique version for the Kubernetes API'
            kind:
              type: string
              description: 'Type of resource'
            metadata:
              type: object
              properties:
                name:
                  type: string
                  description: 'Name of the resource'
              required:
                - name
            spec:
              type: object
              properties:
                # scheduling
                schedule:
                  type: string
                  description: 'Scheduling format for the mediaplan'
                # channel metadata
                channel:
                  type: object
                  properties:
                    name:
                      type: string
                      description: 'Name of the channel'
                    guid:
                      type: string
                      description: 'GUID of the channel'
                    id:
                      type: integer
                      description: 'ID of the channel'
                    link:
                      type: string
                      description: 'Link for the channel'
                messagingSettings:
                  type: object
                  properties:
                    # oneof: topics or files
                    # generation topics
                    topics:
                      type: array
                      items:
                        type: object
                        properties:
                          code:
                            type: string
                            description: 'Code for the topic'
                          weight:
                            type: integer
                            description: 'Generation weight for the topic'
                            default: 1
                    # or choose files from s3
                    files:
                      type: array
                      items:
                        type: object
                        properties:
                          uri:
                            type: string
                            description: 'URI for the file'
                # agent settings
                agentSettings:
                  type: object
                  properties:
                    # count
                    totalCount:
                      type: integer
                      description: 'Total number of agents for the mediaplan'
                      default: 10
                    # credential comparison
                    credentials:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the credential'
                          value:
                            type: string
                            description: 'Value for the credential'
                    # mimicry comparison
                    mimicry:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the mimicry option'
                          value:
                            type: string
                            description: 'Value for the mimicry option'
                # common mediaplan settings
                commonSettings:
                  type: object
                  properties:
                    # output pkg generation coefficient
                    # defines how many packages will be generated for one incoming message
                    generationCoefficient:
                      type: number
                      description: 'Output package generation coefficient'
                      default:
                    # total packages count
                    totalMessageCount:
                      type: integer
                      description: 'Total number of messages for the mediaplan'
                    # estimated time for different messaging options
                    estimatedLeadTime:
                      type: string
                      description: 'Estimated lead time for the mediaplan'
                    # disable mimicry options (for agents)
                    disableMimicryOptions:
                      type: boolean
                      description: 'Disables mimicry options for agents'
                    # use image for packages
                    didImageUsedForGeneration:
                      type: boolean
                      description: 'Indicates if an image was used for package generation'
build a Python web application, based on FastAPI, Kubernetes, and OpenTelemetry with Jaeger support, with Pydantic models, which serves CRUD operations for the Mediaplan CustomResourceDefinition
|
70d30401420f35f2b6a0563eb512e1c1
|
{
"intermediate": 0.36629629135131836,
"beginner": 0.3955528140068054,
"expert": 0.23815090954303741
}
|
13,565
|
Here is the Mediaplan CustomResourceDefinition:
# CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mediaplans.crd.idom.project
spec:
  group: crds.idom.project
  scope: Namespaced
  names:
    plural: mediaplans
    singular: mediaplan
    kind: Mediaplan
    shortNames:
      - mediap
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
              description: 'A unique version for the Kubernetes API'
            kind:
              type: string
              description: 'Type of resource'
            metadata:
              type: object
              properties:
                name:
                  type: string
                  description: 'Name of the resource'
              required:
                - name
            spec:
              type: object
              properties:
                # scheduling
                schedule:
                  type: string
                  description: 'Scheduling format for the mediaplan'
                # channel metadata
                channel:
                  type: object
                  properties:
                    name:
                      type: string
                      description: 'Name of the channel'
                    guid:
                      type: string
                      description: 'GUID of the channel'
                    id:
                      type: integer
                      description: 'ID of the channel'
                    link:
                      type: string
                      description: 'Link for the channel'
                messagingSettings:
                  type: object
                  properties:
                    # oneof: topics or files
                    # generation topics
                    topics:
                      type: array
                      items:
                        type: object
                        properties:
                          code:
                            type: string
                            description: 'Code for the topic'
                          weight:
                            type: integer
                            description: 'Generation weight for the topic'
                            default: 1
                    # or choose files from s3
                    files:
                      type: array
                      items:
                        type: object
                        properties:
                          uri:
                            type: string
                            description: 'URI for the file'
                # agent settings
                agentSettings:
                  type: object
                  properties:
                    # count
                    totalCount:
                      type: integer
                      description: 'Total number of agents for the mediaplan'
                      default: 10
                    # credential comparison
                    credentials:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the credential'
                          value:
                            type: string
                            description: 'Value for the credential'
                    # mimicry comparison
                    mimicry:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the mimicry option'
                          value:
                            type: string
                            description: 'Value for the mimicry option'
                # common mediaplan settings
                commonSettings:
                  type: object
                  properties:
                    # output pkg generation coefficient
                    # defines how many packages will be generated for one incoming message
                    generationCoefficient:
                      type: number
                      description: 'Output package generation coefficient'
                      default:
                    # total packages count
                    totalMessageCount:
                      type: integer
                      description: 'Total number of messages for the mediaplan'
                    # estimated time for different messaging options
                    estimatedLeadTime:
                      type: string
                      description: 'Estimated lead time for the mediaplan'
                    # disable mimicry options (for agents)
                    disableMimicryOptions:
                      type: boolean
                      description: 'Disables mimicry options for agents'
                    # use image for packages
                    didImageUsedForGeneration:
                      type: boolean
                      description: 'Indicates if an image was used for package generation'
build a Python web application that serves CRUD operations for the Mediaplan CustomResourceDefinition. The application must be based on FastAPI with Pydantic, Kubernetes, and OpenTelemetry with Jaeger support.
|
a62e2859add2756346be305e2fae0092
|
{
"intermediate": 0.36629629135131836,
"beginner": 0.3955528140068054,
"expert": 0.23815090954303741
}
|
13,566
|
Here is the Mediaplan CustomResourceDefinition:
# CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mediaplans.crd.idom.project
spec:
  group: crds.idom.project
  scope: Namespaced
  names:
    plural: mediaplans
    singular: mediaplan
    kind: Mediaplan
    shortNames:
      - mediap
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
              description: 'A unique version for the Kubernetes API'
            kind:
              type: string
              description: 'Type of resource'
            metadata:
              type: object
              properties:
                name:
                  type: string
                  description: 'Name of the resource'
              required:
                - name
            spec:
              type: object
              properties:
                # scheduling
                schedule:
                  type: string
                  description: 'Scheduling format for the mediaplan'
                # channel metadata
                channel:
                  type: object
                  properties:
                    name:
                      type: string
                      description: 'Name of the channel'
                    guid:
                      type: string
                      description: 'GUID of the channel'
                    id:
                      type: integer
                      description: 'ID of the channel'
                    link:
                      type: string
                      description: 'Link for the channel'
                messagingSettings:
                  type: object
                  properties:
                    # oneof: topics or files
                    # generation topics
                    topics:
                      type: array
                      items:
                        type: object
                        properties:
                          code:
                            type: string
                            description: 'Code for the topic'
                          weight:
                            type: integer
                            description: 'Generation weight for the topic'
                            default: 1
                    # or choose files from s3
                    files:
                      type: array
                      items:
                        type: object
                        properties:
                          uri:
                            type: string
                            description: 'URI for the file'
                # agent settings
                agentSettings:
                  type: object
                  properties:
                    # count
                    totalCount:
                      type: integer
                      description: 'Total number of agents for the mediaplan'
                      default: 10
                    # credential comparison
                    credentials:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the credential'
                          value:
                            type: string
                            description: 'Value for the credential'
                    # mimicry comparison
                    mimicry:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                            description: 'Key for the mimicry option'
                          value:
                            type: string
                            description: 'Value for the mimicry option'
                # common mediaplan settings
                commonSettings:
                  type: object
                  properties:
                    # output pkg generation coefficient
                    # defines how many packages will be generated for one incoming message
                    generationCoefficient:
                      type: number
                      description: 'Output package generation coefficient'
                      default:
                    # total packages count
                    totalMessageCount:
                      type: integer
                      description: 'Total number of messages for the mediaplan'
                    # estimated time for different messaging options
                    estimatedLeadTime:
                      type: string
                      description: 'Estimated lead time for the mediaplan'
                    # disable mimicry options (for agents)
                    disableMimicryOptions:
                      type: boolean
                      description: 'Disables mimicry options for agents'
                    # use image for packages
                    didImageUsedForGeneration:
                      type: boolean
                      description: 'Indicates if an image was used for package generation'
create a Pydantic model of this CustomResourceDefinition
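The shape such a model would take can be sketched with stdlib dataclasses mirroring the CRD's nested spec; in the actual answer each class would subclass `pydantic.BaseModel` instead, with the same fields and defaults. Only a representative subset of the spec is modeled here.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Stdlib stand-in for the Pydantic model; each @dataclass below would be
# a pydantic.BaseModel subclass in the real application.

@dataclass
class Channel:
    name: Optional[str] = None
    guid: Optional[str] = None
    id: Optional[int] = None
    link: Optional[str] = None

@dataclass
class Topic:
    code: Optional[str] = None
    weight: int = 1  # default taken from the CRD schema

@dataclass
class AgentSettings:
    totalCount: int = 10  # default taken from the CRD schema
    credentials: List[dict] = field(default_factory=list)
    mimicry: List[dict] = field(default_factory=list)

@dataclass
class MediaplanSpec:
    schedule: Optional[str] = None
    channel: Optional[Channel] = None
    topics: List[Topic] = field(default_factory=list)
    agentSettings: AgentSettings = field(default_factory=AgentSettings)
```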
|
6b98a78b60a5c910a62b3d1f725a43c5
|
{
"intermediate": 0.3648686110973358,
"beginner": 0.40044134855270386,
"expert": 0.23469001054763794
}
|
13,567
|
I used your signal_generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# Check if both analyses are the same
    if ema_analysis == candle_analysis and ema_analysis != '':
return ema_analysis
# If no signal is found, return an empty string
return ''
But it is giving me wrong signals. Please, can you add a good MACD strategy to my code?
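A dependency-free sketch of a MACD crossover check that could be merged into `signal_generator`. The standard 12/26/9 spans are assumed; in the pandas version, `df['Close'].ewm(span=..., adjust=False).mean()` would replace the hand-rolled `ema` below.

```python
def ema(values, span):
    """Exponential moving average with smoothing alpha = 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd_signal(closes, fast=12, slow=26, signal=9):
    """'buy' when MACD crosses above its signal line, 'sell' on a cross below, else ''."""
    macd = [f - s for f, s in zip(ema(closes, fast), ema(closes, slow))]
    sig = ema(macd, signal)
    prev, curr = macd[-2] - sig[-2], macd[-1] - sig[-1]
    if prev <= 0 < curr:
        return 'buy'
    if prev >= 0 > curr:
        return 'sell'
    return ''
```

The MACD vote could then be combined with `ema_analysis` and `candle_analysis`, e.g. only returning a signal when at least two of the three agree.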
|
0ab837007269cc148c89191cd681fbc6
|
{
"intermediate": 0.31000545620918274,
"beginner": 0.36984437704086304,
"expert": 0.3201501667499542
}
|
13,568
|
yo
|
3f3d72f968381b3e40cfd8048709a943
|
{
"intermediate": 0.34756919741630554,
"beginner": 0.2755926549434662,
"expert": 0.37683817744255066
}
|
13,569
|
grid-template-rows: repeat(2, minmax(1fr, auto))
|
4863ea59832193b54991dd53e4dc2b8a
|
{
"intermediate": 0.30153688788414,
"beginner": 0.30919599533081055,
"expert": 0.38926711678504944
}
|
13,570
|
try:
    import shodan
except ImportError:
    print "Shodan library not found. Please install it prior to running script"
SHODAN_API_KEY = "wkZgVrHufey0ZvcE27V0I0UmfiihAYHs"
api = shodan.Shodan(SHODAN_API_KEY)
try:
results = api.search('MongoDB')
except shodan.APIError, error:
print 'Error: {0}'.format(error)
fix my code
|
b4dd3bbfc1a551a9949d1499077f0e09
|
{
"intermediate": 0.5491535663604736,
"beginner": 0.20923028886318207,
"expert": 0.2416161745786667
}
|
13,571
|
I want to create a 9-step conversation in Python-Telegram-Bot. Please give the code
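The control flow of such a conversation can be sketched as a tiny state machine before reaching for the library; python-telegram-bot's `ConversationHandler` works the same way, with each step's handler returning the next state (or `ConversationHandler.END`). The step count here matches the request; the state numbering is an assumption.

```python
# Minimal state bookkeeping for a 9-step conversation: each handler
# returns the next state, and END finishes the conversation.
END = -1
NUM_STEPS = 9  # states 0..8, one per question

def next_state(current: int) -> int:
    """Advance one step; return END after the ninth step."""
    return current + 1 if current + 1 < NUM_STEPS else END
```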
|
fd4e9d1ed8394c58e5931105a1bc742b
|
{
"intermediate": 0.38308537006378174,
"beginner": 0.2370709776878357,
"expert": 0.37984368205070496
}
|
13,572
|
hi
|
83cd387c8bd5bab22cff419eefe60237
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
13,573
|
What is the VBA code to perform word wrap?
|
7daec11c44c094a58a4530fa6a5d6d7d
|
{
"intermediate": 0.3093564808368683,
"beginner": 0.3314574658870697,
"expert": 0.3591860234737396
}
|
13,574
|
On this screen I made a dock at the bottom; the code is as follows: 'import React, { useRef } from "react";
import {
StyleSheet,
Text,
View,
FlatList,
TouchableOpacity,
Image,
ScrollView,
Animated,
} from "react-native";
import TitleMainScreen from "../components/TitleMainScreen";
import { useFonts } from "expo-font";
import ActivitiesData from "../assets/data/ActivitiesData";
import Activities from "../components/Activities";
import Posts from "../components/Posts";
import { Ionicons, MaterialCommunityIcons } from "@expo/vector-icons";
import PostsData from "../assets/data/PostsData";
interface MeetMateMainScreenProps {
navigation: any;
}
const MeetMateMainScreen: React.FC<MeetMateMainScreenProps> = ({
navigation,
}) => {
const [fontsLoaded] = useFonts({
"Alatsi-Regular": require("../assets/fonts/Alatsi-Regular.ttf"),
"Alata-Regular": require("../assets/fonts/Alata-Regular.ttf"),
});
const scrollY = useRef(new Animated.Value(0)).current;
if (!fontsLoaded) {
return null;
}
interface RenderActivitiesProps {
item: any;
index: number;
}
const RenderActivities: React.FC<RenderActivitiesProps> = ({
item,
index,
}) => {
return <Activities item={item} index={index} />;
};
return (
<View style={{ flex: 1 }}>
<TitleMainScreen navigation={navigation} />
<ScrollView
showsVerticalScrollIndicator={false}
>
<Text
style={{
fontSize: 20,
fontFamily: "Alata-Regular",
marginLeft: 20,
}}
>
Activities
</Text>
<FlatList
data={ActivitiesData}
renderItem={({ item, index }) => (
<RenderActivities item={item} index={index} />
)}
keyExtractor={(item) => item.id}
horizontal={true}
showsHorizontalScrollIndicator={false}
/>
<FlatList
data={PostsData}
renderItem={({ item, index }) => (
<Posts item={item} index={index} navigation={navigation} />
)}
keyExtractor={(item) => item.id}
showsVerticalScrollIndicator={false}
/>
</ScrollView>
<View
style={{
position: "absolute",
bottom: 30,
left: 0,
right: 0,
height: 60,
marginHorizontal: 20,
borderRadius: 30,
backgroundColor: "black",
shadowColor: "#000",
shadowOffset: {
width: 0,
height: 0,
},
shadowOpacity: 0.5,
shadowRadius: 3.84,
elevation: 5,
}}
>
<View
style={{
flexDirection: "row",
justifyContent: "space-between",
alignItems: "center",
marginHorizontal: 20,
}}
>
<Ionicons name={"home"} size={30} color={"white"} />
<Ionicons name={"search"} size={30} color={"white"} />
<Ionicons name={"add-circle-outline"} size={50} color={"white"} />
<MaterialCommunityIcons
name={"account-group-outline"}
size={30}
color={"white"}
/>
<Image
source={require("../assets/images/postimage1.jpeg")}
resizeMode="cover"
style={{ height: 30, width: 30, borderRadius: 30 }}
/>
</View>
</View>
</View>
);
};
export default MeetMateMainScreen;
const styles = StyleSheet.create({});
' The dock part is this: '<View
style={{
position: "absolute",
bottom: 30,
left: 0,
right: 0,
height: 60,
marginHorizontal: 20,
borderRadius: 30,
backgroundColor: "black",
shadowColor: "#000",
shadowOffset: {
width: 0,
height: 0,
},
shadowOpacity: 0.5,
shadowRadius: 3.84,
elevation: 5,
}}
>
<View
style={{
flexDirection: "row",
justifyContent: "space-between",
alignItems: "center",
marginHorizontal: 20,
}}
>
<Ionicons name={"home"} size={30} color={"white"} />
<Ionicons name={"search"} size={30} color={"white"} />
<Ionicons name={"add-circle-outline"} size={50} color={"white"} />
<MaterialCommunityIcons
name={"account-group-outline"}
size={30}
color={"white"}
/>
<Image
source={require("../assets/images/postimage1.jpeg")}
resizeMode="cover"
style={{ height: 30, width: 30, borderRadius: 30 }}
/>
</View>
</View>' I want this dock to be hidden while the screen is being scrolled.
|
e2971c3fdc22e9e74a2d7666f53c7158
|
{
"intermediate": 0.47305718064308167,
"beginner": 0.3937141001224518,
"expert": 0.13322870433330536
}
|
13,575
|
write me a python function for Root Mean Squared Log Error
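A minimal stdlib sketch for the RMSLE request above; `log1p` handles zero targets, and `sklearn.metrics.mean_squared_log_error` is the usual library equivalent.

```python
import math

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error: sqrt(mean((log1p(pred) - log1p(true))^2))."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    squared_log_errors = [
        (math.log1p(p) - math.log1p(t)) ** 2 for t, p in zip(y_true, y_pred)
    ]
    return math.sqrt(sum(squared_log_errors) / len(squared_log_errors))
```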
|
e539af046419728018309642c26550f8
|
{
"intermediate": 0.3924665153026581,
"beginner": 0.2614913284778595,
"expert": 0.3460421860218048
}
|
13,576
|
Which version of Elasticsearch can Liferay 7.1 run?
|
d13bc9be4f339efd3a14b5d5deba415b
|
{
"intermediate": 0.33728131651878357,
"beginner": 0.1694096475839615,
"expert": 0.4933090806007385
}
|
13,577
|
How to start a specific Minecraft Java 1.18.1 client in online mode through Python
|
45103a3d1e8835f14eb515cc3d5b97ca
|
{
"intermediate": 0.4998253583908081,
"beginner": 0.26772284507751465,
"expert": 0.23245179653167725
}
|
13,578
|
hey
|
ed97706a6f5d4d4e6c27c1426af185a6
|
{
"intermediate": 0.33180856704711914,
"beginner": 0.2916048467159271,
"expert": 0.3765866458415985
}
|
13,579
|
make a python script that can ask questions to you
|
2472daceb872c75ffd583548215812e7
|
{
"intermediate": 0.3632524311542511,
"beginner": 0.26433125138282776,
"expert": 0.3724163770675659
}
|
13,580
|
hello gpt 3.5 how're you?
|
a8595e125385b620c1e29a513d9ac8f2
|
{
"intermediate": 0.3753810524940491,
"beginner": 0.3272165060043335,
"expert": 0.2974023222923279
}
|
13,581
|
Write a trigger for MS SQL that will save, in a separate table, a history of events (insert, update, delete) related to preparing an aircraft for a flight: the date and time the preparation was carried out, the aircraft, exactly which preparation was performed, and the persons responsible for the preparation. Here are my tables:
1. Table "Employees":
- employee_id
- first_name
- last_name
- birth_date
- position
- department
2. Table "Aircraft":
- aircraft_id
- aircraft_type
- aircraft_status
3. Table "Crews":
- crew_id
- employee_id
- aircraft_id
4. Table "Schedule":
- flight_id
- aircraft_id
- flight_code
- flight_day
- departure_time
- arrival_time
- departure_city
- arrival_city
- distance
- price
- tickets_sold
5. Table "Maintenance":
- maintenance_id
- aircraft_id
- maintenance_date
- required_work
6. Table "Flight_preparation":
- preparation_id
- aircraft_id
- preparation_date
- completed_work
- food_supply
|
7a277131183751f499685289e7277a6a
|
{
"intermediate": 0.2824048697948456,
"beginner": 0.44483745098114014,
"expert": 0.27275770902633667
}
|
13,582
|
How to get the ID of a task that succeeded in a Zuora workflow?
|
aac681ab9c18b35eb96afb7d521ab507
|
{
"intermediate": 0.4854467213153839,
"beginner": 0.13763417303562164,
"expert": 0.376919150352478
}
|
13,583
|
I used your signal_generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# Check if both analyses are the same
if ema_analysis == candle_analysis and ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If no signal is found, check for EMA only
if ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If EMA analysis is empty, check for candlestick analysis only
if candle_analysis != '':
if candle_analysis == 'buy':
return 'buy'
elif candle_analysis == 'sell':
return 'sell'
# If no signal is found, return an empty string
return ''
But every time it gives me a buy signal; what do I need to change in my code?
|
fdd496e8b5f8dfe765249340d56970d8
|
{
"intermediate": 0.31000545620918274,
"beginner": 0.36984437704086304,
"expert": 0.3201501667499542
}
|
13,584
|
I used this signal_generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# Check if both analyses are the same
if ema_analysis == candle_analysis and ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If no signal is found, check for EMA only
if ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If EMA analysis is empty, check for candlestick analysis only
if candle_analysis != '':
if candle_analysis == 'buy':
return 'buy'
elif candle_analysis == 'sell':
return 'sell'
# If no signal is found, return an empty string
return ''
But it is giving me only a buy signal every minute, which I think is an error; what do I need to change in my code?
|
eaed47a7b44d0f6e83d6d4463a2cda8e
|
{
"intermediate": 0.3069511353969574,
"beginner": 0.37201905250549316,
"expert": 0.3210298717021942
}
|
13,585
|
Could I mask a Google Form as an HTML registration form, using a datepicker to choose a date and hour for an exercise?
|
680c7386bc15be3bb89b9dc700ad9ff7
|
{
"intermediate": 0.44830241799354553,
"beginner": 0.18278056383132935,
"expert": 0.36891698837280273
}
|
13,586
|
I am running XGBoost and got the following error: ValueError: Invalid classes inferred from unique values of `y`. Expected: [0 1 2 3], got [1 3 4 5]. How do I fix it?
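XGBoost's scikit-learn classifier expects class labels to be consecutive integers 0..n-1, which is exactly what this ValueError reports. One fix is to remap the labels before fitting; a minimal pure-Python sketch follows (scikit-learn's `LabelEncoder` does the same job):

```python
def remap_labels(y):
    """Map each distinct label to a consecutive integer starting at 0."""
    classes = sorted(set(y))
    to_index = {label: i for i, label in enumerate(classes)}
    encoded = [to_index[label] for label in y]
    # Keep the reverse mapping so predictions can be decoded back later.
    index_to_label = {i: label for label, i in to_index.items()}
    return encoded, index_to_label

encoded, decode = remap_labels([1, 3, 4, 5, 3, 1])
print(encoded)     # [0, 1, 2, 3, 1, 0]
print(decode[2])   # 4
```

After predicting, map each predicted index back through `decode` to recover the original label values.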
|
406234965144ac8a9a259340858d70df
|
{
"intermediate": 0.469371497631073,
"beginner": 0.22148440778255463,
"expert": 0.3091440796852112
}
|
13,587
|
I want to create a 9-step conversation in Python-Telegram-Bot. In the first step, a photo is taken.
|
bc8e4509350850b57262cd1744ce73c2
|
{
"intermediate": 0.38078024983406067,
"beginner": 0.284291535615921,
"expert": 0.3349282443523407
}
|
13,588
|
Code me AI NPC chat bots for a video game
|
9a910eb3473e720097f3dfe10dcae077
|
{
"intermediate": 0.2224229872226715,
"beginner": 0.40516233444213867,
"expert": 0.3724147081375122
}
|
13,589
|
Write a trigger that will save, in a separate table, a history of the events (insert, update, delete) related to preparing an aircraft for a flight, namely: the date and time the preparation was performed, the aircraft, the specific technical and servicing preparation, and the persons responsible for the preparation. Here are the tables:
1. Workers:
id,
full_name,
position,
department
2.Planes:
id,
plane_type,
status
3.Crews:
id,
pilot_id,
technician_id,
staff_id,
flight_id
4.Flights:
id,
plane_id,
flight_number,
departure_date,
departure_time,
arrival_time,
departure_location,
arrival_location,
distance,
ticket_price,
sold_tickets
5.Flight_Preparation:
id,
plane_id,
preparation_datetime,
technical_preparation,
servicing_preparation,
responsible_workers_id
|
72ad97c2a207dd080a3afea33873c55b
|
{
"intermediate": 0.3213450014591217,
"beginner": 0.46050789952278137,
"expert": 0.21814709901809692
}
|
13,590
|
I used this signal_generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# Check if both analyses are the same
if ema_analysis == candle_analysis and ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If no signal is found, check for EMA only
if ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If EMA analysis is empty, check for candlestick analysis only
if candle_analysis != '':
if candle_analysis == 'buy':
return 'buy'
elif candle_analysis == 'sell':
return 'sell'
# If no signal is found, return an empty string
return ''
But it returns a buy signal every time; can you change this code and give me the right code?
|
418c48152c99ff43408479678de47a32
|
{
"intermediate": 0.3069511353969574,
"beginner": 0.37201905250549316,
"expert": 0.3210298717021942
}
|
13,591
|
I used this signal_generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# Check if both analyses are the same
if ema_analysis == candle_analysis and ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If no signal is found, check for EMA only
if ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If EMA analysis is empty, check for candlestick analysis only
if candle_analysis != '':
if candle_analysis == 'buy':
return 'buy'
elif candle_analysis == 'sell':
return 'sell'
# If no signal is found, return an empty string
return ''
But it returns a buy signal every time; to solve this problem, please change these lines: # Check if both analyses are the same
if ema_analysis == candle_analysis and ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If no signal is found, check for EMA only
if ema_analysis != '':
if ema_analysis == 'buy':
return 'buy'
elif ema_analysis == 'sell':
return 'sell'
# If EMA analysis is empty, check for candlestick analysis only
if candle_analysis != '':
if candle_analysis == 'buy':
return 'buy'
elif candle_analysis == 'sell':
return 'sell'
# If no signal is found, return an empty string
return ''
|
f75fc3319080f836e3b6f6941f9af19a
|
{
"intermediate": 0.3069511353969574,
"beginner": 0.37201905250549316,
"expert": 0.3210298717021942
}
|
13,592
|
how to tell cargo to optimize for size
|
c1db3e0df76bc432ce7367b3a189036b
|
{
"intermediate": 0.2706149220466614,
"beginner": 0.1678028106689453,
"expert": 0.5615822076797485
}
|
13,593
|
I used this signal_generator code:
def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# Check if both analyses are the same and not empty
if (candle_analysis == ema_analysis) and (candle_analysis != ''):
return candle_analysis
# If no agreement found, check for EMA only
elif (ema_analysis != ''):
return ema_analysis
# If no signal is found, check for candlestick analysis only
elif (candle_analysis != ''):
return candle_analysis
# If no signal is found, return an empty string
return ''
But it returns a buy signal every time; please solve this problem.
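One reason this version leans toward 'buy': when the candlestick check matches no pattern, candle_analysis stays '' and the `elif (ema_analysis != '')` branch returns the EMA verdict alone, and the loose EMA conditions (price above the lower of two EMAs) hold most of the time in an uptrend. A stricter resolver that only trades when both analyses agree is one possible fix; this is a sketch, and it deliberately changes the strategy's behavior:

```python
def resolve_signal(ema_analysis, candle_analysis):
    """Return a trade signal only when EMA and candlestick analyses agree.

    Any disagreement, or a missing signal on either side, yields '' (no
    trade), which stops the loose EMA conditions from dominating the output.
    """
    if ema_analysis == candle_analysis and ema_analysis in ('buy', 'sell'):
        return ema_analysis
    return ''

print(resolve_signal('buy', 'buy'))    # buy
print(resolve_signal('buy', ''))       # (empty: no confirmation, no trade)
print(resolve_signal('buy', 'sell'))   # (empty: conflict)
```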
|
49d479d1ac82319038a818312103ee2d
|
{
"intermediate": 0.3177639842033386,
"beginner": 0.34850090742111206,
"expert": 0.3337351083755493
}
|
13,594
|
I used your signal_generator code, but it doesn't return anything; please fix it. Code of signal_generator: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# If there is agreement in both signals
if candle_analysis == ema_analysis and ema_analysis != '':
return ema_analysis
# If there is no agreement in the signals
elif candle_analysis != ema_analysis:
if candle_analysis == 'buy' and ema_analysis == 'sell':
return ''
elif candle_analysis == 'sell' and ema_analysis == 'buy':
return ''
else:
return candle_analysis
# If no signal is found, return a neutral signal
return ''
|
c6fe6a30611bae8f70dded4b2a36f361
|
{
"intermediate": 0.3960312604904175,
"beginner": 0.3436223268508911,
"expert": 0.260346382856369
}
|
13,595
|
<link rel="stylesheet" href="styles.css">
|
88d2ec6eb8ebe80fa3496d9cc2c6aeda
|
{
"intermediate": 0.36561331152915955,
"beginner": 0.25904569029808044,
"expert": 0.3753410279750824
}
|
13,596
|
Write me a C# script for moving and jumping in my new 2D game
|
af35174f3483eea9c76b6bf5f22866ae
|
{
"intermediate": 0.44945716857910156,
"beginner": 0.3719117045402527,
"expert": 0.17863112688064575
}
|
13,597
|
Create HTML5 code for a personal page where I display content as a web developer called Glitch; make the code professional, not simplistic.
|
d93a76e7af5443e63ed1a3c6853f102d
|
{
"intermediate": 0.3487488627433777,
"beginner": 0.353611558675766,
"expert": 0.2976396083831787
}
|
13,598
|
how do I implement remember-me functionality when my backend sends an access token and a refresh token
|
5aa336c803c5df126770734545c1f816
|
{
"intermediate": 0.4619596004486084,
"beginner": 0.22016429901123047,
"expert": 0.31787610054016113
}
|
13,599
|
Write a program in Python that takes the last 10 sports news from the yjc.ir website and stores it in the mongodb database. Then, every 8 hours, it receives the last 10 news of the site and saves it in the database. If the messages are duplicates, do not save them.
Please explain the steps in order.
|
6342b8961d41724c4002eb267de61718
|
{
"intermediate": 0.4355996549129486,
"beginner": 0.19513896107673645,
"expert": 0.36926138401031494
}
|
13,600
|
Write a program in Python that takes the last 10 sports news from the yjc.ir website and stores it in the mongodb database. Then, every 8 hours, it receives the last 10 news of the site and saves it in the database. If the messages are duplicates, do not save them.
Please explain the steps in order.
This program stores the content of each news page in the database.
For example, links to photos and videos, news text, news URL, news date, etc.
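A real implementation of the above would scrape with requests/BeautifulSoup and store with pymongo, using a unique index on the news URL so duplicates are rejected. The deduplication and the 8-hour loop can be sketched with the standard library alone; the fetch function, field names, and the dict standing in for a MongoDB collection below are illustrative assumptions, not yjc.ir's real structure:

```python
import time

def save_news(db, items):
    """Insert news items into `db` keyed by URL; skip duplicates.

    `db` stands in for a MongoDB collection with a unique index on 'url'
    (in pymongo: collection.create_index('url', unique=True)).
    """
    inserted = 0
    for item in items:
        if item['url'] not in db:      # duplicate check by unique URL
            db[item['url']] = item
            inserted += 1
    return inserted

def run_forever(fetch_latest, db, interval_hours=8):
    """Fetch the latest 10 items, save only the new ones, sleep 8 hours."""
    while True:
        save_news(db, fetch_latest())
        time.sleep(interval_hours * 3600)

# Example: the second batch repeats one URL, so only one new item is stored.
db = {}
save_news(db, [{'url': '/a', 'title': 'first'}])
added = save_news(db, [{'url': '/a', 'title': 'first'},
                       {'url': '/b', 'title': 'second'}])
print(added)   # 1
```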
|
ea3216002b450f27233eb5acdb1b0a36
|
{
"intermediate": 0.4828522205352783,
"beginner": 0.16304980218410492,
"expert": 0.35409796237945557
}
|
13,601
|
Use SDL2 to get the left analog stick input and convert it into is_left, is_right, is_up, is_down
|
ed1c10e4b3f520453730489c10d9032c
|
{
"intermediate": 0.1724146455526352,
"beginner": 0.6472118496894836,
"expert": 0.18037354946136475
}
|
13,602
|
write me a guide for learning Python
|
9c053942164ee5d621a9ee8e719a67df
|
{
"intermediate": 0.26438120007514954,
"beginner": 0.4708479046821594,
"expert": 0.26477089524269104
}
|
13,603
|
How to make fake code that looks legit?
|
06a5036a423b195f07755357f28031f1
|
{
"intermediate": 0.20195461809635162,
"beginner": 0.3372631371021271,
"expert": 0.4607822895050049
}
|
13,604
|
I used your signal generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# If there is agreement in both signals
if candle_analysis == ema_analysis and ema_analysis != '':
return ema_analysis
# If there is no agreement in the signals
elif candle_analysis != ema_analysis:
if candle_analysis == 'buy' and ema_analysis == 'sell':
return ''
elif candle_analysis == 'sell' and ema_analysis == 'buy':
return ''
else:
return candle_analysis
# If no signal is found, return a neutral signal
return ''
df = get_klines(symbol, '1m', 44640)
But it gave me the wrong signal; what do I need to change in my code?
|
d99ff0e0b5215de0a686ef67b8332206
|
{
"intermediate": 0.33626747131347656,
"beginner": 0.31106841564178467,
"expert": 0.35266414284706116
}
|
13,605
|
How does this signal generator code work: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
# Calculate EMA and MA lines
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (open<close and
previous_open>previous_close and
close>previous_open and
open<=previous_close):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
elif (open>close and
previous_open<previous_close and
close<previous_open and
open>=previous_close):
candle_analysis = 'sell'
# If there is agreement in both signals
if candle_analysis == ema_analysis and ema_analysis != '':
return ema_analysis
# If there is no agreement in the signals
elif candle_analysis != ema_analysis:
if candle_analysis == 'buy' and ema_analysis == 'sell':
return ''
elif candle_analysis == 'sell' and ema_analysis == 'buy':
return ''
else:
return candle_analysis
# If no signal is found, return a neutral signal
return ''
|
b3ae306d24ee9f8adbf0d39e1cd6d8ce
|
{
"intermediate": 0.32643231749534607,
"beginner": 0.3369493782520294,
"expert": 0.33661824464797974
}
|
13,606
|
This code
|
35fc981e17595ad60770ac40126d7845
|
{
"intermediate": 0.2756560444831848,
"beginner": 0.347513347864151,
"expert": 0.3768306076526642
}
|
13,607
|
how do I use Gpt4-X-Alpaca in C#
|
41c354b98fe46bc2bc635d87bb803b0d
|
{
"intermediate": 0.5813146829605103,
"beginner": 0.1282576024532318,
"expert": 0.29042771458625793
}
|
13,608
|
What is the more common used programming language, python or java or c++
|
621bd1e2262e9440e0f55f692b1fbe93
|
{
"intermediate": 0.31021392345428467,
"beginner": 0.38162022829055786,
"expert": 0.30816587805747986
}
|
13,609
|
how can I access shadow-root with kantu automation script
|
b5a1084b2ad95ecd11d1594367a2b190
|
{
"intermediate": 0.4633878767490387,
"beginner": 0.1367824226617813,
"expert": 0.3998296856880188
}
|
13,610
|
What does "cd -" do in Linux?
|
5ecda5a3087556c58986112c9d362dc6
|
{
"intermediate": 0.4171990752220154,
"beginner": 0.36808323860168457,
"expert": 0.21471774578094482
}
|
13,611
|
I used this signal_generator code: def signal_generator(df):
# Calculate EMA and MA lines
df['EMA5'] = df['Close'].ewm(span=5, adjust=False).mean()
df['EMA10'] = df['Close'].ewm(span=10, adjust=False).mean()
df['EMA20'] = df['Close'].ewm(span=20, adjust=False).mean()
df['EMA50'] = df['Close'].ewm(span=50, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['EMA200'] = df['Close'].ewm(span=200, adjust=False).mean()
df['MA10'] = df['Close'].rolling(window=10).mean()
df['MA20'] = df['Close'].rolling(window=20).mean()
df['MA50'] = df['Close'].rolling(window=50).mean()
df['MA100'] = df['Close'].rolling(window=100).mean()
open = df.Open.iloc[-1]
close = df.Close.iloc[-1]
high = df.High.iloc[-1]
low = df.Low.iloc[-1]
previous_open = df.Open.iloc[-2]
previous_close = df.Close.iloc[-2]
# Calculate the last candlestick
last_candle = df.iloc[-1]
current_price = df.Close.iloc[-1]
# Initialize analysis variables
ema_analysis = ''
candle_analysis = ''
# EMA crossover - buy signal
if df.EMA10.iloc[-1] > df.EMA50.iloc[-1] and current_price > last_candle[['EMA10', 'EMA50']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA10.iloc[-1] < df.EMA50.iloc[-1] and current_price < last_candle[['EMA10', 'EMA50']].iloc[-1].max():
ema_analysis = 'sell'
# EMA crossover - buy signal
if df.EMA20.iloc[-1] > df.EMA200.iloc[-1] and current_price > last_candle[['EMA20', 'EMA200']].iloc[-1].min():
ema_analysis = 'buy'
# EMA crossover - sell signal
elif df.EMA20.iloc[-1] < df.EMA200.iloc[-1] and current_price < last_candle[['EMA20', 'EMA200']].iloc[-1].max():
ema_analysis = 'sell'
# Check for bullish trends
elif current_price > last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].max():
ema_analysis = 'buy'
# Check for bearish trends
elif current_price < last_candle[['EMA5', 'EMA20', 'EMA50', 'EMA100', 'EMA200']].iloc[-1].min():
ema_analysis = 'sell'
# Check for bullish candlestick pattern
if (close>open and
close/open > 1.002 and
open/low <= 1.005 and
close/high >= 1.01):
candle_analysis = 'buy'
# Check for bearish candlestick pattern
if (open>close and
open/close > 1.002 and
close/high <= 1.005 and
open/low >= 1.01):
candle_analysis = 'sell'
# If there is agreement in both signals
if candle_analysis == ema_analysis and ema_analysis != 'none':
return ema_analysis
# If there is no agreement in the signals
elif candle_analysis != ema_analysis:
if candle_analysis == 'buy' and ema_analysis == 'sell':
return ''
elif candle_analysis == 'sell' and ema_analysis == 'buy':
return ''
else:
return candle_analysis
# If no signal is found, return a neutral signal
return ''
Please, can you write the same code without these > 1.002 and <= 1.005 thresholds?
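Dropping the magic ratios (> 1.002, <= 1.005, >= 1.01) reduces the pattern checks to plain price comparisons. The sketch below is one way to do that; the midpoint rule is an assumed simplification of the original ratio tests, not an equivalent, and it will fire far more often:

```python
def candle_signal(open_, close, high, low):
    """Classify the last candle using raw comparisons only, no ratio thresholds.

    Bullish: closes above its open with the close in the upper half of the
    candle's range (an assumed simplification of the original ratio tests).
    """
    midpoint = (high + low) / 2
    if close > open_ and close >= midpoint:
        return 'buy'
    if open_ > close and close <= midpoint:
        return 'sell'
    return ''

print(candle_signal(open_=100, close=105, high=106, low=99))   # buy
print(candle_signal(open_=105, close=100, high=106, low=99))   # sell
```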
|
221ad2ee271a868612196c59698e2fa4
|
{
"intermediate": 0.282931923866272,
"beginner": 0.4038976728916168,
"expert": 0.313170462846756
}
|
13,612
|
How can I check in bash if there's a new commit in a cloned git repo?
|
256b59315a27edda807cf7e06d4dec2c
|
{
"intermediate": 0.5793838500976562,
"beginner": 0.16884061694145203,
"expert": 0.2517755627632141
}
|
13,613
|
How do I get the hash of the latest commit from github?
|
46a2bea17d7a1e9fc745892054ebabc1
|
{
"intermediate": 0.434993714094162,
"beginner": 0.1964973360300064,
"expert": 0.3685089349746704
}
|
13,614
|
How do I compare 2 strings in bash and run a command if they match?
|
df3760de53df80aebdbf9ada57f3d195
|
{
"intermediate": 0.624082088470459,
"beginner": 0.15606053173542023,
"expert": 0.2198573499917984
}
|
13,615
|
What are key-value databases?
Explain their types fully, with examples and applications
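Key-value databases store opaque values under unique keys and expose get/put/delete as the core operations; common types include in-memory caches (Redis, Memcached), persistent embedded stores (LevelDB, RocksDB), and distributed stores (DynamoDB, Riak). The core semantics can be illustrated with a toy in-memory store (an illustration only, not a real database):

```python
class KVStore:
    """Toy in-memory key-value store illustrating the core API."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value              # last write wins

    def get(self, key, default=None):
        return self._data.get(key, default)  # O(1) lookup by key

    def delete(self, key):
        self._data.pop(key, None)            # deleting a missing key is a no-op

store = KVStore()
store.put('session:42', {'user': 'alice'})
print(store.get('session:42'))              # {'user': 'alice'}
store.delete('session:42')
print(store.get('session:42', 'missing'))   # missing
```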
|
d37e233691129da4044fc6fea570438a
|
{
"intermediate": 0.3151310682296753,
"beginner": 0.35868939757347107,
"expert": 0.32617953419685364
}
|
13,616
|
Can you code me a Pine Script TradingView strategy based on the SMA 20? Every time price is above it and gives a bullish engulfing, enter a buy; every time price is below it and gives a bearish engulfing, enter a sell. The stop loss will be 1% of the capital and the take profit at 296.
|
525db93d190e974857fabc7307817453
|
{
"intermediate": 0.279995858669281,
"beginner": 0.15883712470531464,
"expert": 0.5611670613288879
}
|
13,617
|
What does BackgroundScheduler do and why is it important?
|
d689a9d087527ed7cfff0065d4ae5f40
|
{
"intermediate": 0.3980548083782196,
"beginner": 0.19883105158805847,
"expert": 0.4031141698360443
}
|
13,618
|
hi, do you know how to use pygame?
|
f2a3edbda9b6fd12d7fe6e3b8de9d2d2
|
{
"intermediate": 0.44231951236724854,
"beginner": 0.2572760283946991,
"expert": 0.30040445923805237
}
|
13,619
|
What's the syntax for granting object privilege to a Role?
|
aca00507bc263cc494c1abe5cacb7437
|
{
"intermediate": 0.3041544258594513,
"beginner": 0.47802042961120605,
"expert": 0.21782512962818146
}
|
13,620
|
You are a veteran swift playgrounds programmer. Can you explain to me how to make a basic rpg in swift playgrounds including code? And please explain what each line of code does in a way a beginner can understand?
|
f4853358b4557b208d4cc05632cadfcf
|
{
"intermediate": 0.3321518003940582,
"beginner": 0.5314307808876038,
"expert": 0.1364174634218216
}
|
13,621
|
How would I make an app similar to DragonBones in Swift Playgrounds? Please include code, explain every line, and explain what the syntax means.
|
9d8cda05746b754974cc6fd090c2d83c
|
{
"intermediate": 0.3315613269805908,
"beginner": 0.5963835716247559,
"expert": 0.07205504924058914
}
|
13,622
|
Can you write me swift code for swift playgrounds that would make an app similar to spine 2d?
|
c4bde53f9b1e16374d65c21f8a15e746
|
{
"intermediate": 0.5469686388969421,
"beginner": 0.2112370878458023,
"expert": 0.24179421365261078
}
|
13,623
|
What is the PowerShell command for creating a new directory/folder?
|
ac2f27f697a56910bb707ea0730c53c8
|
{
"intermediate": 0.3687533736228943,
"beginner": 0.340206503868103,
"expert": 0.2910401225090027
}
|
13,624
|
Can you explain to me the basic syntax of swift for swift playgrounds as if I were a beginner
|
8fcc8f488561fbfc853a96ccb0a3c649
|
{
"intermediate": 0.3341538608074188,
"beginner": 0.5814879536628723,
"expert": 0.08435819298028946
}
|
13,625
|
Below is a script for creating a new PDB. Some information are missing. Choose the option with the complete/correct query.
CREATE PLUGGABLE DATABASE University
ADMIN USER uni_admin IDENTIFIED BY uniadmin123
ROLES
DEFAULT TABLESPACE
DATAFILE '/u01/app/oracle/oradata/ORCL/University'
FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/ORCL/',
'/u01/app/oracle/oradata/ORCL/University/');
|
5c40794faef2efbae6b3bad83ca4f55c
|
{
"intermediate": 0.2983373701572418,
"beginner": 0.43402254581451416,
"expert": 0.2676401138305664
}
|
13,626
|
How can I improve this code?
private IEnumerable<KeyValuePair<string, string>> BeatmapQuery(GetBeatmapOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "since", options.Since?.ToUniversalTime().ToString("yyyy-MM-dd HH:mm:ss") },
{ "s", options.BeatmapSetId?.ToString() },
{ "b", options.BeatmapId?.ToString() },
{ "u", options.User },
{ "type", options.Type },
{ "m", ((int?)options.Mode)?.ToString() },
{ "a", options.ConvertedBeatmaps == true ? "1" : "0" },
{ "h", options.Hash },
{ "limit", options.Limit?.ToString() },
{ "mods", options.Mods?.ToString() }
}.Where(kv => kv.Value != null);
}
private IEnumerable<KeyValuePair<string, string>> UserQuery(GetUserOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "u", options.User.ToString() },
{ "m", ((int)options.Mode).ToString() },
{ "type", options.Type },
{ "event_days", options.EventDays?.ToString() }
}.Where(kv => kv.Value != null);
}
private IEnumerable<KeyValuePair<string, string>> UserBestQuery(GetUserBestOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "u", options.User },
{ "m", ((int?)options.Mode)?.ToString() },
{ "limit", options.Limit?.ToString() },
{ "type", options.Type }
}.Where(kv => kv.Value != null);
}
private IEnumerable<KeyValuePair<string, string>> UserRecentQuery(GetUserRecentOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "u", options.User },
{ "m", ((int?)options.Mode)?.ToString() },
{ "limit", options.Limit?.ToString() },
{ "type", options.Type}
}.Where(kv => kv.Value != null);
}
private IEnumerable<KeyValuePair<string, string>> ScoresQuery(GetScoresOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "b", options.BeatmapId?.ToString() },
{ "u", options.User },
{ "m", ((int)options.Mode).ToString() },
{ "mods", options.Mods?.ToString() },
{ "type", options.Type},
{ "limit", options.Limit?.ToString() }
}.Where(kv => kv.Value != null);
}
private IEnumerable<KeyValuePair<string, string>> MultiplayerQuery(GetMultiplayerOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "mp", options.MatchId.ToString() }
}.Where(kv => kv.Value != null);
}
private IEnumerable<KeyValuePair<string, string>> ReplayQuery(GetReplayOptions options) {
return new Dictionary<string, string>() {
{ "k", AccessToken },
{ "b", options.BeatmapId.ToString() },
{ "u", options.User },
{ "m", ((int?)options.Mode)?.ToString() },
{ "s", options.ScoreId },
{ "type", options.Type },
{ "mods", ((int?)options.Mods)?.ToString() }
}.Where(kv => kv.Value != null);
}
|
2d030d9140ec591c8ce1f3b737d1a417
|
{
"intermediate": 0.2754054069519043,
"beginner": 0.5367451906204224,
"expert": 0.18784943222999573
}
|
13,627
|
when should i use a normal class vs a sealed class
|
ddfdeaea98390d12c06d1ad052ac9f98
|
{
"intermediate": 0.27315211296081543,
"beginner": 0.5620220899581909,
"expert": 0.16482578217983246
}
|
13,628
|
I'll provide a DQN implementation code and I want you to come with your own version of the DQN based on the given one for me to use it on a EvaderNode. Here is the code: #!/usr/bin/env python3
import rospy
import os
import json
import numpy as np
import matplotlib.pyplot as plt
import random
import time
import sys
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from collections import deque
from collections import namedtuple
from std_msgs.msg import Float32MultiArray
from dqn.env.env_dqn_LIDAR import Env
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('DDPG_log/DQN_ref/1120_3/wo_BN')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#device = torch.device("cpu")
Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))
EPISODES = 1000
class DQN_old(nn.Module):
def __init__(self, state_size, action_size):
super(DQN_old, self).__init__()
self.state_size = state_size
self.action_size = action_size
self.fc1 = nn.Linear(self.state_size, 200) #was 300
self.bn1 = nn.BatchNorm1d(200)
self.drp1 = nn.Dropout(0.2)
nn.init.kaiming_normal_(self.fc1.weight)
self.fc2 = nn.Linear(200, 200)
self.bn2 = nn.BatchNorm1d(200)
self.drp2 = nn.Dropout(0.2)
nn.init.kaiming_normal_(self.fc2.weight)
self.fc3 = nn.Linear(200, 100)
self.bn3 = nn.BatchNorm1d(100)
self.drp3 = nn.Dropout(0.2)
nn.init.kaiming_normal_(self.fc3.weight)
self.fc4 = nn.Linear(100, 64)
self.bn4 = nn.BatchNorm1d(64)
self.drp4 = nn.Dropout(0.6)
nn.init.kaiming_normal_(self.fc4.weight)
self.fc9 = nn.Linear(64, self.action_size)
nn.init.kaiming_normal_(self.fc9.weight)
def forward(self, x):
x = F.leaky_relu(self.bn1(self.fc1(x)))
x = self.drp1(x)
x = F.leaky_relu(self.bn2(self.fc2(x)))
x = self.drp2(x)
x = F.leaky_relu(self.bn3(self.fc3(x)))
x = self.drp3(x)
x = F.leaky_relu(self.bn4(self.fc4(x)))
x = self.drp4(x)
x = self.fc9(x)
return x
class DQN(nn.Module):
def __init__(self, state_size, action_size):
super(DQN, self).__init__()
self.state_size = state_size
self.action_size = action_size
self.fc1 = nn.Linear(self.state_size, 128) #300dldjTdma
self.bn1 = nn.BatchNorm1d(128)
self.drp1 = nn.Dropout(0.3)
nn.init.kaiming_normal_(self.fc1.weight)
self.fc2 = nn.Linear(128, 128)
self.bn2 = nn.BatchNorm1d(128)
self.drp2 = nn.Dropout(0.3)
nn.init.kaiming_normal_(self.fc2.weight)
self.fc3 = nn.Linear(128, 64)
self.bn3 = nn.BatchNorm1d(64)
self.drp3 = nn.Dropout(0.3)
nn.init.kaiming_normal_(self.fc3.weight)
self.fc4 = nn.Linear(64, 64)
self.bn4 = nn.BatchNorm1d(64)
self.drp4 = nn.Dropout(0.3)
nn.init.kaiming_normal_(self.fc4.weight)
self.fc5 = nn.Linear(64, self.action_size)
nn.init.kaiming_normal_(self.fc5.weight)
#nn.init.kaiming_normal_(self.fc1.weight)
#nn.init.kaiming_normal_(self.fc2.weight)
#nn.init.kaiming_normal_(self.fc3.weight)
def forward(self, x):
#x = F.leaky_relu(self.bn1(self.fc1(x)))
x = F.relu(self.fc1(x))
#x = self.drp1(x)
#x = F.leaky_relu(self.fc2(x))
#x = self.bn2(x)
#x = self.drp2(x)
#x = F.leaky_relu(self.bn3(self.fc3(x)))
x = F.elu(self.fc3(x))
#x = self.drp3(x)
#x = F.leaky_relu(self.bn4(self.fc4(x)))
x = F.relu(self.fc4(x))
#x = self.drp4(x)
x = self.fc5(x)
return x
class ReplayMemory():
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.index = 0
def push(self, state, action, state_next, reward):
"""transition 저장"""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.index] = Transition(state, action, state_next, reward)
self.index = (self.index + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
class Brain():
def __init__(self, state_size, action_size):
#self.pub_result = rospy.Publisher('result', Float32MultiArray, queue_size=5)
self.dirPath = os.path.dirname(os.path.realpath(__file__))
self.date = '1120_2'
self.dirPath = self.dirPath.replace('src/dqn', 'save_model/'+self.date+'/dqn_lidar_')
self.result = Float32MultiArray()
self.load_model = False
self.load_episode = 0
self.state_size = state_size
self.action_size = action_size
self.episode_step = 3000
self.target_update = 2000
self.discount_factor = 0.99
self.learning_rate = 0.0001
self.epsilon = 1.0
self.epsilon_decay = 0.985
self.epsilon_min = 0.01
self.batch_size = 100
self.train_start = 1000
self.memory = ReplayMemory(200000)
self.model = DQN(self.state_size, self.action_size).to(device)
self.target_model = DQN(self.state_size, self.action_size).to(device)
print(self.model)
self.loss = 0.0
self.optimizer = optim.RMSprop(self.model.parameters(), lr=self.learning_rate)
#self.scheduler = optim.lr_scheduler.MultiStepLR(self.optimizer, milestones=[300,400], gamma=0.5, verbose=True)
def decide_action(self, state, episode):
if np.random.rand() >= self.epsilon:
print("모델에 의한 행동선택")
self.model.eval()
with torch.no_grad():
action = self.model(state).max(1)[1].view(1,1)
#print("action : ", action.item())
else:
action = torch.LongTensor([[random.randrange(self.action_size)]]).to(device)
print("무작위 행동선택")
return action
def replay(self):
if len(self.memory) < self.train_start:
return
self.mini_batch, self.state_batch, self.action_batch, self.reward_batch, self.non_final_next_states = self.make_minibatch()
self.expected_state_action_values = self.get_expected_state_action_values()
self.update_q_network()
def make_minibatch(self):
transitions = self.memory.sample(self.batch_size)
mini_batch = Transition(*zip(*transitions))
#print("메모리에서 랜덤 샘플")
state_batch = torch.cat(mini_batch.state)
action_batch = torch.cat(mini_batch.action)
reward_batch = torch.cat(mini_batch.reward)
non_final_next_states = torch.cat([s for s in mini_batch.next_state if s is not None])
return mini_batch, state_batch, action_batch, reward_batch, non_final_next_states
def get_expected_state_action_values(self):
self.model.eval()
self.target_model.eval()
#print(self.state_batch.shape)
self.state_action_values = self.model(self.state_batch).gather(1, self.action_batch)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, self.mini_batch.next_state)), dtype=torch.bool).to(device)
next_state_values = torch.zeros(self.batch_size).to(device)
a_m = torch.zeros(self.batch_size, dtype=torch.long).to(device)
a_m[non_final_mask] = self.model(self.non_final_next_states).detach().max(1)[1]
a_m_non_final_next_states = a_m[non_final_mask].view(-1, 1)
next_state_values[non_final_mask] = self.target_model(self.non_final_next_states).gather(1, a_m_non_final_next_states).detach().squeeze()
expected_state_action_values = self.reward_batch + self.discount_factor*next_state_values
return expected_state_action_values
def update_q_network(self):
self.model.train()
self.loss = F.smooth_l1_loss(self.state_action_values, self.expected_state_action_values.unsqueeze(1))
#loss = F.smooth_l1_loss(self.state_action_values, self.expected_state_action_values.unsqueeze(1)) #this was the original
#print("training the model")
self.optimizer.zero_grad()
self.loss.backward()
#print("loss:%0.4f" % self.loss)
#loss.backward() #this was the original
#for param in self.model.parameters():
# param.grad.data.clamp_(-1, 1)
self.optimizer.step()
def update_target_q_network(self):
#print("타겟모델 업데이트")
self.target_model.load_state_dict(self.model.state_dict())
class Agent():
def __init__(self, state_size, action_size):
self.brain = Brain(state_size, action_size)
def update_q_function(self):
self.brain.replay()
def get_action(self, state, episode):
action = self.brain.decide_action(state, episode)
return action
def memorize(self, state, action, state_next, reward):
self.brain.memory.push(state, action, state_next, reward)
def update_target_q_function(self):
self.brain.update_target_q_network()
if __name__ == '__main__':
rospy.init_node('mobile_robot_dqn')
pub_get_action = rospy.Publisher('get_action', Float32MultiArray, queue_size=5)
get_action = Float32MultiArray()
pub_result = rospy.Publisher('result', Float32MultiArray, queue_size=5)
pub_loss_result = rospy.Publisher('loss_result', Float32MultiArray, queue_size=5)
result = Float32MultiArray()
loss_result = Float32MultiArray()
#210 + 4 = 214 , 105 + 4=109
state_size = 4 #214 #4 #109 #214
action_size = 7
env = Env(action_size)
agent = Agent(state_size, action_size)
scores, losses, episodes = [], [], []
global_step = 0
start_time = time.time()
time.sleep(2)
for episode in range(agent.brain.load_episode + 1, EPISODES):
#print("Episode:",episode)
time_out = False
done = False
state = env.reset(episode)
#old_action = 3
# print("Episode:",episode, "state:",state)
state = torch.from_numpy(state).type(torch.FloatTensor)
state = torch.unsqueeze(state, 0).to(device)
score = 0
score_ang_vel_reward = 0
losses = 0.0
t = 0
while True:
#for t in range(agent.brain.episode_step):
t += 1
action = agent.get_action(state, episode)
print("step: ", t, " episode: ", episode)
observation_next, reward, done = env.step(action.item(), episode)
#print("Reward: ", reward)
reward = (torch.tensor([reward]).type(torch.FloatTensor)).to(device)
state_next = observation_next
state_next = torch.from_numpy(state_next).type(torch.FloatTensor)
state_next = torch.unsqueeze(state_next, 0).to(device)
agent.memorize(state, action, state_next, reward)
agent.update_q_function()
state = state_next
old_action = action.item()
score += reward
losses += agent.brain.loss
get_action.data = [action.int(), score, reward.int()]
pub_get_action.publish(get_action)
if t >= agent.brain.episode_step:
rospy.loginfo("Time out!!")
time_out = True
if done:
#agent.update_target_q_function()
#rospy.loginfo("UPDATE TARGET NETWORK")
state_next = None
rospy.loginfo('Ep: %d score: %.2f memory: %d epsilon: %.2f' % (episode, score, len(agent.brain.memory), agent.brain.epsilon))
#scores.append(score)
#episodes.append(episode)
state = env.reset(episode)
# print("Episode:",episode, "state:",state)
state = torch.from_numpy(state).type(torch.FloatTensor)
state = torch.unsqueeze(state, 0).to(device)
if time_out:
state_next = None
#agent.update_target_q_function()
#rospy.loginfo("UPDATE TARGET NETWORK")
rospy.loginfo('Ep: %d score: %.2f memory: %d epsilon: %.2f' % (episode, score, len(agent.brain.memory), agent.brain.epsilon))
scores.append(score)
#losses.append(agent.brain.loss)
episodes.append(episode)
result.data = [score, episode]
loss_result.data = [losses/agent.brain.episode_step, episode]
pub_result.publish(result)
pub_loss_result.publish(loss_result)
#writer.add_scalar("score", score, episode)
#writer.add_scalar("loss", losses/agent.brain.episode_step, episode)
#state = env.reset()
## print("Episode:",episode, "state:",state)
#state = torch.from_numpy(state).type(torch.FloatTensor)
#state = torch.unsqueeze(state, 0).to(device)
break
agent.update_target_q_function()
rospy.loginfo("UPDATE TARGET NETWORK")
writer.add_scalar("Score", score, episode)
writer.add_scalar("Loss", losses/agent.brain.episode_step, episode)
if agent.brain.epsilon > agent.brain.epsilon_min:
agent.brain.epsilon *= agent.brain.epsilon_decay
if episode % 2 == 0:
#agent.update_target_q_function()
#rospy.loginfo("UPDATE TARGET NETWORK")
with torch.no_grad():
torch.save(agent.brain.model, agent.brain.dirPath + str(episode) + '.pt')
#elif episode % 4 == 0:
# agent.update_target_q_function()
# rospy.loginfo("UPDATE TARGET NETWORK")
with torch.no_grad():
torch.save(agent.brain.model, agent.brain.dirPath + str(episode) + '.pt')
print("종료")
writer.close()
|
17d5974c95f84bfed96790cd620455dc
|
{
"intermediate": 0.2961089015007019,
"beginner": 0.4446121156215668,
"expert": 0.2592790126800537
}
|
13,629
|
I want you to act as an instructor in a school, teaching algorithms to beginners and respond in Chinese. You will provide code examples using python programming language. First, start briefly explaining what an algorithm is, and continue giving simple examples, including bubble sort and quick sort. Later, wait for my prompt for additional questions. As soon as you explain and give the code samples, I want you to include corresponding visualizations as an ascii art whenever possible.
|
466e6f0bc3a89b68537662f8bfa105db
|
{
"intermediate": 0.1346883922815323,
"beginner": 0.1525818407535553,
"expert": 0.7127297520637512
}
|
13,630
|
hi there
|
11a5bb35a8fa938510f6280f6d984351
|
{
"intermediate": 0.32885003089904785,
"beginner": 0.24785484373569489,
"expert": 0.42329514026641846
}
|
13,631
|
What script in beautiful soup would I use to scrape google.com, as an example, for their logo. To give me an idea how to write one
|
db17b50c8f0f817eb287c90a3d59cdc6
|
{
"intermediate": 0.38895654678344727,
"beginner": 0.3728185296058655,
"expert": 0.23822489380836487
}
|
13,632
|
how to use iperf3
|
a8130cb9884e0d2344e97ce2814cf40a
|
{
"intermediate": 0.3914676606655121,
"beginner": 0.1728363335132599,
"expert": 0.43569597601890564
}
|
13,633
|
hi
|
ef7417a2559134030b2bf655c2f4f578
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
13,634
|
ho
|
e7081ae4b20d14acc0d87870888b5a4f
|
{
"intermediate": 0.3343488276004791,
"beginner": 0.2935584783554077,
"expert": 0.37209266424179077
}
|
13,635
|
HELLO
|
dbc4a579bc77cbafff9c162aa78cd37d
|
{
"intermediate": 0.3374614715576172,
"beginner": 0.2841505706310272,
"expert": 0.37838801741600037
}
|
13,636
|
import pandas as pd
import cv2
chexset_columns = ["Enlarged Cardiomediastinum", "Cardiomegaly", "Lung Opacity", "Lung Lesion", "Edema",
"Consolidation", "Pneumonia", "Atelectasis", "Pneumothorax", "Pleural Effusion",
"Pleural Other", "Fracture", "No Finding"]
# Create a new column in the NIH dataset called "Updated Label"
NIH["Updated Label"] = ""
NIH["OriginalImage_CheXpert[Width\tHeight]"] = ""
# Create a dictionary to store the Chex image paths and corresponding width and height
chex_paths = {column: [] for column in chexset_columns}
chex_width_height = {}
for chex_index, chex_row in Chex.iterrows():
for column in chexset_columns:
if chex_row[column] == 1:
# Update the "Image Index" in the NIH dataset
image_index = chex_row['Path'].replace("view1_frontal.jpg", "00000001_000.png")
chex_paths[column].append(image_index)
# Read the image to get the width and height
img = cv2.imread(chex_row['Path'])
if img is not None:
width = img.shape[1]
height = img.shape[0]
chex_width_height[image_index] = f"{width}\t{height}"
# Create a new DataFrame to store the new rows
new_rows = []
# Iterate over each row in the NIH dataset
for index, row in NIH.iterrows():
finding_labels = row['Finding Labels'].split('|') # Split multiple finding labels if present
for finding_label in finding_labels:
if finding_label in chex_paths:
updated_paths = chex_paths[finding_label]
for path in updated_paths:
# Create a new row with the matched information
new_row = {
"Image Index": path,
"Finding Labels": finding_label,
"Patient Age": row["Patient Age"],
"Patient Gender": row["Patient Gender"],
"View Position": row["View Position"],
"OriginalImage[Width\tHeight]": chex_width_height.get(path, "")
}
# Append the new row to the list of new rows
new_rows.append(new_row)
# Append the new rows to the NIH dataset
merged_dataset = pd.concat([NIH, pd.DataFrame(new_rows)], ignore_index=True)
merged_dataset.head()
can you optimise this code?
|
39766a3b47854821de49dbccbda65142
|
{
"intermediate": 0.4034283757209778,
"beginner": 0.44135403633117676,
"expert": 0.1552175134420395
}
|
13,637
|
In ggplot2, how to generate the Bar plot for each category?
|
a4c483a994f6766c788327f4a9031c66
|
{
"intermediate": 0.3504861891269684,
"beginner": 0.25092098116874695,
"expert": 0.39859285950660706
}
|
13,638
|
I need HTML file contains chatbox and Javascript for analyze text with user typing and file contains format
%word% %synonym1% %synonym2%
If user type something like "I want to buy cheap laptop"
And file contains
%cheap% %non-expensive%
%laptop% %notebook% %mobile pc% %mobile computer%
Answer must be "I want to buy non-expensive (rand)" where (rand) = random word from array "notebook", "mobile pc", "mobile computer"
|
47ef5682a23b3066cd375d54714d29c7
|
{
"intermediate": 0.4364120066165924,
"beginner": 0.28078851103782654,
"expert": 0.2827994227409363
}
|
13,639
|
write python code to print a square grid
|
15107b6ff13784ba4237f16e4ddc80b1
|
{
"intermediate": 0.4188372790813446,
"beginner": 0.23881575465202332,
"expert": 0.3423468768596649
}
|
13,640
|
Act as a SQL terminal
|
68d8b6fa413353ba96b95b4bcfc3cc6c
|
{
"intermediate": 0.13144126534461975,
"beginner": 0.7284982204437256,
"expert": 0.14006058871746063
}
|
13,641
|
how to generate a bar plot with jitter data points in ggplot2? Please provide example code.
|
d9ab7cec4e6721e1e0a6434a46baa150
|
{
"intermediate": 0.5504034161567688,
"beginner": 0.15340524911880493,
"expert": 0.29619133472442627
}
|
13,642
|
hi
|
5e25350d4d2fd322f3a0c234d3476fd4
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
13,643
|
explain this code :
onChangeRelease() {
this.isNoData = false;
this.releaseName = this.listReleases.find(x => x.uuid == this.selectedReleaseUuid)?.name;
const data = this.dataSource.find(x => x.releaseName == this.releaseName);
this.sum = this.testingType == 'bugReport' ? this.getNumber(data.bugReport.totalBug) : this.getNumber(data.testResult.totalTestCase);
this.chartConfig.data = this.buildChartData(data);
if ((this.chartConfig.data.datasets[0].data.some(x => x == null)) || this.chartConfig.data.datasets[0].data.every(x => x == 0)) {
this.isNoData = true;
}
this.listLegends = this.chartConfig.data.labels.map((legendLabel: string, index: number) => {
return {
text: legendLabel,
data: this.chartConfig.data.datasets[0].data[index] || 0,
fillStyle: this.chartConfig.data.datasets[0].backgroundColor[index]
}
});
this.chartEventTrigger.next(ChartEventEnum.UPDATE);
}
|
ae6c495e1e5560cf05172db556abf524
|
{
"intermediate": 0.39142537117004395,
"beginner": 0.3065262734889984,
"expert": 0.30204832553863525
}
|
13,644
|
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
|
deca4c36337ea95c3358707b3880a1d9
|
{
"intermediate": 0.45114317536354065,
"beginner": 0.252689927816391,
"expert": 0.29616692662239075
}
|
13,645
|
i need to create an application where i rate limit the api calls using springboot. I found out that using bucket4j is great. The problem is that i will need to make it a docker container and deploy it on kubernetes, so i will need to find a solution to rate limit different containers based on the same api call, is redis a good option? How would it work?
|
372ca186982b64d143ede669d18f196a
|
{
"intermediate": 0.7791491150856018,
"beginner": 0.10315150022506714,
"expert": 0.11769934743642807
}
|
13,646
|
python-docx write tamil text
|
eacd0791118c3eaa4070af1db203dd78
|
{
"intermediate": 0.3490857779979706,
"beginner": 0.34392672777175903,
"expert": 0.3069874942302704
}
|
13,647
|
i need to create an application where i have to rate limit api calls. I saw that using bucket4j would be enough. The problem is that i will need to create multiple docker containers distributed on kubernetes and bucket4j would not provide ratelimit for all the containers. Redis should be the solution. How would it work and how can i integrate the two? Show me an example of a springboot application using these dependencies and create an easy example with an api rate limited
|
dcc4af4da9d9c553d3ec36dc20a9ab02
|
{
"intermediate": 0.8114748597145081,
"beginner": 0.08482684940099716,
"expert": 0.10369827598333359
}
|
13,648
|
Can you create a python3 script for me that would work to create a zip archive of the files created the previous day in an identified folder and that would delete the oldest zip files from this identified folder as soon as the linux system only has 1GB of memory left? 'disk space ?
|
78ca0e764560c81a66e92c6514c5c3d8
|
{
"intermediate": 0.459800660610199,
"beginner": 0.13830137252807617,
"expert": 0.40189802646636963
}
|
13,649
|
Can you create a python3 script for me that would work to create a zip archive of the files created the previous day in an identified folder and that would delete the oldest zip files from this identified folder as soon as the linux system only has 1GB of memory left? 'disk space ?
|
ca9e632dd86c95f02b8c540a4cd62b23
|
{
"intermediate": 0.459800660610199,
"beginner": 0.13830137252807617,
"expert": 0.40189802646636963
}
|
13,650
|
how to write tamil text in docx by python
|
6553b785c316d2831d3e23271411a678
|
{
"intermediate": 0.3826369345188141,
"beginner": 0.30589818954467773,
"expert": 0.311464786529541
}
|
13,651
|
write the C code of implementation of spinlock in freertos
|
2966c4b413193591ab8216b6ae35ce4e
|
{
"intermediate": 0.20334236323833466,
"beginner": 0.2026287168264389,
"expert": 0.5940289497375488
}
|
13,652
|
Can you create me a python3 script that archives all mp4 files in an identified folder created the previous day and deletes the oldest zip archives from the same folder when the Linux system reaches less than 1GB of storage.
This script should run uninterrupted as a Linux system service.
|
8d01b4510775a9f8f60294c553f378a4
|
{
"intermediate": 0.4322541356086731,
"beginner": 0.19921797513961792,
"expert": 0.3685278296470642
}
|
13,653
|
Can you create me a python3 script that archives all mp4 files in an identified folder created the previous day and deletes the oldest zip archives from the same folder when the Linux system reaches less than 1GB of storage.
This script should run uninterrupted as a Linux system service.
|
ca14fd9b04f8f31579a207236d0de31b
|
{
"intermediate": 0.4322541356086731,
"beginner": 0.19921797513961792,
"expert": 0.3685278296470642
}
|
13,654
|
write the C code of implementation of spinlock in freertos
|
73f6f5b399383072ba401b9e42a43de8
|
{
"intermediate": 0.20334236323833466,
"beginner": 0.2026287168264389,
"expert": 0.5940289497375488
}
|
13,655
|
use .ttf font to docx file python-docx
|
6c367e26aba4791aa2a224a4d744418b
|
{
"intermediate": 0.323946088552475,
"beginner": 0.281604528427124,
"expert": 0.3944493532180786
}
|
13,656
|
what is python
|
91bc8cc67151874c3c297a016521abfb
|
{
"intermediate": 0.24778734147548676,
"beginner": 0.359560489654541,
"expert": 0.3926522135734558
}
|
13,657
|
i need to create an application where i have to rate limit api calls. I saw that using bucket4j would be enough. The problem is that i will need to create multiple docker containers distributed on kubernetes and bucket4j would not provide ratelimit for all the containers. Redis should be the solution. How would it work and how can i integrate the two? Show me an example of a springboot application using these dependencies and create an easy example with an api rate limited
|
5823cc205d810762fc1486f20f19bbfb
|
{
"intermediate": 0.8114748597145081,
"beginner": 0.08482684940099716,
"expert": 0.10369827598333359
}
|
13,658
|
How to find all positions of a concave area in a 2d grid
|
7c8131b30d395489db6d3532df174e5b
|
{
"intermediate": 0.19931575655937195,
"beginner": 0.25429290533065796,
"expert": 0.5463913679122925
}
|
13,659
|
export async function createUser(agent: AgentList, action: Action = Action.CreateUser): Promise<string> {
const login = genereteString();
await sshService.execCommandKey(agent.ip, Project.Rpoint, `sudo useradd ${login}`);
await OperationRepeater.RepeatUntil(
async () => (await sshService.execCommandKey(agent.ip, Project.Rpoint, 'compgen -u')).stdout.includes(login) === true,
10,
1,
);
await sshService.execCommandKey(agent.ip, Project.Rpoint, `echo -e "testsRRTT!@\ntestsRRTT!@\n" | sudo passwd ${login}`);
if (action === Action.LockUser) {
await sshService.execCommandKey(agent.ip, Project.Rpoint, `sudo usermod --lock ${login}`);
} else if (action === Action.UnlockUser) {
await sshService.execCommandKey(agent.ip, Project.Rpoint, `sudo usermod --lock ${login}`);
await sshService.execCommandKey(agent.ip, Project.Rpoint, `sudo usermod --unlock ${login}`);
} else if (action === Action.PasswordUser) {
const passwd = 'testPASSWD123!';
await sshService.execCommandKey(agent.ip, Project.Rpoint, `echo -e "${passwd}\n${passwd}\n" | sudo passwd ${login}`);
}
await sshService.execCommandKey(agent.ip, Project.Rpoint, `sudo userdel ${login}`);
return login;
}
this function runs commands on Linux, but now I need to connect to Windows; tell me which commands to substitute for the checks
|
6d76babf9c63191c1efbac99a9875835
|
{
"intermediate": 0.412422776222229,
"beginner": 0.34907078742980957,
"expert": 0.23850642144680023
}
|
13,660
|
in ggplot, how to bold the title from facet_wrap?
|
b1b5d1867b441010c90ff477719ab4f3
|
{
"intermediate": 0.30909061431884766,
"beginner": 0.22771060466766357,
"expert": 0.46319881081581116
}
|
13,661
|
write a spring page that displays cars in a car dealership
|
9a33f0ce79b3c872719607c1c503da26
|
{
"intermediate": 0.34971094131469727,
"beginner": 0.3142262399196625,
"expert": 0.3360629081726074
}
|