|oracle|oracle-apex|apex|
You can use [`Comparator.comparing()`][1]

> Accepts a function that extracts a Comparable sort key from a type T, and returns a Comparator&lt;T&gt; that compares by that sort key.

and [`Comparator.thenComparing()`][2]

> Returns a lexicographic-order comparator with a function that extracts a key to be compared with the given Comparator.

to easily build a comparator that prioritizes `first` in the comparison, and then falls back to `second` when the `first` values are equal.

```java
Comparator<Pair> comparator = Comparator
        .comparing(Pair::first)                                      // compare first (case-sensitive)
        .thenComparing(Pair::second, String.CASE_INSENSITIVE_ORDER); // case-insensitive comparison of second if first are equal
```

[1]: https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html#comparing-java.util.function.Function-
[2]: https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html#thenComparing-java.util.function.Function-java.util.Comparator-
I know this is a fairly simple and noobish question, but here it goes. I have been trying to create a login screen with Blazor. However, when I click the login button, the values are not saved (they are `null` when I debug).

Login page code:

```razor
@page "/"
@attribute [StreamRendering]
@using System.ComponentModel.DataAnnotations
@using project1.Components.Models
@using project1.Services
@inject IAtLoginPage AtLoginPage
@inject ILogedIn LogedIn
@inject NavMenuViewService ViewOption

<h1>LOGIN</h1>

<EditForm Model="@loginModel" OnValidSubmit=@ValidFormSubmitted OnInvalidSubmit=@InvalidFormSubmitted FormName="LoginForm">
    <DataAnnotationsValidator />
    <div class="form-group">
        <label for="username">Username:</label>
        <InputText @bind-Value="loginModel.Username" class="form-control" id="Username" />
        <ValidationMessage For="@(() => loginModel.Username)" />
    </div>
    <div class="form-group">
        <label for="password">Password:</label>
        <InputText id="password" @bind-Value="loginModel.Password" type="password" class="form-control" />
        <ValidationMessage For="@(() => loginModel.Password)" />
    </div>
    <input type="submit" class="btn btn-primary" value="Login" />
</EditForm>

@code {
    protected LoginModel loginModel;
    string LastSubmitResult;

    public void LoginCheck()
    {
        string dummyUser = "admin";
        string dummyPass = "1234";
        Console.WriteLine("Login Check");
        if ((loginModel.Username != null && loginModel.Password != null)
            && (dummyUser == loginModel.Username && dummyPass == loginModel.Password))
        {
            LogedIn.logInChecker = true;
        }
    }

    void ValidFormSubmitted(EditContext editContext)
    {
        LastSubmitResult = "OnValidSubmit was executed";
        LoginCheck();
    }

    void InvalidFormSubmitted(EditContext editContext)
    {
        LastSubmitResult = "OnInvalidSubmit was executed";
    }

    protected override void OnInitialized()
    {
        loginModel = new LoginModel();
    }
}
```

My model:

```csharp
namespace project1.Components.Models
{
    public class LoginModel
    {
        public string? Username { get; set; }
        public string? Password { get; set; }
        public bool RememberMe { get; set; }
    }
}
```

Thanks in advance!
We are using Gigya as our log-in service on our SAP Commerce site. This is the code that is executed when a user clicks the log-in button:

```
ssoLogin(language: string): void {
    gigya.sso.login({
        authFlow: "redirect",
        context: {
            appName: "eCom",
            lang: language
        },
        useChildContext: true
    });
}
```

We have noticed that when users log in they are first directed back to the log-in page and only then to the actual store they are logging in to. How can we make it so that they are not directed to the log-in page but go straight to the store? We have tried changing `authFlow` from `"redirect"` to `"popup"`, and we have also tried passing a redirect URL in the method call, but neither option stopped Gigya from directing the user to the log-in page after logging in.
Stop Gigya log in from directing back to the log in page
You should create `src/Routes/routes.tsx` containing `export default Routers;`, and then in `src/index.tsx` import it with `import Routers from './routes';`.
I'm solving a transportation problem with `pulp`. In the end, I receive a list of routes with their varValues and the value of the objective function (total transportation costs). But how do I get not only the varValue, but also each route's contribution to the objective function (varValue multiplied by the cost for that route)?

```python
for x in prob.variables():
    if x.varValue > 0:
        print(x.name, "=", x.varValue)
```

I added `prob.objective()` to the print, but I do not receive the results.
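Not part of the original question, but one direction worth sketching: in PuLP, `prob.objective` is an `LpAffineExpression` that maps each variable to its cost coefficient, so a route's contribution is coefficient times varValue. A minimal self-contained sketch with made-up costs and a toy constraint (names and numbers are illustrative only):

```python
import pulp

# Toy "transportation" problem: minimize 3*x + 5*y subject to x + y >= 4.
prob = pulp.LpProblem("transport", pulp.LpMinimize)
x = pulp.LpVariable("route_x", lowBound=0)
y = pulp.LpVariable("route_y", lowBound=0)
prob += 3 * x + 5 * y       # objective: per-unit costs 3 and 5
prob += x + y >= 4          # demand constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for v in prob.variables():
    if v.varValue and v.varValue > 0:
        coef = prob.objective.get(v, 0)  # cost coefficient for this route
        print(v.name, "=", v.varValue, "cost =", coef * v.varValue)
```

With these toy numbers the solver sends everything along the cheaper route, so the per-route costs printed here sum to `pulp.value(prob.objective)`.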
You can use the classList property together with **requestAnimationFrame**:

```js
requestAnimationFrame(() => this.classList.toggle('active'))
```

> **requestAnimationFrame** allows the passed function to be executed in the
> nearest browser animation frame, ensuring smoother and more efficient
> execution of animations and changes on the page.

Another example: https://dev.to/genakalinovskiy/collapse-with-requestanimationframe-api
My server is built on AWS cloud infrastructure. When the client sends a request through API Gateway, I want to check the JWT token in a Node.js Lambda authorizer, and if the token is valid, decode it and add the user's id to the request headers forwarded to the backend. So far, the authorization part of the Lambda authorizer works. I also added `event.headers["userId"] = userId;` in the Lambda, but the Spring server behind my VPC link is not receiving the corresponding header. What should I do?

client -> API Gateway -> Lambda authorizer (authorize and add userId header) -> my VPC-link server

I have tried various things, such as creating an integration request mapping template, but it does not work as I want. Also, when I create a mapping template, the request does not reach the Spring server when I send the request, and I get an error response from API Gateway as follows:

```json
{"message": "internal server error"}
```
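A point worth sketching here (Python for illustration even though the question uses Node.js): a REST API Lambda authorizer cannot mutate the incoming request's headers directly. Instead it returns extra key/value pairs in the `context` field of its response; API Gateway then exposes them as `context.authorizer.<key>`, which an integration request header mapping (e.g. `userId` mapped to `context.authorizer.userId`) can forward to the backend. The names below are illustrative, not from the question:

```python
def authorizer_response(principal_id, method_arn, user_id):
    """Shape of a REST API Lambda authorizer response that carries extra data.

    Anything placed in "context" becomes $context.authorizer.<key>,
    usable in integration request header mappings.
    """
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": method_arn,
                }
            ],
        },
        # Forwarded by API Gateway as context.authorizer.userId
        "context": {"userId": user_id},
    }

resp = authorizer_response("user|42", "arn:aws:execute-api:eu-west-1:123456789012:abc/*", "42")
print(resp["context"])
```

The header mapping itself is configured on the integration request, not in the Lambda code.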
add custom header at aws lambda authorizer
|amazon-web-services|aws-lambda|aws-api-gateway|api-gateway|lambda-authorizer|
We migrated from Spring Boot v2.7.x to v3.2.2, which implies a migration from Hibernate v5.x to v6.4.4.Final. As a DB we (have to) use H2 v2.2.224. We have the following domain model structure, beginning with the generic entities:

```java
@MappedSuperclass
public abstract class EntityBase implements Serializable {

    private Long id;
    protected UUID uuid;

    public EntityBase() {
        uuid = UUID.randomUUID();
    }

    @GeneratedValue(strategy = GenerationType.AUTO)
    @Id
    public Long getId() {
        return id;
    }

    protected void setId(final Long id) {
        this.id = id;
    }

    @Column(unique = true)
    @NotNull
    @JdbcTypeCode(SqlTypes.CHAR)
    public UUID getUuid() {
        return uuid;
    }

    protected void setUuid(final UUID uuid) {
        this.uuid = uuid;
    }

    //getters, setters etc.
}
```

```java
@Entity
@Table(name = "FachparameterBaseReference")
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class FachparameterBaseReference<FACHID extends FachparameterFachId<FACHID>> extends EntityBase {

    protected FACHID fachId;
    //other fields

    @Transient
    public FACHID getFachId() {
        return fachId;
    }

    //setter for fachId, getters/setters etc. for other fields.
}
```

```java
@Entity
@Table(name = "FachparameterReference")
public abstract class FachparameterReference<FACHID extends FachparameterFachId<FACHID>, FACHCODE extends FachparameterFachCode<FACHCODE>> extends FachparameterBaseReference<FACHID> {

    private FACHCODE fachCode;

    @Transient
    public FACHCODE getFachCode() {
        return fachCode;
    }

    @Override
    public FACHID getFachId() {
        return fachId;
    }

    //setters for FACHID, FACHCODE, other fields with getters/setters etc.
}
```

Further, we have a couple of specific entities which extend the above-mentioned generic ones respectively.
Below are **two** examples of each generic entity type (FachparameterReference and FachparameterBaseReference):

```java
@Entity
@Table(name = "ZuordnungReference")
public class ZuordnungReference extends FachparameterBaseReference<ZuordnungFachId> {

    private ZuordnungFachId fachId;

    @Convert(converter = ZuordnungFachIdConverter.class)
    @Override
    public ZuordnungFachId getFachId() {
        return fachId;
    }

    //other fields with getters/setters and setter for fachId
}
```

```java
@Entity
@Table(name = "MessungReference")
public class MessungReference extends FachparameterBaseReference<MessungFachId> {

    private MessungFachId fachId;

    @Convert(converter = MessungFachIdConverter.class)
    @Override
    public MessungFachId getFachId() {
        return fachId;
    }

    //other fields with getters/setters and setter for fachId
}
```

```java
@Entity
@Table(name = "FehlerReference")
public class FehlerReference extends FachparameterReference<FehlerFachId, FehlerFachCode> {

    private FehlerFachId fachId;
    private FehlerFachCode fachCode;

    @Convert(converter = FehlerFachIdConverter.class)
    @Override
    public FehlerFachId getFachId() {
        return fachId;
    }

    @Convert(converter = FehlerFachCodeConverter.class)
    @Override
    public FehlerFachCode getFachCode() {
        return fachCode;
    }

    //other fields with getters/setters and setters for fachId and fachCode
}
```

```java
@Entity
@Table(name = "FunktionReference")
public class FunktionReference extends FachparameterReference<FunktionFachId, FunktionFachCode> {

    private FunktionFachId fachId;
    private FunktionFachCode fachCode;

    @Convert(converter = FunktionFachIdConverter.class)
    @Override
    public FunktionFachId getFachId() {
        return fachId;
    }

    @Convert(converter = FunktionFachCodeConverter.class)
    @Override
    public FunktionFachCode getFachCode() {
        return fachCode;
    }

    //other fields with getters/setters and setters for fachId and fachCode
}
```

Then we have the following generic repositories:

```java
@NoRepositoryBean
public interface FachparameterBaseRepository<FACHID extends FachparameterFachId<FACHID>, ENTITY extends FachparameterBaseReference<FACHID>> extends JpaRepository<ENTITY, Long> {

    @Query("SELECT f FROM #{#entityName} f WHERE f.fachId = :fachId")
    @QueryHints(value = { @QueryHint(name = HibernateHints.HINT_FLUSH_MODE, value = "COMMIT") })
    Optional<ENTITY> findByFachId(FACHID fachId);
}
```

```java
public interface FachparameterRepository<FACHID extends FachparameterFachId<FACHID>, FACHCODE extends FachparameterFachCode<FACHCODE>, ENTITY extends FachparameterReference<FACHID, FACHCODE>> extends FachparameterBaseRepository<FACHID, ENTITY> {

    @Deprecated
    @Query("SELECT f FROM #{#entityName} f WHERE f.fachCode = :fachcode")
    @QueryHints(value = { @QueryHint(name = org.hibernate.annotations.QueryHints.FLUSH_MODE, value = "COMMIT") })
    Optional<ENTITY> findByFachcode(FACHCODE fachcode);
}
```

`FachparameterFachId` and `FachparameterFachCode` are interfaces which have multiple implementations as Java record classes (one example of each below):

```java
public interface FachparameterFachId<T extends FachparameterFachId<T>> extends CharSequence, Comparable<T>, Serializable {
    String fachId();
}
```

```java
public record FachparameterFachIdImpl(String fachId) implements FachparameterFachId<FachparameterFachIdImpl> {

    public FachparameterFachIdImpl {
        if (fachId.isBlank()) {
            throw new IllegalArgumentException();
        }
    }

    public static FachparameterFachIdImpl of(final String value) {
        return new FachparameterFachIdImpl(value);
    }

    @Override
    public String toString() {
        return fachId();
    }
}
```

```java
public interface FachparameterFachCode<T extends FachparameterFachCode<T>> extends CharSequence, Comparable<T>, Serializable {
    String fachCode();
}
```

```java
public record FachparameterFachCodeImpl(String fachCode) implements FachparameterFachCode<FachparameterFachCodeImpl> {

    public FachparameterFachCodeImpl {
        if (fachCode.isBlank()) {
            throw new IllegalArgumentException();
        }
    }

    public static FachparameterFachCodeImpl of(final String value) {
        return new FachparameterFachCodeImpl(value);
    }

    @Override
    public String toString() {
        return fachCode();
    }
}
```

And finally the following specific repositories (among a couple of other specific ones):

```java
@Repository
public interface ZuordnungRepository extends FachparameterBaseRepository<ZuordnungFachId, ZuordnungReference> {
}
```

```java
@Repository
public interface FehlerRepository extends FachparameterRepository<FehlerFachId, FehlerFachCode, FehlerReference> {
}
```

When starting the application the following exception is raised:

```
Caused by: java.lang.IllegalArgumentException: Validation failed for query for method public abstract java.util.Optional FachparameterRepository.findByFachcode(FachparameterFachCode)
	at org.springframework.data.jpa.repository.query.SimpleJpaQuery.validateQuery(SimpleJpaQuery.java:100)
	at org.springframework.data.jpa.repository.query.SimpleJpaQuery.<init>(SimpleJpaQuery.java:70)
	at org.springframework.data.jpa.repository.query.JpaQueryFactory.fromMethodWithQueryString(JpaQueryFactory.java:60)
	at org.springframework.data.jpa.repository.query.JpaQueryLookupStrategy$DeclaredQueryLookupStrategy.resolveQuery(JpaQueryLookupStrategy.java:170)
	at org.springframework.data.jpa.repository.query.JpaQueryLookupStrategy$CreateIfNotFoundQueryLookupStrategy.resolveQuery(JpaQueryLookupStrategy.java:252)
	at org.springframework.data.jpa.repository.query.JpaQueryLookupStrategy$AbstractQueryLookupStrategy.resolveQuery(JpaQueryLookupStrategy.java:95)
	at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.lookupQuery(QueryExecutorMethodInterceptor.java:111)
	...
69 common frames omitted Caused by: java.lang.IllegalArgumentException: org.hibernate.query.PathException: Could not resolve attribute 'fachCode' of 'FachparameterReference' due to the attribute being declared in multiple subtypes 'FunktionReference' and 'FehlerReference' at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:143) at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:167) at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:173) at org.hibernate.internal.AbstractSharedSessionContract.createQuery(AbstractSharedSessionContract.java:848) at org.hibernate.internal.AbstractSharedSessionContract.createQuery(AbstractSharedSessionContract.java:753) at org.hibernate.internal.AbstractSharedSessionContract.createQuery(AbstractSharedSessionContract.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerInvocationHandler.invoke(ExtendedEntityManagerCreator.java:364) at jdk.proxy2/jdk.proxy2.$Proxy168.createQuery(Unknown Source) at org.springframework.data.jpa.repository.query.SimpleJpaQuery.validateQuery(SimpleJpaQuery.java:94) ... 
75 common frames omitted Caused by: org.hibernate.query.PathException: Could not resolve attribute 'fachCode' of 'FachparameterReference' due to the attribute being declared in multiple subtypes 'FunktionReference' and 'FehlerReference' at org.hibernate.metamodel.model.domain.internal.EntityTypeImpl.findSubtypeAttribute(EntityTypeImpl.java:197) at org.hibernate.metamodel.model.domain.internal.EntityTypeImpl.findSubPathSource(EntityTypeImpl.java:174) at org.hibernate.query.sqm.SqmPathSource.getSubPathSource(SqmPathSource.java:92) at org.hibernate.query.sqm.tree.domain.AbstractSqmPath.get(AbstractSqmPath.java:202) at org.hibernate.query.sqm.tree.domain.AbstractSqmFrom.resolvePathPart(AbstractSqmFrom.java:196) at org.hibernate.query.hql.internal.DomainPathPart.resolvePathPart(DomainPathPart.java:42) at org.hibernate.query.hql.internal.BasicDotIdentifierConsumer.consumeIdentifier(BasicDotIdentifierConsumer.java:91) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitSimplePath(SemanticQueryBuilder.java:5060) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitIndexedPathAccessFragment(SemanticQueryBuilder.java:5009) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitGeneralPathFragment(SemanticQueryBuilder.java:4984) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitGeneralPathExpression(SemanticQueryBuilder.java:1776) at org.hibernate.grammars.hql.HqlParser$GeneralPathExpressionContext.accept(HqlParser.java:7699) at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:46) at org.hibernate.grammars.hql.HqlParserBaseVisitor.visitBarePrimaryExpression(HqlParserBaseVisitor.java:756) at org.hibernate.grammars.hql.HqlParser$BarePrimaryExpressionContext.accept(HqlParser.java:7157) at org.hibernate.query.hql.internal.SemanticQueryBuilder.createComparisonPredicate(SemanticQueryBuilder.java:2429) at 
org.hibernate.query.hql.internal.SemanticQueryBuilder.visitComparisonPredicate(SemanticQueryBuilder.java:2392) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitComparisonPredicate(SemanticQueryBuilder.java:269) at org.hibernate.grammars.hql.HqlParser$ComparisonPredicateContext.accept(HqlParser.java:6164) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitWhereClause(SemanticQueryBuilder.java:2244) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitWhereClause(SemanticQueryBuilder.java:269) at org.hibernate.grammars.hql.HqlParser$WhereClauseContext.accept(HqlParser.java:5905) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitQuery(SemanticQueryBuilder.java:1159) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitQuerySpecExpression(SemanticQueryBuilder.java:941) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitQuerySpecExpression(SemanticQueryBuilder.java:269) at org.hibernate.grammars.hql.HqlParser$QuerySpecExpressionContext.accept(HqlParser.java:1869) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitSimpleQueryGroup(SemanticQueryBuilder.java:926) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitSimpleQueryGroup(SemanticQueryBuilder.java:269) at org.hibernate.grammars.hql.HqlParser$SimpleQueryGroupContext.accept(HqlParser.java:1740) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitSelectStatement(SemanticQueryBuilder.java:443) at org.hibernate.query.hql.internal.SemanticQueryBuilder.visitStatement(SemanticQueryBuilder.java:402) at org.hibernate.query.hql.internal.SemanticQueryBuilder.buildSemanticModel(SemanticQueryBuilder.java:311) at org.hibernate.query.hql.internal.StandardHqlTranslator.translate(StandardHqlTranslator.java:71) at org.hibernate.query.internal.QueryInterpretationCacheStandardImpl.createHqlInterpretation(QueryInterpretationCacheStandardImpl.java:165) at 
org.hibernate.query.internal.QueryInterpretationCacheStandardImpl.resolveHqlInterpretation(QueryInterpretationCacheStandardImpl.java:147) at org.hibernate.internal.AbstractSharedSessionContract.interpretHql(AbstractSharedSessionContract.java:790) at org.hibernate.internal.AbstractSharedSessionContract.createQuery(AbstractSharedSessionContract.java:840) ... 84 common frames omitted ``` What are we missing? Why is this exception being raised? Any hints are highly appreciated!
In a DB stored procedure I am trying to insert data into multiple tables together, with normal INSERT queries. But when I get multiple service calls at the same time, the data is not inserted into all the tables; some inserts are skipped and processing moves on to the next request. A solution at the procedure level is required.

```sql
INSERT INTO RepurchaseMaster_Reg (RegId, SprRegId, SalesTo, FId, BillNo, BillDate, TotQty, TotPV, Status, SesId, UqId, IpAddress, TotalDP, TotUnitPrice, CommDate, Remarks, CBPayNo, MemType, BillType, OrdType, TaxType, TaxSubType, SaleType, TotGross, TotNet, TotValue, TotCGST, TotSGST, TotIGST, SplOffCode, SplOffAmt, TotalBP, TaxAmt, NetAmt, UpdatedDate, PayableAmount, TotMRP, RepType)
SELECT @RegId, @SprRegId, @SalesTo, @FId, @BillNo, @BillDate, ISNULL(@TotQty, 0), @TotPV, @Status, @SessId, @UqId, @IpAddress, @TotalDP, @TotUnitPrice, @CommDate, @Remarks, @CBPayNo, @MemType, @BillType, @OrdType, @TaxType, @TaxSubType, @SaleType, @TotGross, @TotNet, @TotValue, @TotCGST, @TotSGST, @TotIGST, @SplOffCode, @SplOffAmt, 0, 0, 0, GETDATE(), @PayableAmt, @TotMRP, @RepType

SET @RMid = @@IDENTITY

IF @BillType = 2
BEGIN
    INSERT INTO ProductCredits (RegId, Sprno, InAmt, OutAmt, Dated, Descr, Remarks, TypeOfInc, billno, rmid, ExpiredDate, LockedDate)
    SELECT 0 RegId, @RegId Sprno, @TotPV InAmt, 0 OutAmt, GETDATE() Dated,
           'Product Purchase thru ID : ' + @BillNo Descr, 'Thru Product Purchase' Remarks,
           'Product Purchase' TypeOfInc, @BillNo billno, @RMid,
           DATEADD(yyyy, 1, GETDATE()) ExpiredDate, DATEADD(yyyy, 1, GETDATE()) LockedDate

    UPDATE RepurchaseMaster_Reg SET TotPV = 0 WHERE rmid = @RMid
END

INSERT INTO RepurchaseItems_Reg (rmid, pid, qty, MRP, DP, UnitPrice, GrossValue, DisCount, NetValue, PV, GST, IGSTAmt, CGST, CGSTAmt, SGST, SGSTAmt, TotValue, BP, amount, BV, TaxAmt, CGSTTax, CGSTTaxAmt, SGSTTax, SGSTTaxAmt)
SELECT @rmid, ISNULL(ic.ProdItemId, t.pid) pid, qty, MRP, DP, UnitPrice, GrossValue, DisCount, NetValue, PV, GST, IGSTAmt, CGST, CGSTAmt, SGST, SGSTAmt, TotValue, ISNULL(BP, 0), ISNULL(Amount, 0), BV, ISNULL(IGSTAmt, 0), CGST, CGSTAmt, SGST, SGSTAmt
FROM tmpRPProductsItems t
OUTER APPLY (SELECT TOP 1 ProdItemId FROM ProductItemCodes WHERE pid = t.pid) ic
WHERE UQID = @UqId AND sessionid = @SessId AND RegId = @RegId AND FId = @FId

INSERT INTO RepurchasePayments_Reg (Regid, RMid, Mop, MopNo, MopDate, MopBank, Amount, Sesid, DDNo, ChequeNo, vouchercode)
SELECT @regid, @RMid, MOP, @MopRefNo, @Billdate, '', Amount, @sessid, DDNo, '', ''
FROM TmpPayments
WHERE Uniqid = @UqId AND SessId = @SessId AND RegId = @RegId AND FId = @FId

IF (@ReqToType = 'Stores')
BEGIN
    INSERT INTO RepurchaseReqToAddress_Reg (Rmid, Code, Name, Addr, State, District, City, Pincode, Mobile, Email, GSTNo)
    SELECT @RMid, 'Stores', Name, Address, State, District, City, LEFT(Pincode, 6), LEFT(Mobile, 15), Email, LEFT(GSTNo, 15)
    FROM StoreProfileAddress (NOLOCK)
    WHERE CountryId = @CountryId

    INSERT INTO RepurchaseAddress_Reg (Rmid, Name, Addr, State, District, City, Mobile, Email, Pincode, GSTNo)
    SELECT @RMid, @SName, @Addr, @StId, @District, @City, @Mobile, Email, @Pin, LEFT(GSTNo, 15)
    FROM StoreProfileAddress (NOLOCK)
    WHERE CountryId = @CountryId
```
For this purpose, when you need to track time while doing other actions, the MCU has peripherals called timers/counters (they have many purposes, but this is the basic one) which can generate interrupts after a set time. This looks like a nice project to get started with them! In a very simplified way, you'll need to enable one and set its parameters, calculate the desired number of ticks to achieve your desired time (there are many calculators for this online, for example [this one][1]), and implement an interrupt routine which will serve as the notification that the set time has elapsed. But I believe you can find better information sources about them, like the datasheet/reference manual of your MCU, or the many already-working examples for Arduino.

[1]: https://eleccelerator.com/avr-timer-calculator/
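The tick calculation mentioned above can be sketched as follows (Python for illustration; assumes an AVR-style timer in CTC mode with a 16 MHz clock, where the compare value is clock / prescaler * period - 1. These numbers are assumptions, not from the answer):

```python
F_CPU = 16_000_000  # assumed CPU clock in Hz (e.g. a 16 MHz AVR)

def ctc_compare_value(period_s, prescaler):
    """Compare-match value (e.g. OCR1A) for a desired period in CTC mode.

    The timer counts at F_CPU / prescaler ticks per second and fires
    when it reaches this value, so we subtract 1 (counting starts at 0).
    """
    return round(F_CPU / prescaler * period_s) - 1

# 1 s with a /256 prescaler fits a 16-bit timer:
print(ctc_compare_value(1.0, 256))   # 62499
# 1 ms with a /64 prescaler:
print(ctc_compare_value(0.001, 64))  # 249
```

If the result exceeds the timer's width (255 for 8-bit, 65535 for 16-bit), pick a larger prescaler.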
What is the polars equivalent of pandas.pivot_table.reindex? I am trying to convert my pandas code to polars and I hit a snag. Modified example from polars user guide ``` import polars as pl import pandas as pd df = pl.DataFrame( { "foo": ["A", "A", "B", "B", "C"], "N": [1, 2, 2, 4, 2], "bar": ["k", "l", "n", "n", "o"], } ) df_pl = df.pivot(index="foo", columns="bar", values="N", aggregate_function="max") print (df_pl) shape: (3, 5) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ foo ┆ k ┆ l ┆ n ┆ o β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════║ β”‚ A ┆ 1 ┆ 2 ┆ null ┆ null β”‚ β”‚ B ┆ null ┆ null ┆ 4 ┆ null β”‚ β”‚ C ┆ null ┆ null ┆ null ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ ``` equivalent code of pandas for the above polars pivot output ``` df_pd = pd.pivot_table(df.to_pandas(), values = "N", index = "foo", columns = ["bar"], aggfunc= 'max', sort=True) print (df_pd) bar k l n o foo A 1.0 2.0 NaN NaN B NaN NaN 4.0 NaN C NaN NaN NaN 2.0 ``` Now, I would like to reindex the pivot table in certain order. As you can see, pandas added the new column with all NaN's as the data for column "m" doesn't exist in the original df. ``` reindex_list = ['k', 'm', 'l', 'n', 'o'] df_pd_reindexed = df_pd.reindex(reindex_list, axis = 1) print (df_pd_reindexed) bar k m l n o foo A 1.0 NaN 2.0 NaN NaN B NaN NaN NaN 4.0 NaN C NaN NaN NaN NaN 2.0 ``` How to get similar output using polars? I tried the df.select method to order the columns, but it doesn't help in this scenario as the column is missing and polars throws an error which is obvious. Unless, I need to add the missing columns manually and populate them with nulls and then call the select method. I am looking for a simpler method than adding the missing columns manually. 
``` df_pl = df_pl.select(["foo"] + reindex_list) Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Python39\lib\site-packages\polars\dataframe\frame.py", line 8097, in select return self.lazy().select(*exprs, **named_exprs).collect(_eager=True) File "C:\Python39\lib\site-packages\polars\lazyframe\frame.py", line 1730, in collect return wrap_df(ldf.collect()) polars.exceptions.ColumnNotFoundError: m Error originated just after this operation: DF ["foo", "k", "l", "n"]; PROJECT */5 COLUMNS; SELECTION: "None" ``` Any guidance is much appreciated. Thanks
polars equivalent of pandas.pivot_table.reindex
|python|pandas|dataframe|python-polars|
I have localized our Webflow site with EN (default) and FR. I need to regularly update the content of a CMS collection via a Make automation for both the EN and FR locales. The problem is that a CMS collection item has a single ID for both locales. It is of paramount importance to be able to modify the content of CMS collection items programmatically. Using the Webflow UI, it is possible to modify CMS content after selecting a locale, but the API does not apparently differentiate the locales of a CMS collection. Thanks for any resource and/or assistance in that regard.
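One direction worth sketching, based on my understanding of the Webflow Data API v2 (treat the parameter name as an assumption to verify against the official docs): localized variants share the item ID, and the locale variant is selected with a `cmsLocaleId` query parameter on the item endpoints. A hypothetical URL builder, with illustrative IDs:

```python
def localized_item_url(collection_id: str, item_id: str, cms_locale_id: str) -> str:
    """Build a Webflow Data API v2 item URL targeting one locale variant.

    The same collection_id/item_id pair is used for every locale;
    cmsLocaleId (assumed parameter name) picks the EN or FR variant.
    """
    return (
        f"https://api.webflow.com/v2/collections/{collection_id}"
        f"/items/{item_id}?cmsLocaleId={cms_locale_id}"
    )

# Hypothetical IDs; a Make HTTP module would PATCH this URL per locale.
print(localized_item_url("col_123", "item_456", "loc_fr"))
```

In a Make scenario this would mean one HTTP request per locale, same item ID, different `cmsLocaleId`.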
|javascript|angular|authentication|sap-commerce-cloud|gigya|
|android|flutter|firebase|firebase-cloud-messaging|
In my work I use both Stata and R. Currently I want to perform a dose-response meta-analysis and use the "dosresmeta" package in R. The following syntax is used:

```
DRMeta <- dosresmeta(
  formula = logRR ~ dose,
  type = type,
  cases = n_cases,
  n = n,
  lb = RR_lo,
  ub = RR_hi,
  id = ID,
  data = DRMetaAnalysis
)
```

When executing this syntax, however, I encounter a problem. The following error message appears:

`Error in if (delta < tol) break : missing value where TRUE/FALSE needed.`

The reason for this error message is that I am missing some values for the variables "n_cases" and "n", which the authors have not provided. Interestingly, Stata does not require this information for the calculation of the dose-response meta-analysis. Is there a way to perform the analysis in R without the values for "n_cases" and "n"? What can I do if I do not have these values? I have already asked the authors for the missing values, unfortunately without success. However, I need these studies for the dose-response meta-analysis, so excluding them is not an option.

Best regards
David
I have a Spark job, and every time I do a historical fix, the job fails because the memory limit is exceeded. While doing a historical fix, I create a table in my temporary database and read it (for the first time) from there. I am unable to figure out exactly what the reason is. The only difference I can see is that the number of parquet files is 10 times higher during the historical-fix run than during the normal Spark job run.

[Historical Fix Table Stats](https://i.stack.imgur.com/stNTm.png)
[Normal Table Stats](https://i.stack.imgur.com/M2ixX.png)

When I check the Spark history server, I find the following logs in `stderr`:

```
24/03/26 16:12:23 INFO compress.CodecPool: Got brand-new compressor [.snappy]
24/03/26 16:12:24 INFO datasources.FileScanRDD: Reading File path: hdfs://namenode/warehouse/tablespace/external/hive/tempdb.db/some_table/eb4ce3cb9d5fa3a0-cffd3b5a00000004_650415028_data.17.parq, range: 0-125039486, partition values: [empty row]
24/03/26 16:12:24 INFO compat.FilterCompat: Filtering using predicate: noteq(scd_end, null)
24/03/26 16:12:24 INFO compat.FilterCompat: Filtering using predicate: noteq(scd_end, null)
24/03/26 16:12:24 INFO compat.FilterCompat: Filtering using predicate: noteq(scd_end, null)
24/03/26 16:12:40 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 510
24/03/26 16:12:40 INFO executor.Executor: Running task 5.2 in stage 20.0 (TID 510)
24/03/26 16:12:40 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
24/03/26 16:12:40 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
24/03/26 16:12:40 INFO datasources.FileScanRDD: Reading File path: hdfs://namenode/warehouse/tablespace/external/hive/tempdb.db/some_table/eb4ce3cb9d5fa3a0-cffd3b5a00000000_1522645193_data.10.parq, range: 0-131168664, partition values: [empty row]
24/03/26 16:12:40 INFO compat.FilterCompat: Filtering using predicate: noteq(scd_end, null)
24/03/26 16:12:40 INFO compat.FilterCompat: Filtering using predicate: noteq(scd_end, null)
24/03/26 16:12:40 INFO compat.FilterCompat: Filtering using predicate: noteq(scd_end, null)
24/03/26 16:12:41 INFO compress.CodecPool: Got brand-new decompressor [.snappy]
24/03/26 16:12:46 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
```

When I compare the execution flows, I can see that in the normal run Spark is able to scan the parquet files and show the number of rows fetched, while this is not the case in the historical-fix run.

[Historical Fix Run](https://i.stack.imgur.com/ve661.png)
[Normal Table Run](https://i.stack.imgur.com/D78uJ.png)

I want to understand whether I am thinking in the right direction. Based on the above logs and the DAG visualisation, it seems to me that the huge number of files is causing overhead for Spark and triggering the memory overlimit. Is my understanding correct, or am I thinking and debugging in completely the wrong direction? Tips for debugging memory-related issues in Spark are greatly welcomed (as I am fairly new to Spark).
How to edit a localized Webflow CMS collection
|locale|webflow|
I have an NPC class which has a method called `chase` that is referenced inside the `update` function. When `chase` is called, it says it's not a function or it's not defined. I have another method being referenced in the same way that works.

The `chase` method in npcClass.js:

```js
chase() {
    this.Phaser.Physics.moveToObject(NPC_spr, player_spr, 100);
    this.anims.play('NPC', true);
}
```

```js
bullet_ary = bulletGroup.getChildren();
for (let bullet_spr of bullet_ary) {
    bullet_spr.updateMe();
}

NPC_ary = npcGroup.getChildren();
for (let npc_spr of NPC_ary) {
    npc_spr.chase();
}
```

`chase` is called at `npc_spr.chase();`, which is in the update function. This creates an error, while `bullet_spr.updateMe()` just above it works; `updateMe` is also a method. What I expected was for the method to be called like the one above for the bullets. I have tried many different ways of rewriting it, but everything results in the "not defined" or "not a function" error.

Error in the console:

```
Uncaught TypeError: npc_spr.chase is not a function
```
Author of Scuri here. So nice to see a discussion of my work. Thanks Azure-Techno for starting this thread, and ekostadinov and Shabeer M as well for your words. And yes - it "only" does the mundane and repetitive work. It won't do the heavy-lift thought process - it can't, really. (Maybe ChatGPT might?) Anyway - thanks!
Gleyce, I believe you watched Lira's Hashtag course. I'm also following the classes and I ran into the same situation. I basically ran all the code again from the beginning, or you can just use "Run All". If that doesn't work, try removing the "discrete_color" part.
I have used the code below to run a floating-window forecast to predict variance:

```
index = returns.index
start_loc = 0
end_loc = np.where(index >= "2010-1-1")[0].min()
forecasts = {}
for i in range(20):
    sys.stdout.write(".")
    sys.stdout.flush()
    res = am.fit(first_obs=i, last_obs=i + end_loc, disp="off")
    temp = res.forecast(horizon=3).variance
    fcast = temp.iloc[0]
    forecasts[fcast.name] = fcast
print()
print(pd.DataFrame(forecasts).T)
```

Is there also a possibility to obtain a history of the parameters (`params`) for each day in the rolling window? Is there an object similar to `ARCHModelForecast` for this? I could not find one.
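One approach, assuming there is no dedicated container for this, is to have the loop collect `res.params` for each window alongside the forecasts. A minimal plain-Python sketch of the pattern (the `fit_window` function is a hypothetical stand-in for `am.fit(...)` so this runs without the `arch` package; `res.params` in `arch` is a pandas Series, represented here as a dict):

```python
def fit_window(first_obs, last_obs):
    # Hypothetical stand-in for am.fit(first_obs=..., last_obs=..., disp="off").
    # Returns a dict of fitted parameters, like res.params does.
    n = last_obs - first_obs
    return {"omega": 0.1, "alpha[1]": 0.05 + 0.001 * n, "beta[1]": 0.9}

end_loc = 100  # index position of the first forecast date, as in the question
params_history = {}
for i in range(20):
    params = fit_window(i, i + end_loc)
    # Key by window index, or by the forecast date (e.g. index[i + end_loc]).
    params_history[i] = params

# With pandas this becomes one parameter vector per window:
# pd.DataFrame(params_history).T
```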
Arch_model rolling window forecasting
|arch|
null
Source: https://github.com/DominicHolmes/dot-globe

I am trying to make the 3d globe from the repo above lock about the z axis. I want the globe to only rotate horizontally and ignore unwanted rotations. If it is possible I'd like to allow up to -30 degrees of rotation to the bottom of the globe, and 30 degrees of rotation to the top of the globe. I'm not very skilled with SCNScene or SCNCamera. Currently horizontal swipes also rotate the whole globe instead of spinning it.

In the repo the code below was added in the function **setupCamera** to prevent unwanted globe rotations. But this does not work.

```
constraint.isGimbalLockEnabled = true
cameraNode.constraints = [constraint]
sceneView.scene?.rootNode.addChildNode(cameraNode)
```

I also tried doing this but it also didn't work.

```
let constraint = SCNTransformConstraint.orientationConstraint(inWorldSpace: true) { (_, orientation) -> SCNQuaternion in
    // Keep the same orientation around x and z axes, allow rotation around y-axis
    return SCNQuaternion(x: 0, y: orientation.y, z: 0, w: orientation.w)
}
```

Here is the code to set up the camera (where these constraints should be added). The rest of the code is in the repository above. Everything relevant to this question and code is in the file linked above.

```
import SwiftUI
import SceneKit

typealias GenericControllerRepresentable = UIViewControllerRepresentable

@available(iOS 13.0, *)
private struct GlobeViewControllerRepresentable: GenericControllerRepresentable {
    var particles: SCNParticleSystem? = nil
    //@Binding public var showProf: Bool

    func makeUIViewController(context: Context) -> GlobeViewController {
        let globeController = GlobeViewController(earthRadius: 1.0)//, showProf: $showProf
        updateGlobeController(globeController)
        return globeController
    }

    func updateUIViewController(_ uiViewController: GlobeViewController, context: Context) {
        updateGlobeController(uiViewController)
    }

    private func updateGlobeController(_ globeController: GlobeViewController) {
        globeController.dotSize = CGFloat(0.005)
        globeController.enablesParticles = true
        if let particles = particles {
            globeController.particles = particles
        }
    }
}

@available(iOS 13.0, *)
public struct GlobeView: View {
    //@Binding public var showProf: Bool
    public var body: some View {
        GlobeViewControllerRepresentable()//showProf: $showProf
    }
}

import Foundation
import SceneKit
import CoreImage
import SwiftUI
import MapKit

public typealias GenericController = UIViewController
public typealias GenericColor = UIColor
public typealias GenericImage = UIImage

public class GlobeViewController: GenericController {
    public var earthNode: SCNNode!
    private var sceneView: SCNView!
    private var cameraNode: SCNNode!

    private var worldMapImage: CGImage {
        guard let path = Bundle.module.path(forResource: "earth-dark", ofType: "jpg") else {
            fatalError("Could not locate world map image.")
        }
        guard let image = GenericImage(contentsOfFile: path)?.cgImage else { fatalError() }
        return image
    }

    private lazy var imgData: CFData = {
        guard let imgData = worldMapImage.dataProvider?.data else {
            fatalError("Could not fetch data from world map image.")
        }
        return imgData
    }()

    public var particles: SCNParticleSystem? {
        didSet {
            if let particles = particles {
                sceneView.scene?.rootNode.removeAllParticleSystems()
                sceneView.scene?.rootNode.addParticleSystem(particles)
            }
        }
    }

    public init(earthRadius: Double) {
        self.earthRadius = earthRadius
        super.init(nibName: nil, bundle: nil)
    }

    public init(earthRadius: Double, dotCount: Int) {
        self.earthRadius = earthRadius
        self.dotCount = dotCount
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    public override func viewDidLoad() {
        super.viewDidLoad()
        setupScene()
        setupParticles()
        setupCamera()
        setupGlobe()
        setupDotGeometry()
    }

    private func setupScene() {
        var scene = SCNScene()
        sceneView = SCNView(frame: view.frame)
        sceneView.scene = scene
        sceneView.showsStatistics = true
        sceneView.backgroundColor = .black
        sceneView.allowsCameraControl = true
        self.view.addSubview(sceneView)
    }

    private func setupParticles() {
        guard let stars = SCNParticleSystem(named: "StarsParticles.scnp", inDirectory: nil) else { return }
        stars.isLightingEnabled = false
        if sceneView != nil {
            sceneView.scene?.rootNode.addParticleSystem(stars)
        }
    }

    private func setupCamera() {
        self.cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
        sceneView.scene?.rootNode.addChildNode(cameraNode)
    }

    private func setupGlobe() {
        self.earthNode = EarthNode(radius: earthRadius, earthColor: earthColor, earthGlow: glowColor, earthReflection: reflectionColor)
        sceneView.scene?.rootNode.addChildNode(earthNode)
    }

    private func setupDotGeometry() {
        let textureMap = generateTextureMap(dots: dotCount, sphereRadius: CGFloat(earthRadius))
        let newYork = CLLocationCoordinate2D(latitude: 44.0682, longitude: -121.3153)
        let newYorkDot = closestDotPosition(to: newYork, in: textureMap)
        let dotColor = GenericColor(white: 1, alpha: 1)
        let oceanColor = GenericColor(cgColor: UIColor.systemRed.cgColor)
        let highlightColor = GenericColor(cgColor: UIColor.systemRed.cgColor)
        let threshold: CGFloat = 0.03

        let dotGeometry = SCNSphere(radius: dotRadius)
        dotGeometry.firstMaterial?.diffuse.contents = dotColor
        dotGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant

        let highlightGeometry = SCNSphere(radius: dotRadius * 5)
        highlightGeometry.firstMaterial?.diffuse.contents = highlightColor
        highlightGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant

        let oceanGeometry = SCNSphere(radius: dotRadius)
        oceanGeometry.firstMaterial?.diffuse.contents = oceanColor
        oceanGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant

        var positions = [SCNVector3]()
        var dotNodes = [SCNNode]()
        var highlightedNode: SCNNode? = nil

        for i in 0...textureMap.count - 1 {
            let u = textureMap[i].x
            let v = textureMap[i].y
            let pixelColor = self.getPixelColor(x: Int(u), y: Int(v))
            let isHighlight = u == newYorkDot.x && v == newYorkDot.y
            if (isHighlight) {
                let dotNode = SCNNode(geometry: highlightGeometry)
                dotNode.name = "NewYorkDot"
                dotNode.position = textureMap[i].position
                positions.append(dotNode.position)
                dotNodes.append(dotNode)
                highlightedNode = dotNode
            } else if (pixelColor.red < threshold && pixelColor.green < threshold && pixelColor.blue < threshold) {
                let dotNode = SCNNode(geometry: dotGeometry)
                dotNode.position = textureMap[i].position
                positions.append(dotNode.position)
                dotNodes.append(dotNode)
            }
        }

        DispatchQueue.main.async {
            let dotPositions = positions as NSArray
            let dotIndices = NSArray()
            let source = SCNGeometrySource(vertices: dotPositions as! [SCNVector3])
            let element = SCNGeometryElement(indices: dotIndices as! [Int32], primitiveType: .point)

            let pointCloud = SCNGeometry(sources: [source], elements: [element])
            let pointCloudNode = SCNNode(geometry: pointCloud)
            for dotNode in dotNodes {
                pointCloudNode.addChildNode(dotNode)
            }
            self.sceneView.scene?.rootNode.addChildNode(pointCloudNode)

            //this moves the camera to show the top of the earth
            DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
                if let highlightedNode = highlightedNode {
                    self.alignPointToPositiveZ(for: pointCloudNode, targetPoint: highlightedNode.position)
                }
            }
        }
    }

    func alignPointToPositiveZ(for sphereNode: SCNNode, targetPoint: SCNVector3) {
        // Compute normalized vector from Earth's center to the target point
        let targetDirection = targetPoint.normalized()
        // Compute quaternion rotation
        let up = SCNVector3(0, 0, 1)
        let rotationQuaternion = SCNQuaternion.fromVectorRotate(from: up, to: targetDirection)
        sphereNode.orientation = rotationQuaternion
    }

    typealias MapDot = (position: SCNVector3, x: Int, y: Int)

    private func generateTextureMap(dots: Int, sphereRadius: CGFloat) -> [MapDot] {
        let phi = Double.pi * (sqrt(5) - 1)
        var positions = [MapDot]()
        for i in 0..<dots {
            let y = 1.0 - (Double(i) / Double(dots - 1)) * 2.0 // y is 1 to -1
            let radiusY = sqrt(1 - y * y)
            let theta = phi * Double(i) // Golden angle increment
            let x = cos(theta) * radiusY
            let z = sin(theta) * radiusY
            let vector = SCNVector3(x: Float(sphereRadius * x), y: Float(sphereRadius * y), z: Float(sphereRadius * z))
            let pixel = equirectangularProjection(point: Point3D(x: x, y: y, z: z), imageWidth: 2048, imageHeight: 1024)
            let position = MapDot(position: vector, x: pixel.u, y: pixel.v)
            positions.append(position)
        }
        return positions
    }

    struct Point3D {
        let x: Double
        let y: Double
        let z: Double
    }

    struct Pixel {
        let u: Int
        let v: Int
    }

    func equirectangularProjection(point: Point3D, imageWidth: Int, imageHeight: Int) -> Pixel {
        let theta = asin(point.y)
        let phi = atan2(point.x, point.z)
        let u = Double(imageWidth) / (2.0 * .pi) * (phi + .pi)
        let v = Double(imageHeight) / .pi * (.pi / 2.0 - theta)
        return Pixel(u: Int(u), v: Int(v))
    }

    private func distanceBetweenPoints(x1: Int, y1: Int, x2: Int, y2: Int) -> Double {
        let dx = Double(x2 - x1)
        let dy = Double(y2 - y1)
        return sqrt(dx * dx + dy * dy)
    }

    private func closestDotPosition(to coordinate: CLLocationCoordinate2D, in positions: [(position: SCNVector3, x: Int, y: Int)]) -> (x: Int, y: Int) {
        let pixelPositionDouble = getEquirectangularProjectionPosition(for: coordinate)
        let pixelPosition = (x: Int(pixelPositionDouble.x), y: Int(pixelPositionDouble.y))
        let nearestDotPosition = positions.min { p1, p2 in
            distanceBetweenPoints(x1: pixelPosition.x, y1: pixelPosition.y, x2: p1.x, y2: p1.y) <
                distanceBetweenPoints(x1: pixelPosition.x, y1: pixelPosition.y, x2: p2.x, y2: p2.y)
        }
        return (x: nearestDotPosition?.x ?? 0, y: nearestDotPosition?.y ?? 0)
    }

    /// Convert a coordinate to an (x, y) coordinate on the world map image
    private func getEquirectangularProjectionPosition(
        for coordinate: CLLocationCoordinate2D
    ) -> CGPoint {
        let imageHeight = CGFloat(worldMapImage.height)
        let imageWidth = CGFloat(worldMapImage.width)
        // Normalize longitude to [0, 360). Longitude in MapKit is [-180, 180)
        let normalizedLong = coordinate.longitude + 180
        // Calculate x and y positions
        let xPosition = (normalizedLong / 360) * imageWidth
        // Note: Latitude starts from top, hence the `-` sign
        let yPosition = (-(coordinate.latitude - 90) / 180) * imageHeight
        return CGPoint(x: xPosition, y: yPosition)
    }

    private func getPixelColor(x: Int, y: Int) -> (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(imgData)
        let pixelInfo: Int = ((worldMapWidth * y) + x) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
        return (r, g, b, a)
    }
}
```
You are getting this error because of the message "observed package id 'platform-tools' in inconsistent location ...". To fix it, go to your SDK folder, e.g. "C:\Users\user\AppData\Local\Android\Sdk". There you will see folders like platform-tools, platform-tools 1, platform-tools 2, and so on. Remove the duplicate folders that don't contain adb, fastboot, etc., and rename the remaining one to platform-tools. It worked in my case.
This toy code: ```cpp union a { int b, c; }; template<int a::* p> void b() {} template<> void b< &a::b >() {} template<> void b< &a::c >() {} ``` Builds fine in clang and gcc, but in MSVC gives: > error C2766: explicit specialization; 'void b<pointer-to-member(0x0)>(void)' has already been defined Compiler-explorer link: https://godbolt.org/z/TfoMjv8ch As can be seen in CE, in gcc/clang the specialization names mangling encodes the union-member names: ``` _Z1bIXadL_ZN1a1bEEEEvv ^ _Z1bIXadL_ZN1a1cEEEEvv ^ ``` but in MSVC the mangling encodes just the member offset within the union (0), and therefore the specializations are considered duplicate: ``` ??$b@$0A@@@YAXXZ ^ ``` Is this an MSVC bug or undefined/unspecified behavior? Any pointers (hehe) to relevant standard clauses?
C++: Pointers to union-members considered distinct?
|c++|language-lawyer|
On BigQuery, I have a table with many columns. I want to create a new table showing counts of null and distinct records for each column. I am using the code below to get the nulls:

```
(SELECT column_name, COUNT(1) AS null_count
 FROM t,
 UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r'"(\w+)":null')) column_name
 GROUP BY column_name)
```

The above code saves me from specifying all column names or looping over them. Is it possible to find counts of distinct values for each column too, using the above approach? How can I modify the above code to also include counts of distinct values in each column?
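To make the expected output concrete, here is the same per-column null / distinct counting sketched in plain Python over rows represented as dicts. This is only an illustration of the logic the query should reproduce, not BigQuery code:

```python
def column_stats(rows):
    """Per-column null count and distinct (non-null) value count."""
    nulls, distinct = {}, {}
    for row in rows:
        for col, val in row.items():
            if val is None:
                # Count the null and make sure the column is registered.
                nulls[col] = nulls.get(col, 0) + 1
                distinct.setdefault(col, set())
            else:
                distinct.setdefault(col, set()).add(val)
    return {col: {"null_count": nulls.get(col, 0),
                  "distinct_count": len(vals)}
            for col, vals in distinct.items()}

rows = [
    {"a": 1, "b": None},
    {"a": 1, "b": "x"},
    {"a": 2, "b": None},
]
print(column_stats(rows))
# -> {'a': {'null_count': 0, 'distinct_count': 2},
#     'b': {'null_count': 2, 'distinct_count': 1}}
```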
BigQuery: Finding distinct counts for multiple columns without specifying column names
|sql|google-bigquery|distinct-values|
I am using functools.lru_cache to implement Fibonacci using memoization. But I do not understand the number of hits that my output generates. My code is as follows: ``` @lru_cache(maxsize=8) def fib(n): if n <2: return n else: return fib(n-1) + fib(n-2) print(fib(8)) print(fib.cache_info()) for i in range(9, 12): fib(i) print(fib.cache_info()) ``` The output is: ```none CacheInfo(hits=6, misses=9, maxsize=8, currsize=8) CacheInfo(hits=8, misses=10, maxsize=8, currsize=8) CacheInfo(hits=10, misses=11, maxsize=8, currsize=8) CacheInfo(hits=12, misses=12, maxsize=8, currsize=8) ``` I understand that the number of misses is 8 (`fib(0)`,...`fib(8)`) in the call `fib(8)` and the number of misses in all the subsequent calls. But the number of hits in `fib(9)`, `fib(10)`, `fib(11)` is not clear to me. If I understand correctly, in `fib(9)`, the hits will be for `fib(2)`...`fib(8)`. So, the number of hits will be 7. Is it due to the `maxsize=8`?
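One way to see where the hits come from is to replay the calls and look at the deltas in `cache_info()` (plain stdlib sketch). The key point: `fib(9)` only recurses one level deep, calling `fib(8)` and `fib(7)`, which are both already cached. So each subsequent call adds exactly 2 hits and 1 miss; it is not related to `maxsize=8` as long as the top values are still cached:

```python
from functools import lru_cache

@lru_cache(maxsize=8)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(8)
info = fib.cache_info()
print(info)  # hits=6, misses=9: fib(0)..fib(8) are 9 misses;
             # each fib(n-2) lookup for n=3..8 is a hit (6 hits).

fib(9)       # fib(8) and fib(7) are cache hits; fib(9) is one new miss
delta_hits = fib.cache_info().hits - info.hits
delta_misses = fib.cache_info().misses - info.misses
print(delta_hits, delta_misses)  # -> 2 1
```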
I want to implement Google Mobile Ads in Unity, and I see [RegisterEventHandlers](https://developers.google.com/admob/unity/rewarded#listen_to_rewarded_ad_events) used to handle some events for `RewardedAd` and `InterstitialAd`. I've seen their [example](https://github.com/googleads/googleads-mobile-unity/tree/main/samples/HelloWorld), but they defined `RegisterEventHandlers` one by one even though the event names are the same, and I intend to create a single `RegisterEventHandlers` to handle both the `RewardedAd` and `InterstitialAd` events.

> Please forgive me if it's a basic question; I've only been learning Unity and C# for less than 3 weeks.

I've tried to define a `RegisterEventHandlers`, but I'm stuck on how to define an input parameter that accepts both the `RewardedAd` and `InterstitialAd` types, like this:

```csharp
private static RewardedAd _rewardBaseVideo;
private static InterstitialAd _interstitialAd;

private void RequestAdmobRewardVideo()
{
    //another code
    _rewardBaseVideo = ad;
    RegisterEventHandlers(ad);
}

private void RequestAdmobInterstitial()
{
    //another code
    _interstitialAd = ad;
    RegisterEventHandlers(ad);
}

// how do I dynamically declare the `RewardedAd` and `InterstitialAd` input type?
private void RegisterEventHandlers(RewardedAd ad)
{
}
```

I've also tried to define `RegisterEventHandlers` like this [SO Answer](https://stackoverflow.com/a/965593/8943429):

```
private void RegisterEventHandlers<T>(T ad) where T: RewardedAd, InterstitialAd
{
}
```

but I got the error `The class type constraint 'InterstitialAd' must come before any other constraints`. So how do I define `RegisterEventHandlers` to gracefully accept both `RewardedAd` and `InterstitialAd`?
Does having large number of files causes overhead on Spark?
|apache-spark|parquet|executor|
null
Here's a PowerShell script and an AutoHotkey script based on lulle2007200's and MaloW's answers --- SetBrightness.ps1 param( [Parameter(Mandatory = $true, Position = 0, HelpMessage = "Brightness value between 1.0 and 6.0")] [ValidateRange(1.0, 6.0)][double]$brightness ) Add-Type -TypeDefinition @" using System; using System.Runtime.InteropServices; public class ScreenBrightnessSetter { [DllImport("user32.dll")] public static extern IntPtr MonitorFromWindow(IntPtr hwnd, uint dwFlags); [DllImport("kernel32", CharSet = CharSet.Unicode)] public static extern IntPtr LoadLibrary(string lpFileName); [DllImport("kernel32", CharSet = CharSet.Ansi, ExactSpelling = true, SetLastError = true)] public static extern IntPtr GetProcAddress(IntPtr hModule, int address); public delegate void DwmpSDRToHDRBoostPtr(IntPtr monitor, double brightness); } "@ $primaryMonitor = [ScreenBrightnessSetter]::MonitorFromWindow([IntPtr]::Zero, 1) $hmodule_dwmapi = [ScreenBrightnessSetter]::LoadLibrary("dwmapi.dll") $procAddress = [ScreenBrightnessSetter]::GetProcAddress($hmodule_dwmapi, 171) $changeBrightness = ( [System.Runtime.InteropServices.Marshal]::GetDelegateForFunctionPointer( $procAddress, [ScreenBrightnessSetter+DwmpSDRToHDRBoostPtr]) ) $changeBrightness.Invoke($primaryMonitor, $brightness) --- SetBrightness.ahk #Requires AutoHotkey v2.0 SetBrightness(value) { cmd := 'powershell.exe -ExecutionPolicy Bypass -Command ' cmd .= '"& .\SetBrightness.ps1 ' . value . '"' RunWait cmd, A_ScriptDir, "Hide" } >^>+1::SetBrightness(1) >^>+2::SetBrightness(2) >^>+3::SetBrightness(3) >^>+4::SetBrightness(4) >^>+5::SetBrightness(5) >^>+6::SetBrightness(6)
<!-- language-all: sh --> * **With limited exceptions in PowerShell, on Windows there is _no_ support for *shell-level* globbing - _target commands themselves_ must perform resolution of wildcard patterns** to matching filenames; if they don't, globbing must be performed _manually, up front_, and the results passed as literal paths; see the bottom section for background information. * **PowerShell**: * Perhaps surprisingly, you _can_ invoke an _executable_ by wildcard pattern, as [zett42](https://stackoverflow.com/users/7571258/zett42) points out, though *that behavior is problematic* (see bottom section): # Surprisingly DOES find C:\Windows\System32\attrib.exe # and invokes it. C:\Windows\System3?\attr*.exe /? * Generally, you can discover commands, including external programs, via the [`Get-Command`](https://learn.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Core/Get-Command) cmdlet. * **Many file-processing cmdlets in PowerShell _do_ perform their own globbing (e.g., `Get-ChildItem`, `Remove-Item`); if you're calling commands that do not, notably external programs that don't, you must perform globbing _manually_, up front**, except on _Unix_-like platforms when calling _external programs_, where PowerShell _does_ perform automatic globbing (see bottom section): * Use [`Convert-Path`](https://learn.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/Convert-Path) to get the full, file-system-native paths of matching files or directories. * While [`Resolve-Path`](https://learn.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/Resolve-Path) may work too, it returns _objects_ whose `.ProviderPath` property you need to access to get the same information (_stringifying_ these objects, as happens implicitly when you pass them to external programs, yields their `.Path` property, which may be based on _PowerShell-only_ drives that external programs and .NET APIs know nothing about.) 
* For more control over what is matched, use [`Get-ChildItem`](https://learn.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/Get-ChildItem) and access the result objects' `.Name` or `.FullName` property, as needed; for instance, `Get-ChildItem` allows you to limit matching to _files_ (`-File`) or _directories_ (`-Directory`) only. * **PowerShell makes it easy to use the results of manually performed globbing _programmatically_**; the following example passes the full paths of all `*.txt` files in the current directory to the `cmd.exe`'s `echo` command as individual arguments; PowerShell automatically encloses paths with spaces in `"..."`, if needed: cmd /c echo (Get-ChildItem -Filter *.txt).FullName * Generally, note that PowerShell's [wildcard patterns](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_wildcards) are more powerful than those of the host platform's file-system APIs, and notably include support for character sets (e.g. `[ab]`) and ranges (e.g. `[0-9]`); another important difference is that `?` matches _exactly one_ character, whereas the native file-system APIs - both on Windows and Unix-like platforms - match _none or one_. * However, when using the `-Filter` parameter of file-processing cmdlets such as `Get-ChildItem`, it is the host platform's file-system APIs that are used, which - while limiting features (as noted) - *improves performance*. * **`cmd.exe` (Command Prompt, the legacy shell)**: * `cmd.exe` does _not_ support calling _executables_ by wildcard pattern; **_some_ of `cmd.exe`'s internal commands (e.g., `dir` and `del`) and _some_ standard external programs (e.g., `attrib.exe`) _do_ perform their own globbing; otherwise you must perform globbing _manually_, up front**: * `where.exe`, the external program for discovering external programs fundamentally only supports wildcard patterns in executable _names_ (e.g. 
`where find*.exe`), not in _paths_, which limits wildcard-based lookups to executables located in directories listed in the `PATH` environment variable. :: OK - "*" is part of a *name* only where.exe find*.exe :: !! FAILS: "*" or "?" must not be part of a *path* :: !! -> "ERROR: Invalid pattern is specified in "path:pattern"." where.exe C:\Windows\System32\find*.exe * Globbing via `dir` appears to be limited to wildcard characters in the _last_ path component: :: OK - "*" is only in the *last* path component. dir C:\Windows\System32\attri* :: !! FAILS: "*" or "?" must not occur in *non-terminal* components. :: !! -> "The filename, directory name, or volume label syntax is incorrect." dir C:\Windows\System3?\attri* * **Using manual globbing results _programmatically_ is quite *cumbersome* in `cmd.exe`** and requires use of `for` statements (whose wildcard matching has the same limitations as the `dir` command); for example, using the syntax for _batch files_ (`.cmd` or `.bat` files): * To use the resolved executable file path for invocation (assuming only _one_ file matches): @echo off setlocal :: Use a `for` loop over a wildcard pattern to enumerate :: the matching filenames - assumed to be just *one* in this case, :: namely attrib.exe, and save it in a variable. for %%f in (C:\Windows\System32\attr*.exe) do set "Exe=%%f" :: Execute the resolved filename to show its command-line help. "%Exe%" /? * To pass matching filenames as multiple arguments to a single command: @echo off setlocal enableDelayedExpansion :: Use a `for` loop over a wildcard pattern to enumerate :: matching filenames and collect them in a single variable. set files= for %%f in (*.txt) do set files=!files! "%%f" :: Pass all matching filenames to `echo` in this example. 
echo %files% --- #### Background information: * **On *Unix*-like platforms, POSIX-compatible shells such as Bash _themselves_ perform *globbing* (resolving filename wildcard patterns to matching filenames), _before_ the target command sees the resulting filenames**, as part of a feature set called [_shell expansions_](https://www.gnu.org/software/bash/manual/html_node/Shell-Expansions.html) (link is to the Bash manual). * **On *Windows*, `cmd.exe`** (the legacy shell also known as Command Prompt) **does NOT perform such expansions and PowerShell _mostly_ does NOT**. * That is, **it is generally _up to each target command_ to interpret wildcard patterns as such** and resolve them to matching filenames. * That said, in PowerShell, many built-in commands, known as _cmdlets_, _do_ support PowerShell's [wildcard patterns](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_wildcards), notably via the `-Path` parameter of [provider](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_providers) cmdlets, such as [`Get-ChildItem`](https://learn.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/Get-ChildItem). * Additionally and more generally, cmdlet parameters that represent _names_ often support wildcards too; e.g., `Get-Process exp*` lists all processes whose image name start with `exp`, such as `explorer`. * Note that the absence of Unix-style shell expansions on Windows also implies that _no_ semantic distinction is made between *unquoted* and *quoted* arguments (e.g., `*.txt` vs. `"*.txt"`): a target command generally sees *both* as _verbatim_ `*.txt`. 
* In **PowerShell**, **automatic globbing DOES occur** in these **limited cases**: * Perhaps surprisingly, **an _executable_ file path *can* be invoked via a wildcard pattern**: * as-is, if the pattern isn't enclosed in `'...'` or `"..."` and/or contains no variable references or expressions; e.g.: C:\Windows\System3?\attri?.exe * via `&`, the [call operator](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Operators#call-operator-), otherwise; e.g.: & $env:SystemRoot\System32\attri?.exe * However, **this feature is of questionable utility** - When would you _not_ want to know up front what *specific* executable you're invoking? - and it is unclear whether it was implemented _by design_, given that inappropriate wildcard processing surfaces in other contexts too - see [GitHub issue #4726](https://github.com/PowerShell/PowerShell/issues/4726). * Additionally, up to at least PowerShell 7.2.4, if _two or more_ executables match the wildcard pattern, a misleading error occurs, suggesting that _no_ matching executable was found - see [GitHub issue #17468](https://github.com/PowerShell/PowerShell/issues/17468); a variation of the problem also affects passing a wildcard-based _path_ (as opposed to a mere _name_) that matches multiple executables to [`Get-Command`](https://learn.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Core/Get-Command). * In POSIX-compatible shells, the multi-match scenario is handled differently, but is equally useless: the _first_ matching executable is invoked, and all others are passed _as its arguments_. * **On *Unix*-like platforms only, PowerShell _emulates_ the globbing feature of POSIX-compatible shells _when calling external programs_**, in an effort to behave more like the platform-native shells; if PowerShell didn't do that, something as simple as `ls *.txt` would fail, given that the external `/bin/ls` utility would then receive _verbatim_ `*.txt` as its argument. 
* However, this **emulation has limitations**, as of PowerShell 7.2.4: * The inability to use wildcard patterns that contain _spaces_ - see [GitHub issue #10683](https://github.com/PowerShell/PowerShell/issues/10683), and, conversely, the inability to individually `` ` ``-escape wildcard metacharacters in order to suppress globbing - see [GitHub issue #18038](https://github.com/PowerShell/PowerShell/issues/18038) * The inability to include _hidden_ files - see [GitHub issue #4683](https://github.com/PowerShell/PowerShell/issues/4683).
In my work I use both Stata and R. Currently I want to perform a dose-response meta-analysis and use the "dosresmeta" package in R. The following syntax is used:

```
DRMeta <- dosresmeta(
  formula = logRR ~ dose,
  type = type,
  cases = n_cases,
  n = n,
  lb = RR_lo,
  ub = RR_hi,
  id = ID,
  data = DRMetaAnalysis
)
```

When executing this syntax, however, I encounter a problem. The error message appears: `Error in if (delta < tol) break : missing value where TRUE/FALSE needed.` The reason for this error message is that I am missing some values for the variables "n_cases" and "n", which the authors have not provided. Interestingly, Stata does not require this information for the calculation of the dose-response meta-analysis.

Is there a way to perform the analysis in R without requiring the values for "n_cases" and "n"? What can I do if I do not have these values? I have already asked the authors for the missing values, unfortunately without success. However, I need these studies for the dose-response meta-analysis, so it is not an option to exclude them.

Best regards
David
Is there a way to use the services provided by Supabase in Android Studio with Java, rather than with Kotlin?
Supabase with Android Studio Java
|java|supabase|
I wanted to understand the behaviour of Google Pub/Sub Lite when there is only one subscriber subscribed to a topic that has 2 partitions. Would the subscriber receive the data from both of the available partitions, or would it only receive the data from one partition while data piles up in the other? I read through the [documentation](https://cloud.google.com/pubsub/lite/docs/topics) but could not find a clear answer.
Google PubSub Lite one subscriber with multiple partitions
|python|google-cloud-pubsub|publish-subscribe|
null
[![enter image description here][1]][1]

I'm working on an iOS project and I want to add a circular text label along with a circular wheel using the Swift language. The wheel can also rotate, and the labels with curved text will rotate through 360 degrees. I'm looking for any third-party library or any supporting file so that I can simulate the same behaviour. Thanks in advance.

I want to integrate the above functionality with the Swift language. For more reference please check the attached image:

[1]: https://i.stack.imgur.com/NZFna.png
null
Are you calling `signInWithEmailAndPassword` inside of a server action? It seems that when you call that function inside a server action or a server component, it doesn't save the response in the browser's local storage, which makes `onAuthStateChanged` malfunction, since it isn't getting any data from the browser's local storage.
null
I am a beginner at Swift, just starting out. I am trying to write code that generates two random numbers and displays them on two labels on my screen. Afterwards I'd like to calculate the sum of those two randomly generated numbers, but it gives an error message when I calculate the sum.

```
@IBOutlet weak var number1: UILabel!
@IBOutlet weak var number2: UILabel!
@IBOutlet weak var number3: UILabel!

let randomInt1 = Int.random(in: 1..<10)
let randomInt2 = Int.random(in: 1..<10)

@IBAction func StartButton(_ sender: Any) {
    number1.text = String(randomInt1)
    number2.text = String(randomInt2)

    var answer1 = [Int]()
    answer1[1] = randomInt1 + randomInt2
```

Note: I tried to use this code but it gives an error message when I try to run it.
How do I calculate random integers than add those random integers together? (in SwiftUI)
|swift|swiftui|
null
I have a simple animation where I have a static rectangle slowly moving horizontally so that it pushes a circle across the 'floor'. How can I make the circle roll instead of slide? I tried adding friction to the circle and the floor, and removing friction from the pushing rectangle, but it makes no difference.

```
var container = document.getElementById('container');
var w = 1000;
var h = 500;

const Engine = Matter.Engine;
const Render = Matter.Render;
const World = Matter.World;
const Bodies = Matter.Bodies;
const MouseConstraint = Matter.MouseConstraint;
const Bounds = Matter.Bounds;
const Events = Matter.Events;
const Body = Matter.Body;

const engine = Engine.create();
const render = Render.create({
    element: container,
    engine: engine,
    options: {
        width: w,
        height: h,
        wireframes: true,
        showAngleIndicator: true,
        background: 'transparent',
    }
});

var boxX = 10;
var boxY = 390;

const ball = Bodies.circle(100, 400, 80, { friction: 100 });
const box = Bodies.rectangle(boxX, boxY, 20, 20, { friction: 0, isStatic: true });
const floor = Bodies.rectangle(500, 480, 1000, 20, { isStatic: true });

World.add(engine.world, [ball, box, floor]);
engine.gravity.y = 3;

Matter.Runner.run(engine);
Render.run(render);

move();

function move() {
    boxX += 1;
    Matter.Body.setPosition(box, { x: boxX, y: boxY }, [updateVelocity = true]);
    window.requestAnimationFrame(move);
};
```
I'm trying to split a video by chapters by reading chapter details from a text file, storing each value in a variable and passing those to ffmpeg. After splitting some chapters, the start time and end time get mixed up with the chapter title. Below is the script:
```
#!/bin/bash
file=$(echo "$1" | awk -F '.' '{print $(NF-1)}')
ext=$(echo "$1" | awk -F '.' '{print $(NF)}')
ffprobe -show_chapters "$1" | grep -i 'id=\|start_time\|end_time\|title' > chapters.txt
rm temp.txt
while read chapter_id; do
    read start_time
    read end_time
    read chapter_title
    id=$(echo "$chapter_id" | awk -F '=' '{print $(NF)}')
    st=$(echo "$start_time" | awk -F '=' '{print $(NF)}')
    en=$(echo "$end_time" | awk -F '=' '{print $(NF)}')
    title=$(echo "$chapter_title" | awk -F '=' '{print $(NF)}')
    echo "id=$id" >> temp.txt
    echo "start=$st" >> temp.txt
    echo "end=$en" >> temp.txt
    echo "title=$title" >> temp.txt
    echo "------" >> temp.txt
    ffmpeg -ss "$st" -to "$en" -y -i "$1" -c copy "$file-$id-$title.$ext"
done < chapters.txt
```
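Two classic causes fit this symptom. First, `ffmpeg` inside the loop reads from stdin, which here is `chapters.txt`, so it can swallow lines meant for the `read` calls (adding `-nostdin` to the ffmpeg command, or redirecting its input from `/dev/null`, prevents that). Second, the loop assumes grep emits exactly four matching lines per chapter; a chapter with no title tag shifts every later read by a line. A hedged sketch of position-independent grouping, in Python for illustration (the field names mirror typical `ffprobe -show_chapters` output):

```python
def parse_chapters(lines):
    """Group ffprobe key=value lines by chapter instead of assuming a
    fixed four lines per chapter: an "id" line starts a new chapter, and
    any field that happens to be missing simply stays absent."""
    chapters, current = [], {}
    for line in lines:
        key, _, value = line.partition("=")
        if key == "id" and current:
            chapters.append(current)
            current = {}
        current[key.removeprefix("TAG:")] = value
    if current:
        chapters.append(current)
    return chapters

# hypothetical ffprobe-like output: the second chapter has no title line
sample = [
    "id=0", "start_time=0.000000", "end_time=120.500000", "TAG:title=Intro",
    "id=1", "start_time=120.500000", "end_time=300.000000",
]
print(parse_chapters(sample))
```

With positional reads, the missing title in chapter 1 would have pulled chapter 2's `id` line into `chapter_title`; grouping by key makes the gap harmless.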
Bash Scripting: when trying to read 3 lines at a time, input line gets skipped in between
|linux|bash|shell|
null
I use a COUNTIF function to count and increment the number of instances of an ID in excel. | Id_Treatment_Count | ID | | ------------ | -------- | | 1 | 769334 | | 1 | 769345 | | 2 | 769345 | | 1 | 769376 | | 3 | 769345 | In Excel, I would get the above using `=COUNTIF($B$2:B2,B2)` and then I would drag down (Id_treatment_count being column A and ID being column B). How can I get the Id_treatment_Count in Power BI using DAX?
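The Excel formula is a running count: for each row, count how many times that row's ID has appeared from the top of the table down to the current row. A small Python sketch of the same logic; for the DAX side, the usual approach is to add an index column in Power Query and then build a calculated column counting rows with the same ID and an index less than or equal to the current one (treat the exact DAX as something to verify for your model):

```python
from collections import defaultdict

def running_count(ids):
    """Mirror Excel's =COUNTIF($B$2:B2, B2): the Nth occurrence of an
    ID gets the value N, counting down from the top of the column."""
    seen = defaultdict(int)
    out = []
    for i in ids:
        seen[i] += 1
        out.append(seen[i])
    return out

# the IDs from the table above, in row order
print(running_count([769334, 769345, 769345, 769376, 769345]))
```

This reproduces the Id_Treatment_Count column: 1, 1, 2, 1, 3.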
I want to change the class 'fa-meh-blank' to 'fa-smile-wink' but it doesn't work. When I flip the two classes, however, it works. I don't understand what's wrong; does anyone have an idea?

```js
const icone = document.querySelector("i");
console.log(icone);

// on click, swap the icon classes
icone.addEventListener('click', function() {
  console.log('icon clicked');
  icone.classList.toggle('happy');
  icone.classList.toggle('fa-smile-wink');
});
```

```html
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.1/css/all.min.css">
<div class="container">
  <div class="bloc-central">
    <h1>Projet Abonnez-vous</h1>
    <iframe width="560" height="315" src="https://www.youtube.com/embed/ggGT2N4l_08?si=hFk7Xzw16cEEzi6x" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
    <div class="bloc-btn">
      <i class="far fa-meh-blank"></i>
      <button class="btn">Abonnez-vous</button>
    </div>
  </div>
</div>
```
|c++|c++11|gcc|atomic|memory-barriers|
I have an extension where users can upload/edit/delete images in the frontend. Everything works fine except deleting images. I have the following code, but it does not seem to work:
```
$data = array();
$data['sys_file_reference'][123]['delete'] = 1;

/** @var DataHandler $dataHandler */
$dataHandler = GeneralUtility::makeInstance('TYPO3\\CMS\\Core\\DataHandling\\DataHandler');
$dataHandler->start($data, array());
$dataHandler->process_datamap();
```
What is the correct DataHandler datamap for deleting FAL images?
How do I define a function that accepts multiple types in one input parameter?
|c#|unity-game-engine|
Simply replacing "@" with ".." fixed this problem for me.
[enter image description here][1]

[1]: https://i.stack.imgur.com/7hUYd.png

OK, finally solved it: you need to add the template name and description in a comment at the top of your custom PHP page template; without them, the template will not show up.

~~~
<?php
/**
 * Template Name: My Custom Page<<<<
 * Description: A Page Template with a darker design.<<<<
 * The main template file
 * This is the most generic template file in a WordPress theme
 * and one of the two required files for a theme (the other being style.css).
 * It is used to display a page when nothing more specific matches a query.
 * E.g., it puts together the home page when no home.php file exists
 *
 * Methods for TimberHelper can be found in the /lib sub-directory
 *
 * @package WordPress
 * @subpackage Timber
 * @since Timber 0.1
 */

$context = Timber::context();
$context['posts'] = Timber::get_posts();
$context['foo'] = 'bar';
$templates = array( 'contact.twig' );
Timber::render( $templates, $context );
~~~
Use `actionOverflowThreshold` parameter of `SnackBar` or `SnackBarThemeData`. `actionOverflowThreshold` description from docs: > The percentage threshold for action widget's width before it overflows > to a new line. > > Must be between 0 and 1. If the width of the snackbar's [content] is > greater than this percentage of the width of the snackbar less the > width of its [action], then the [action] will appear below the > [content]. > > At a value of 0, the action will not overflow to a new line. > > Defaults to 0.25. Set it to 0.5 or some value higher than the default.
I received the same notification from Apple:

> ITMS-91053: Missing API declaration - Your app's code in the " " file references one or more APIs that require reasons, including the following API categories: NSPrivacyAccessedAPICategorySystemBootTime. While no action is required at this time...

> ITMS-91053: Missing API declaration - Your app's code in the " " file references one or more APIs that require reasons, including the following API categories: NSPrivacyAccessedAPICategoryUserDefaults. While no action is required at this time...

I added a PrivacyInfo.xcprivacy file as shown in the video https://developer.apple.com/videos/play/wwdc2023/10060 with the following content:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategorySystemBootTime</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>35F9.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```

**And don't forget to set Target Membership for this file to your main bundle.** Otherwise you will get the same result. After this I didn't get the warning from Apple.
|php|controller|frontend|datamapper|typo3-9.x|
In my case isort is not sorting the way I'd like it to be sorted. Here's an MWE: ``` import numpy as np import pandas as pd import seaborn as sea_bor_nnnn import matplotlib.pyplot as plt import torch import os ``` after saving the file I get the following output ``` import os # why is it inserting an empty line here? import numpy as np import torch import pandas as pd import seaborn as sea_bor_nnnn import matplotlib.pyplot as plt # here inserts only 1 blank line instead of 2 # actual code goes here dfdf dfdfd ddf ``` My preferred format would be the following ``` import os import torch import numpy as np import pandas as pd import seaborn as sea_bor_nnnn import matplotlib.pyplot as plt # actual code goes here ``` Notice how all imports are sorted by length without empty lines in between, and that the actual code starts two empty lines after the module imports. Any ideas how to achieve that in VS Code? Maybe by using `isort` in combination with `autopep8` or `flake8` instead of `black`?
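Assuming isort is the formatter being invoked on save, the blank line after `import os` comes from isort's stdlib/third-party section split, which `no_sections` removes, and `length_sort` switches the ordering from alphabetical to line length. A hypothetical `pyproject.toml` fragment using documented isort option names, worth verifying against your isort version:

```toml
[tool.isort]
no_sections = true        # one block: no blank line between stdlib and third-party
length_sort = true        # sort imports by line length
lines_after_imports = 2   # two blank lines before the first code statement
```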
I have some old code that was written to support work on a local network, but now may be run by remote workers. Individual file system accesses that were reasonably fast on the local network are intolerably slow for remote access because of all the additional overhead. For reading files, I have been able to improve speed greatly by reading the entire file into a string as a single access and replacing the StreamReader used to process it with StringReader instead. But searching the file system for possible file paths is a bit trickier. There is a particular directory (the "master") which contains many subdirectories. Some of these subdirectories (the "models") have a file (the "kompdef") which contains information pertinent to the model. There are many other directories in the master which do not have a kompdef file and thus are not models. I have a form with a drop-down that lists the models for the user's selection. Creating the list of models to populate the drop-down is done with the following code: ``` List<string> ModelNames = new List<string>(); foreach (DirectoryInfo Model in (new DirectoryInfo(MasterPath)).GetDirectories()) { if ((new FileInfo(Model.FullName + KompdefTail)).Exists) { ModelNames.Add(Model.Name); } } ``` However this requires many file system accesses, which means the form takes around 30 seconds to load up remotely, even though it takes less than 1 second on-site. Is there some way I can get a list of paths of the form: - *masterpath*\\*\\*kompdeftail* where *masterpath* and *kompdeftail* are known, with a minimal number of file system accesses?
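One way to cut the per-directory round-trips is a single wildcard enumeration instead of one existence check per subdirectory; in .NET the analogue would be something like `Directory.EnumerateFiles(MasterPath, "kompdef.dat", SearchOption.AllDirectories)` (the exact search pattern depends on what `KompdefTail` contains, so treat the call as a sketch to adapt). The idea in Python, with a throwaway directory tree standing in for the master:

```python
import glob
import os
import tempfile

def find_models(master_path, kompdef_name):
    """List master/*/kompdef_name in one pass and keep the name of each
    parent directory; subdirectories without the file never match."""
    pattern = os.path.join(master_path, "*", kompdef_name)
    return sorted(os.path.basename(os.path.dirname(p)) for p in glob.glob(pattern))

# throwaway tree: 'alpha' and 'gamma' are models, 'beta' is not
root = tempfile.mkdtemp()
for name, is_model in [("alpha", True), ("beta", False), ("gamma", True)]:
    os.makedirs(os.path.join(root, name))
    if is_model:
        open(os.path.join(root, name, "kompdef.dat"), "w").close()
print(find_models(root, "kompdef.dat"))
```

Whether this actually reduces remote round-trips depends on how the file system batches wildcard enumeration, so it's worth measuring against the per-directory version.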
I was wondering if there is some way to pass an argument to a file when running it. For example, I am running a PHP file in Linux called createcsv.php which uses two MySQL tables, an old one named products_feburary and a new one named products_march. It then generates CSV files with data that is in the March catalog and not in the February one. Right now, when the month changes I change the MySQL tables by editing the file: products_feburary -> products_march and products_march -> products_april. I am wondering if there is some way to pass the MySQL table names to the file directly, something like createcsv.php("products_feburary", "products_march"), which would let me pass the months as parameters without needing to edit the file. Note that the names are stored as strings, so I only need to pass them as text. I have tried researching this, but haven't found anything on the internet that indicates that this is possible. Is there a way to do this, or a package that allows it?
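Yes: when a PHP script is run from the command line (`php createcsv.php products_feburary products_march`), the arguments arrive in PHP's `$argv` array, with `$argv[1]` and `$argv[2]` holding the two table names. The same pattern sketched in Python for illustration, with hypothetical fallback defaults:

```python
def table_names(argv):
    """Read old/new table names from the command-line argument list,
    falling back to hypothetical defaults when none are supplied.
    argv[0] is the script name, as in PHP's $argv."""
    if len(argv) >= 3:
        return argv[1], argv[2]
    return "products_feburary", "products_march"

# as if invoked: python createcsv.py products_march products_april
old_table, new_table = table_names(["createcsv.py", "products_march", "products_april"])
print(old_table, new_table)
```

In a real script you would pass `sys.argv` (or PHP's `$argv`) straight into a function like this instead of a hand-built list.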
Passing an argument to a file in linux
|linux|bash|
I need to get the status of all services in Centreon using the REST API. However, I can't retrieve the values at all; it only returns a 404 error. How can I resolve this problem with Centreon? Here is my code:
```
import requests
import json

def authentification():
    # authentication data
    payload = '{"security": {"credentials": {"login": "XXXXXXXXXX", "password": "XXXXXXXX"}}}'
    headers = {'content-type': 'application/json'}

    # authentication request
    r = requests.post('http://192.168.17.141/centreon/api/beta/login', data=payload, headers=headers)

    # grab the token
    json = r.json()

    # initialise the header dict with the token
    headers = dict()
    headers['X-AUTH-TOKEN'] = json['security']['token']
    print(headers)
    return(headers)

def get_service_status(authToken):
    # request
    headers = {'centreon-auth-token': auth_token}
    response = requests.get('http://192.168.17.141/centreon/api/monitoring/services&search={""}', headers=authToken)

    if response.status_code == 200:
        print("OK1")
        services = response.json()
        for service in services:
            print(f"Service: {service['description']}, Status: {service['state']}")
    else:
        print(f"Error: {response.status_code}")

auth_token = authentification()
get_service_status(auth_token)
```
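One likely cause of the 404 is the URL itself: `...services&search={""}` has a `&` but no `?`, so the query text becomes part of the path, and the services call also lacks the `beta` version segment that the login call uses (whether it is required depends on the Centreon version, so treat that as an assumption to check). A sketch of building the URL properly; note also that the unused `headers = {'centreon-auth-token': ...}` line can go, since the dict actually sent already carries the right `X-AUTH-TOKEN` key:

```python
from urllib.parse import urlencode

def services_url(base, **params):
    """Build the monitoring-services URL with a proper '?query' part.
    The '/beta/' segment in `base` mirrors the login endpoint and is an
    assumption to verify against your Centreon installation."""
    url = base.rstrip("/") + "/monitoring/services"
    if params:
        url += "?" + urlencode(params)
    return url

print(services_url("http://192.168.17.141/centreon/api/beta", limit=100))
```

`requests.get(url, headers=auth_headers)` with a URL built this way sends a well-formed query string instead of a bogus path.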
Get service status of Centreon results in 404
I have two tables in my database ('customers', 'store_owners'); the relation between them is many-to-many. I want to build a chat between a store owner and his customers. The problem: when I made a chat table in my DB, I gave it id_sender, id_receiver and message, where the two ids are foreign keys to the tables mentioned before. But a customer can be both a sender and a receiver in the same discussion, and the same goes for the store owner, so what should I do? Any help is appreciated. NOTE: 'customers' and 'store_owners' must stay separate tables. I'm using React Native and Laravel. I tried to make two foreign keys for the same id, but of course that doesn't work.
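A common answer while keeping the two tables separate is to identify each side of a message by a pair of columns, a type plus an id, rather than a single foreign key; this is the pattern behind Laravel's polymorphic relations (`morphTo`, with `sender_type`/`sender_id` style columns). A hypothetical sketch of the row shape and a conversation lookup, in Python for illustration:

```python
def make_message(sender_type, sender_id, receiver_type, receiver_id, body):
    """One row of a hypothetical 'messages' table: each participant is a
    (type, id) pair, so a customer id and a store_owner id can never be
    confused even when the numeric ids collide."""
    assert sender_type in ("customer", "store_owner")
    assert receiver_type in ("customer", "store_owner")
    return {"sender_type": sender_type, "sender_id": sender_id,
            "receiver_type": receiver_type, "receiver_id": receiver_id,
            "body": body}

def conversation(messages, a, b):
    """All messages between participants a and b, in either direction."""
    def ends(m):
        return {(m["sender_type"], m["sender_id"]),
                (m["receiver_type"], m["receiver_id"])}
    return [m for m in messages if ends(m) == {a, b}]

msgs = [
    make_message("customer", 7, "store_owner", 3, "Is this in stock?"),
    make_message("store_owner", 3, "customer", 7, "Yes, two left."),
    make_message("customer", 9, "store_owner", 3, "Opening hours?"),
]
print(len(conversation(msgs, ("customer", 7), ("store_owner", 3))))
```

In the migration this becomes four columns (`sender_type`, `sender_id`, `receiver_type`, `receiver_id`) with no database-level foreign key constraint, since the constraint can't span two tables; the table and column names here are illustrative, not prescribed.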
creating a chat table in data base from 2 separated tables
|laravel|database|react-native|foreign-keys|chat|
null
I'm stuck in my CI/CD pipeline between GitHub Actions and AWS S3. I wrote a YAML script to deploy my website to S3, but when I push my website to the remote GitHub repository, it doesn't deploy to my S3 bucket; GitHub Actions reports "The workflow is not valid." I've watched multiple tutorial videos, including this one at timestamp 10:12 https://www.youtube.com/watch?v=cwX3XAx3CGg&t=612s, but I'm still stuck. Can anyone help me troubleshoot this issue? This is the YAML script that I wrote to deploy to S3:
```
on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: hohel/git-s3-push@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1' # optional: defaults to us-east-1
          SOURCE_DIR: 'website'   # optional: defaults to entire repository
```
and here is the error I get:

`The workflow is not valid. .github/workflows/deployToS3.yml (Line: 1, Col: 1): Unexpected value 'Name' .github/workflows/deployToS3.yml (Line: 10, Col: 5): Unexpected value 'runs on'`

[error message from Github Actions](https://i.stack.imgur.com/Qzvl1.png)

I also watched this video, but I skipped the workflow part and went straight to the deployment script:
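Both validator complaints are key-spelling issues rather than logic: the first error points at a top-level `Name` (workflow keys are lowercase, so `name:`; the paste starts at `on:`, but the error suggests the real file begins with `Name:`), and the second at `runs on`, which must be the hyphenated key `runs-on`. A minimal corrected skeleton for comparison, with the S3 step left as in the question:

```yaml
name: Deploy to S3
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest   # hyphenated key; "runs on" is rejected
    steps:
      - uses: actions/checkout@master
```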
https://www.youtube.com/watch?v=mFFXuXjVgkU&t=2s

[![Screenshot of Github Actions error message][1]][1]

[1]: https://i.stack.imgur.com/x3ZK0.png
I have found a solution. It seems that there is a bug in the xtext Maven setup which is solved by adding the `src` directory to `additionalClasspathElements`. By doing this you no longer need to define multiple `referencedResource` entries inside your `.mwe2` file.
```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  ...
  <configuration>
    <mainClass>org.eclipse.emf.mwe2.launch.runtime.Mwe2Launcher</mainClass>
    ...
    <additionalClasspathElements>
      <additionalClasspathElement>${project.basedir}/src</additionalClasspathElement>
    </additionalClasspathElements>
  </configuration>
```
Many thanks to the contributor of the solution here: https://github.com/eclipse/xtext/issues/2394#issuecomment-1511031257 I accidentally found it by first defining multiple `referencedResource` entries for each language in the `.mwe2` file, but this led to another PDA error, which in turn led me to the proper solution for both issues.
I would like to be able to pass a parameter through two functions, but defining which parameter is used with a variable. This would avoid having to specify every possibility of what the parameter could be. See the following example for what I'd like to achieve: ``` run_multiple <- function (parameter_name, parameter_values) { for (value in parameter_values) { run_function(parameter_name = value) } } run_function <- function (x = "a", y = "b", z = "c") { print(sprintf("X is %s, Y is %s, Z is %s", x, y, z)) } ``` Then implementation would ideally work using either of these formats: ``` run_multiple("x", c("a", "b")) run_multiple(x, c("a", "b")) # [1] X is a, Y is b, Z is c # [1] X is b, Y is b, Z is c ``` I don't think `...` notation in `run_multiple` would help as far as I can tell, as it would only allow me to pass through a single value, not multiple values in the loop. If it's useful, the wider context of this is to be used alongside microbenchmarking for a large function, to see how enabling different versions of sub-functions affects the overall performance.
Example of how this works: ``` run_benchmarking <- function (parameter_name, parameter_values, run_names) { # currently setup with only two values allowed microbenchmark::microbenchmark( run_names[1] = run_function(parameter_name = parameter_values[1]), run_names[2] = run_function(parameter_name = parameter_values[2]), times = 1000 ) } run_function <- function (use_x = T, use_y = T, use_z = T) { x <- func1(use_x) y <- func2(use_y) z <- func3(use_z) list(x = x, y = y, z = z) } # example of func n func1 <- function (use_x) { if (use_x) { x <- round(runif(1, 1, 100)) } else { x <- 1 } } # example of benchmarking tests, which would output results from microbenchmark run_benchmarking(use_x, c(T, F), c("With X", "Without X")) run_benchmarking(use_y, c(T, F), c("With Y", "Without Y")) run_benchmarking(use_z, c(T, F), c("With Z", "Without Z")) ``` This currently only accepts two values into a parameter which is something that could be expanded, but should be sufficient for A/B testing initially. I'm also open to other suggestions if there's a better way to test over a number of A/B tests like this.
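For what it's worth, the underlying trick — turning a parameter name held in a string into a real named argument — is language-agnostic; in R it is usually spelled `do.call(run_function, setNames(list(value), parameter_name))`, and the same idea in Python (shown here purely for illustration) uses `**` unpacking:

```python
def run_function(x="a", y="b", z="c"):
    return f"X is {x}, Y is {y}, Z is {z}"

def run_multiple(parameter_name, parameter_values):
    # `**{name: value}` turns the string held in parameter_name into a
    # real keyword argument, so no per-parameter branching is needed
    return [run_function(**{parameter_name: v}) for v in parameter_values]

print(run_multiple("x", ["a", "b"]))
```

The same unpacking would slot into the benchmarking wrapper: build one keyword dict per candidate value and hand each to the benchmarked call.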