`ModuleNotFoundError` means that the Python executable could not find the module, which implies that the module was not installed for that Python executable. If you are using a virtual environment or a conda environment, you have to:

- make sure you install the package in that same environment,
- check for the package in the `site-packages` of that particular environment (`conda list scikit-learn`), and
- make sure to have the environment activated both when you do the install and when you run Python.

When using conda and trying to debug this, it's also advisable to make sure that you don't have `PYTHONHOME` or `PYTHONPATH` set in your environment, since those could be confounding variables.
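A quick way to check all of the above at once is to ask the interpreter itself where it lives and what it can see. A minimal sketch (scikit-learn is just an example; substitute the package that fails for you):

```python
import sys

# The interpreter that is actually running; compare this path with the
# environment you installed the package into.
print(sys.executable)

# The module search path; the environment's site-packages should appear here.
for p in sys.path:
    print(p)

# A guarded import shows whether THIS interpreter can see the module.
try:
    import sklearn  # example package; substitute the one that fails for you
    print("found at:", sklearn.__file__)
except ModuleNotFoundError as exc:
    print("not installed for this interpreter:", exc)
```

Running this once inside and once outside the activated environment usually makes the mismatch obvious.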
I am attempting to create an incremental counter in an ID column based on several conditions being met in 2 other columns, and then "resetting" those conditions once met to ascertain the next increment of ID. I will provide a toy dataset. I have 3 columns: Location, Activity and ID. At the moment my ID column is empty, but I have populated it here with values to illustrate my condition logic. I want to initialize ID from 1 and then check whether D occurs. This is my first condition. I then need to check whether A occurs after D and, at that instance, A should ALSO be at Location 2. Once this along with the D condition is met, I want to increment ID by 1 in the following row. Then in the next row, I want to "reset" the conditions which have occurred and again check row by row whether D occurs; at the first instance of A at Location 2 occurring after D, I want to increment the next line by 1. This repeats to the very end of the dataset.

```
df <- data.frame(
  Location = c(2, 3, 3, 2, 1, 2, 2, 2, 1, 3, 3, 1, 2, 3, 2, 2, 1, 2, 3, 2, 1),
  Activity = c("A", "B", "C", "D", "D", "B", "A", "A", "B", "A", "C", "D", "A", "B", "B", "D", "A", "D", "D", "A", "C"),
  ID = c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4)
)

# Print the dataframe to view its structure
print(df)

   Location Activity ID
1         2        A  1
2         3        B  1
3         3        C  1
4         2        D  1
5         1        D  1
6         2        B  1
7         2        A  1
8         2        A  2
9         1        B  2
10        3        A  2
11        3        C  2
12        1        D  2
13        2        A  2
14        3        B  3
15        2        B  3
16        2        D  3
17        1        A  3
18        2        D  3
19        3        D  3
20        2        A  3
21        1        C  4
...
```

[![Toy data](https://i.stack.imgur.com/qhQY6.png)](https://i.stack.imgur.com/qhQY6.png)

I have tried many iterations of some sort of conditional logic, but it appears to fail. My best attempt follows, but it does not match my expectations for the ID column.
```
# Function to increment ID based on conditions
increment_id_based_on_conditions <- function(df) {
  df$ID[1] <- 1  # Initialize the first ID

  # Initialize control variables
  waiting_for_a <- FALSE
  last_id <- 1

  for (i in 1:nrow(df)) {
    if (waiting_for_a && df$Activity[i] == "A" && df$Location[i] == 2) {
      last_id <- last_id + 1   # Increment ID after conditions are met
      waiting_for_a <- FALSE   # Reset condition
    } else if (df$Activity[i] == "D") {
      waiting_for_a <- TRUE    # Set condition to start waiting for "A" at Location 2
    }
    df$ID[i] <- last_id        # Update ID column
  }

  df$ID <- c(df$ID[-1], NA)  # Shift ID down by one row and make last ID NA
  return(df)
}

# Apply the function to dataset
df_with_ids <- increment_id_based_on_conditions(df)

# View the updated dataset
print(df_with_ids)

   Location Activity ID
1         2        A  1
2         3        B  1
3         3        C  1
4         2        D  1
5         1        D  1
6         2        B  2
7         2        A  2
8         2        A  2
9         1        B  2
10        3        A  2
11        3        C  2
12        1        D  3
13        2        A  3
14        3        B  3
15        2        B  3
16        2        D  3
17        1        A  3
18        2        D  3
19        3        D  4
20        2        A  4
21        1        C NA
```
Python 3.11 can't open file: [Errno 2] No such file or directory
|python|docker|kubernetes|cron|yaml|
I would like to modify some HTML in a function called **pagination()** in this WordPress core file: **wp-admin/includes/class-wp-list-table.php**. I have been searching for a relevant hook which could be used to update the HTML in the above function, but I have not found one. Here is the HTML (in bold) I would like to change:

    protected function pagination( $which ) {
        if ( empty( $this->_pagination_args ) ) {
            return;
        }

        $total_items = $this->_pagination_args['total_items'];
        $total_pages = $this->_pagination_args['total_pages'];

        $infinite_scroll = false;
        if ( isset( $this->_pagination_args['infinite_scroll'] ) ) {
            $infinite_scroll = $this->_pagination_args['infinite_scroll'];
        }

        if ( 'top' === $which && $total_pages > 1 ) {
            $this->screen->render_screen_reader_content( 'heading_pagination' );
        }

        global $wp_query;
        $page  = ( get_query_var('paged') ) ? get_query_var('paged') : 1;
        $ppp   = get_query_var('posts_per_page');
        $end   = $ppp * $page;
        $start = $end - $ppp + 1;
        // $total = $wp_query->found_posts;
        // echo "Showing posts $start through $end of $total total.";

        **$output = '<span class="displaying-num">' . "Showing $start to $end of " . sprintf(
            /* translators: %s: Number of items. */
            _n( '%s item', '%s items', $total_items ),
            number_format_i18n( $total_items )
        ) . '</span>';**

        ..........................................
        ...........................................etc.

Can anyone please advise what hook should be used for the above purpose?
I have a basic Express app providing token-based auth, and I'm using Zod to create schemas to validate the incoming data. Say I have two schemas:

- createUserSchema {firstname, lastname, email, pass, passConfirm}
- loginUserSchema {email, pass}

Zod allows us to infer types based on our schemas like: `type SchemaBasedType = z.infer<typeof schema>`. Thus I have two types, **CreateUserRequest** and **LoginUserRequest**, based on my schemas. First of all, I create the validation middleware like this:

```
export const validateRequest =
  <T extends ZodTypeAny>(schema: T): RequestHandler =>
  async (req, res, next) => {
    try {
      const userRequestData: Record<string, unknown> = req.body;
      const validationResult = (await schema.spa(userRequestData)) as z.infer<T>;
      if (!validationResult.success) {
        throw new BadRequest(fromZodError(validationResult.error).toString());
      }
      req.payload = validationResult.data;
      next();
    } catch (error: unknown) {
      next(error);
    }
  };
```

As mentioned above, this middleware accepts a schema argument, typed according to the Zod docs. In my opinion, it's a good decision to extend the request object with a "**payload**" property, where I can put my valid data. Then the problems begin: TS doesn't know what the payload actually is. That's where declaration merging comes in. Firstly, it's tempting to try something like this:

```
declare global {
  namespace Express {
    export interface Request {
      payload?: any;
    }
  }
}
```

But it seems it's not a good idea, because we know exactly what our payload signature is.
Then I tried a union type based on the Zod types: `payload?: CreateUserRequest | LoginUserRequest;`. With this approach I caught a mistake: some of the fields of the narrower type don't exist in the other type. Then I tried to use a generic,

```
declare global {
  namespace Express {
    export interface Request<T> {
      payload?: T;
    }
  }
}
```

and it seems like a solution, but the **Request** interface already has 5 generic arguments:

```
interface Request<
  P = ParamsDictionary,
  ResBody = any,
  ReqBody = any,
  ReqQuery = ParsedQs,
  LocalsObj extends Record<string, any> = Record<string, any>
```

and I can't even imagine how it is supposed to merge: will my generic argument be the first, or the last? Anyway it seems like the wrong way, because after all I don't see the extended interface via the IDE hints. Somewhere on Stack I met this approach:

```
declare global {
  namespace Express {
    export interface Request<
      Payload = any,
      P = ParamsDictionary,
      ResBody = any,
      ReqBody = any,
      ReqQuery = ParsedQs,
      LocalsObj extends Record<string, any> = Record<string, any>
    > {
      payload?: Payload;
    }
  }
}
```

And it even gives me a hint on mouse hover, but I'm not sure that `any` is a good type, because we already have types inferred by Zod. If I don't specify `Payload = any`, I don't receive a hint for the type. I have no idea and am stuck with it, as I'm not an expert in TS and backend architecture. Finally, I'd like to get something like this:

`authRouter.post("/register", validateRequest(createUserSchema), AuthController.register);` where the compiler knows that the payload signature is equal to CreateUserRequest

`authRouter.post("/login", validateRequest(loginUserSchema), AuthController.login);` where the compiler knows that the payload signature is equal to LoginUserRequest

Where should I properly specify my expected types and how should I deal with them?
I think your default shell may not handle `&&` correctly. Try using:

```
args:
  executable: /bin/bash
```

in the shell module; it might help. But if I were you, I would use a multiline shell command, which solves the problem for sure:

```
---
- name: Test
  hosts: localhost
  tasks:
    - name: Test touch
      shell: |
        touch a
        touch b
```
|python|sql|mysql|
A sheet of graphical inputs, such as the charted output of a PDF academic paper, is best edited in a graphical application such as MS Word, so simply import the PDF as an input for two-column text and graphics editing.

[![enter image description here][1]][1]

For simpler text, use a command line approach.

[![enter image description here][2]][2]

For translations, Google's online translator can convert the PDF body text but not the mathematical equations, so you would need to mix and match with Office "Draw" graphics.

[![enter image description here][3]][3]

However, whatever application you use, expect to have to make extensive alterations to the maths: there were several adjustments needed in just one small area to reverse the change from LaTeX to PDF.

[![enter image description here][4]][4]

[1]: https://i.stack.imgur.com/FxAZB.png
[2]: https://i.stack.imgur.com/eZF9U.png
[3]: https://i.stack.imgur.com/wG25A.png
[4]: https://i.stack.imgur.com/UGBjY.png
You can send a bearer token with `fetch`:

```
fetch('your/url', {
  headers: { Authorization: 'Bearer ' + yourtoken }
})
  .then(data => {
    // some code
  })
```

But this will not redirect. In your `then` you are aiming to redirect, and you have two main options here. Either you have a session id or something stored in the cookies representing a valid session, in which case there should be no issues with the redirect, or you want to actually send the bearer token with a redirect, in which case you will need to send (and receive) this value as a POST parameter rather than a request header. I've been researching this for JWT SSO work and we finally decided to use a POST parameter to pass the JWT with the redirect rather than passing it as a request header.

EDIT

Example of the `form` idea:

```
<form action="/index" method="post" style="display: none;">
  <input type="hidden" name="token" value="YOUR TOKEN">
</form>
```

and you should submit this form when the login succeeds. Maybe you do a login with the first `fetch` call you have, receive the token, set the value of your token field in this second form and `submit()` the form object.
InnoDB - errno: 1005, sqlState: 'HY000' sqlMessage: "Can't create table (tableName) (errno: -1)",
I'm having trouble, please help me. My config:

```java
@Configuration
@EnableWebSecurity
@RequiredArgsConstructor
public class SecurityConfig {

    private final CustomUserDetailsService userDetailsService;

    @Order(0)
    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(registry -> registry
                .requestMatchers("/mypage").hasRole("USER")
                .requestMatchers("/messages").hasRole("MANAGER")
                .requestMatchers("/config").hasRole("ADMIN")
                .requestMatchers("/").permitAll()
                .anyRequest().authenticated());
        http.formLogin(Customizer.withDefaults());
        http.logout(config -> config.logoutSuccessUrl("/"));
        http.userDetailsService(userDetailsService);
        return http.build();
    }

    @Order(1)
    @Bean
    SecurityFilterChain resource(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(registry -> registry
                .requestMatchers(PathRequest.toStaticResources().atCommonLocations()).permitAll()
                .anyRequest().authenticated());
        return http.build();
    }
}
```

[This is an error page](https://i.stack.imgur.com/4aFJq.png) I found that the redirectUrl is a CSS path but I don't know why. [This is a debugging screen](https://i.stack.imgur.com/w9b8f.png) (AbstractAuthenticationFilter -> successHandler.onAuthenticationSuccess, and the successHandler instance is SavedRequestAwareAuthenticationSuccessHandler.) When combining the two settings into one chain, it works fine. I heard that using `permitAll()` for static resources is alright because the latest Spring Security doesn't access the `HttpSession`. Is there a way that works well when using permitAll() and separating the settings of static resources? After logging in, I want to return to the page I was previously on, not the CSS path.
Seems like the key here was setting `results='asis'` in the code chunk which performs the loop. Then, the loop and the call to `imap` render the tables.
The proposed function calculates the sine and cosine of the argument in degrees in parallel, with single precision. It remains operational for approximately |x| < 2147483584 (this is the limit of the single-precision format). The code is for Visual Studio C++, x86. The maximum error is about 2 ulps. On my system, using the function gives approximately a twofold increase in performance compared to the standard sinf and cosf, with almost the same accuracy.

    // s=sin(x), c=cos(x), x in degrees
    _declspec(naked) void _vectorcall SinCosD(float x, float &s, float &c)
    {
      static const float ct[8] =  // Constants table
      {
        -1.0f/180.0f, -0.0f, 1.74532924E-2f, 90.0f,
        1.34955597e-11f, 3.91486045e-22f, -8.86095734e-7f, -9.77247653e-17f
      };
      _asm
      {
        mov eax,offset ct
        vmovups xmm1,[eax]
        vmovddup xmm4,[eax+16]
        vmulss xmm1,xmm1,xmm0
        vmovddup xmm5,[eax+24]
        vcvtss2si eax,xmm1
        vshufps xmm2,xmm1,xmm1,93
        imul eax,180
        jno sc_cont
        sub eax,eax
        vxorps xmm0,xmm0,xmm0
      sc_cont:
        vcvtsi2ss xmm1,xmm1,eax
        vaddss xmm1,xmm1,xmm0
        shl eax,29
        vmovd xmm0,eax
        vorps xmm2,xmm2,xmm1
        vmovlhps xmm0,xmm0,xmm0
        vhsubps xmm2,xmm2,xmm1
        vxorps xmm0,xmm0,xmm2
        vmovsldup xmm2,xmm2
        vmulps xmm2,xmm2,xmm2
        vmovhlps xmm1,xmm1,xmm1
        vfmadd231ps xmm5,xmm4,xmm2
        vmulps xmm3,xmm2,xmm2
        vmovshdup xmm4,xmm5
        vfmadd231ps xmm5,xmm4,xmm3
        vfmadd231ps xmm1,xmm5,xmm2
        vmulps xmm0,xmm0,xmm1
        vmovss [edx],xmm0
        vextractps [ecx],xmm0,2
        ret
      }
    }
So I currently have a service for an ecommerce project which uses Redis and Jedis to connect. It works when run from source code, but when it is dockerized it throws the following error: `redis.clients.jedis.exceptions.JedisConnectionException: Failed to connect to any host resolved for DNS name.` When I ran the Redis docker container together with my service's source code, it worked with no problem.

Docker Compose:

```
version: '3'
services:
  order-api:
    build:
      context: ./orderapi
      dockerfile: Dockerfile
    ports:
      - "8002:8002"
    depends_on:
      - redis
    environment:
      - MYSQL_HOST=host.docker.internal
      - MYSQL_PORT=3306
      - MYSQL_DATABASE=ecommerce_fyp_order
      - MYSQL_USER=root
      - MYSQL_PASSWORD=12345
      - REDIS_HOST=redis #options: host.docker.internal, redis, localhost
      - REDIS_PORT=6379
    networks:
      - net
  redis:
    image: redis:latest #IPaddress is 172.18.0.2
    ports:
      - "6379:6379"
    command: ["redis-server", "--bind", "redis", "--port", "6379"]
    networks:
      - net
networks:
  net:
    driver: bridge
```

I am coding this in Spring Boot and the following is the application.properties of my service:

```
server.port=8002
spring.application.name=orderapi

spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/ecommerce_fyp_order
spring.datasource.username=root
spring.datasource.password=12345
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql: true

#Redis
spring.session.redis.namespace=session
spring.data.redis.host=localhost
spring.data.redis.port=6379
```

Redis configuration:

```
@Configuration
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
```

I have tried changing REDIS_HOST to redis, host.docker.internal, and localhost, but none of them work.
Would appreciate any help to resolve this.

Update: I tried to connect to the Redis container using the docker command `docker exec -it e-commercenew-redis-1 redis-cli` and I get the following error: `Could not connect to Redis at 127.0.0.1:6379: Connection refused`.

It works when I run the Redis container and run the service from the Spring Boot source code, but it doesn't work when everything runs via docker compose, so I'm not too sure why the connection is being refused. Any suggestions on how to solve this on Docker would be appreciated, thank you.

More information on the Docker environment JSON:

```
"Env": [
    "REDIS_PORT=6379",
    "MYSQL_HOST=host.docker.internal",
    "MYSQL_PORT=3306",
    "MYSQL_DATABASE=ecommerce_fyp_order",
    "MYSQL_USER=root",
    "MYSQL_PASSWORD=12345",
    "REDIS_HOST=redis",
    "PATH=/usr/java/openjdk-17/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "JAVA_HOME=/usr/java/openjdk-17",
    "LANG=C.UTF-8",
    "JAVA_VERSION=17.0.2",
    "MAVEN_HOME=/usr/share/maven",
    "MAVEN_CONFIG=/root/.m2",
    "SPRING_DATA_REDIS_HOST=localhost",
    "SPRING_DATA_REDIS_PORT=6379"
]
```
Incremental counter based on several conditions previously being met
|conditional-statements|counter|incremental|
The way to make screen printing do what you want is to take complete control of it. That's what we're doing in this example: `put_char_con()` intercepts and interrogates every char to be written to the console, and then it prints it per our design. The example is a bit overkill, since it includes how to do tab stops of 4 (instead of the standard 8). But I left it in because I thought, since you're a programmer, maybe you also prefer a tab stop of 4, and this code provides that control as well (bonus! enjoy!). But please note that we definitely provide the newline behavior you requested:

- Current cursor X is unchanged
- Current cursor Y is advanced to next line

If any `case` of the `switch()` statement isn't desired, you can comment it out as you like. You'll find that the API `WriteFile()` does a very competent job of printing tab stops, backspaces... everything. So, you needn't intercept anything via `cases` except where you don't like its behavior... such as newline (`\n`).

This code *is* portable to other platforms. Just provide suitable substitutes for the MS Windows APIs it uses. They are:

- `GetConsoleScreenBufferInfo()` -- used for finding current cursor position and screen size.
- `WriteFile()` -- print one or more characters to the screen. Same behavior as `printf()`.
- `SetConsoleCursorPosition()` -- like it sounds.

This code is compiled and tested by me today:

    #include <windows.h>
    #include <stdio.h>

    #define TAB_STOP 4

    HANDLE hConOut;

    /*-----------------------------------------------------------------------------
    ** put_char_con()
    **
    ** Writes a character to the console.
    ** Each char value passed in parameter is examined -- then we print it how we
    ** want it printed.
    *------------------------------------------------------------------------------*/
    int put_char_con(int ch)
    {
        DWORD numBytesWritten = 0;
        int num_pad, cursor_offs;
        CONSOLE_SCREEN_BUFFER_INFO csbi;
        BOOL bSuccess;

        bSuccess = GetConsoleScreenBufferInfo(hConOut, &csbi);
        if(!bSuccess)
        {
            puts("GetConsoleScreenBufferInfoWr() failed in put_char_con()");
            return FALSE;
        }

        switch(ch)
        {
            case '\n':
                csbi.dwCursorPosition.Y++;
                /* For stdio functions, a newline also implies a carriage return. If that's what
                 * you want, enable this next line, else comment it out. And cursor X coord stays put. */
                //csbi.dwCursorPosition.X = 0;
                numBytesWritten++;
                break;

            /* For stdio functions, TAB white-space-pads everything from
             * the cursor to the tab stop. You can also comment out this case and
             * let WriteFile() handle it -- which behaves just as printf() does */
            case '\t':
                cursor_offs = csbi.dwCursorPosition.X;
                if(cursor_offs < 0)
                    cursor_offs = 0;
                num_pad = TAB_STOP - ((cursor_offs) % TAB_STOP);
                while(num_pad-- && (csbi.dwCursorPosition.X < csbi.srWindow.Right))
                {
                    char ch = ' ';
                    WriteFile(hConOut, &ch, 1, &numBytesWritten, NULL);
                    csbi.dwCursorPosition.X++;
                }
                GetConsoleScreenBufferInfo(hConOut, &csbi);
                break;

            default:  /* Any other char */
                WriteFile(hConOut, &ch, 1, &numBytesWritten, NULL);
                GetConsoleScreenBufferInfo(hConOut, &csbi);
                numBytesWritten++;
                break;
        }

        SetConsoleCursorPosition(hConOut, csbi.dwCursorPosition);

        if(numBytesWritten)
            return ch;
        return EOF;  //Usually defined as (-1);
    }

    /*--------------------------------------------------------------------
    ** cputs_con()
    **
    ** Writes a string to the console.
    ** Uses: put_char_con()
    *--------------------------------------------------------------------*/
    void cputs_con(const char *s)
    {
        /* Code updated per comment from Ted Lyngmo -- thx Ted! */
        while(*s)
            put_char_con(*s++);
    }

    int main()
    {
        if(!hConOut)
        {
            hConOut = GetStdHandle(STD_OUTPUT_HANDLE);
            if(hConOut == INVALID_HANDLE_VALUE)
            {
                printf("STDOUT not available");
                return 0;
            }
        }

        puts("0 4 8 12 24 32 40 48 56 64 72 80");
        puts("| | | | | | | | | | | | | | | | | | | | |");

        cputs_con("This is a \ttab test. \tEach word \tfollowing a tab should be \ttab aligned...\n\r");
        cputs_con("This\nis the newline test...If you want\r\ncarriage returns you must manually\nadd them to your strings.\n\r");

        system("pause");
        return 0;
    }

Output:

```none
0 4 8 12 24 32 40 48 56 64 72 80
| | | | | | | | | | | | | | | | | | | | |
This is a   tab test.   Each word   following a tab should be   tab aligned...
This
is the newline test...If you want
carriage returns you must manually
add them to your strings.
```
I'm currently developing a Flutter desktop application intended to serve as a third-party app for my school's intranet, allowing students to access its services. My objective is to enable my Flutter app to get all the necessary cookies when a user is redirected to the intranet website's login page, completes a Cloudflare challenge, and enters their credentials. Specifically, I need to capture all accepted cookies, including the HTTP-only ones, and pass them to the Flutter app for subsequent requests. ChatGPT suggested the use of a forward proxy. However, given my limited expertise in this area, I am looking for confirmation of this solution and would be grateful for any recommendations or guidance on the general implementation.
|cookies|proxy|reverse-proxy|forward-proxy|
While using @Alex's code, I encountered an issue when the culture info was set to "es" (Spanish). The decimal was printed in the format "12,34", and the attribute wasn't working properly. I was able to fix this and make some other changes so that it works as closely as possible to other components (such as using braces for parameters when using the ErrorMessage property).

```cs
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field | AttributeTargets.Parameter, AllowMultiple = false)]
public class PrecisionAndScaleAttribute : ValidationAttribute
{
    private readonly int _precision;
    private readonly int _scale;

    public PrecisionAndScaleAttribute(int precision, int scale)
        : base(() => "The field {0} only allows decimals with precision {1} and scale {2}.")
    {
        _precision = precision;
        _scale = scale;
    }

    public override bool IsValid(object? value)
    {
        if (value is not decimal decimalValue)
            return false;

        string? precisionValue = decimalValue.ToString(CultureInfo.InvariantCulture);
        return precisionValue is null ||
               Regex.IsMatch(precisionValue, $@"^(0|-?\d{{0,{_precision - _scale}}}(\.\d{{0,{_scale}}})?)$");
    }

    /// <summary>
    /// Override of <see cref="ValidationAttribute.FormatErrorMessage"/>
    /// </summary>
    /// <param name="name">The user-visible name to include in the formatted message.</param>
    public override string FormatErrorMessage(string name) =>
        string.Format(CultureInfo.CurrentCulture, ErrorMessageString, name, _precision, _scale);
}
```
One way to do almost what you want is to define the following function:

```js
const makeItem = <T>(item: ComponentItem<T>) => item;
```

It does nothing more than return its argument unchanged, but because it is a generic function you get nice inference. Then, when defining your items, simply wrap each item with that function:

```js
<Component
  items={[
    makeItem({
      initialValue: items_1, // initialValue type is Item1. It's OK
      render: (item) => {
        // item type is Item1
        return (
          <>
            <h3>{item.title}</h3>
            <p>{item.description}</p>
          </>
        );
      },
    }),
    makeItem({
      initialValue: items_2, // initialValue type is Item2. It's OK
      render: (item) => {
        // item type is Item2
        return <p>{item.name}</p>;
      },
    }),
  ]}
/>
```

You should now get the correct type for `item`, based on the type of `initialValue`; the drawback is that you have to use this `makeItem` function.
Spring Security 6: after login (at "/"), CSS appears on the screen
|spring|spring-security|staticresource|
Thank you all for checking on this. I've found the answer and would like to share it with you for your benefit. We have to make sure to give the correct name for the firewall rule to be applied to the compute instance. Make sure to use the target tags; they enable the HTTP and HTTPS traffic firewall rules. [enter image description here][1]

    resource "google_compute_firewall" "default-allow-http" {
      name    = "default-allow-http"
      network = "default"

      allow {
        protocol = "tcp"
        ports    = ["80"]
      }

      source_ranges = ["0.0.0.0/0"]
      target_tags   = ["http-server"]
    }

    resource "google_compute_firewall" "default-allow-https" {
      name    = "default-allow-https"
      network = "default"

      allow {
        protocol = "tcp"
        ports    = ["443"]
      }

      source_ranges = ["0.0.0.0/0"]
      target_tags   = ["https-server"]
    }

[1]: https://i.stack.imgur.com/u3EUq.png
Since you have explicitly provided a decreases clause, Dafny will use that decreases clause. Your assumption that it will compare `x + y` with the tuple `x, y` is wrong. It would have chosen the tuple `x, y` if you hadn't provided a decreases clause. Now take the case when it is called with `x = 3` and `y = 2`. Here `x + y` is 5, and when you recurse in the last else-if branch it will be `x = 2` and `y = 3`, but `x + y` is still 5. It is not decreasing, hence Dafny complains.
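The failure can be reproduced outside Dafny with a few lines of Python (Python purely for illustration; the exact recursive step isn't shown in the question, so `step` below just mimics the x = 3, y = 2 to x = 2, y = 3 transition described above):

```python
# Stand-in for the recursive call in the last else-if branch:
# it reproduces the (3, 2) -> (2, 3) transition from the example.
def step(x, y):
    return x - 1, y + 1

x, y = 3, 2
nx, ny = step(x, y)

# The declared measure x + y does not decrease across the call...
print(x + y, nx + ny)     # 5 5

# ...but the lexicographic tuple (x, y) does: (2, 3) < (3, 2)
print((nx, ny) < (x, y))  # True
```

This is why dropping the explicit `decreases x + y` clause, so that Dafny falls back to the lexicographic tuple `x, y`, can make the verification go through.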
I have a table which represents a sequence of points, and I need to get the sum over all possible combinations. The main problem is how to do it with a minimum of operations, because the real table is huge.

|Col1|col2|col3|col4|col5|col6|ct|
|:---|:---|:---|:---|:---|:---|:-|
|Id1 |id2 |id3 |id4 |id5 |id6 |30|
|Id8 |id3 |id5 |id2 |id4 |id6 |45|

The expected result is

|p1 |p2 |ct|
|---|---|--|
|Id3|id5|75|
|Id3|id4|75|
|Id3|id6|75|
|Id5|id6|75|
|Id2|id4|75|
|Id2|id6|75|
|Id4|id6|75|

I would be grateful for any help.
The RecordIO format is designed to pack a large number of images into a single file, so I don't think it would work well for predicting single images. When it comes to prediction, you definitely don't have to copy images to a notebook instance or to S3. You just have to load them from anywhere and inline them in your prediction requests. **If you want HTTP-based prediction, here are your options:** 1) Use the SageMaker SDK Predictor.predict() API on any machine (as long as it has proper AWS credentials) https://github.com/aws/sagemaker-python-sdk 2) Use the AWS Python SDK (aka boto3) API invoke_endpoint() on any machine (as long as it has proper AWS credentials) You can even build a simple service to perform pre-processing or post-processing with Lambda. Here's an example: https://medium.com/@julsimon/using-chalice-to-serve-sagemaker-predictions-a2015c02b033 **If you want batch prediction:** the simplest way is to retrieve the trained model from SageMaker, write a few lines of ad-hoc MXNet code to load it and run all your predictions. Here's an example: https://mxnet.incubator.apache.org/tutorials/python/predict_image.html
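As a hedged sketch of option 2 (the endpoint name, file name, and content type below are placeholders, not values from this answer), inlining image bytes in an `invoke_endpoint` request could look like this:

```python
import json

def build_request(image_bytes, content_type="application/x-image"):
    # Hypothetical helper: package raw image bytes for invoke_endpoint.
    # No copy to S3 or to a notebook instance is needed; the bytes are
    # inlined directly in the request body.
    return {"Body": image_bytes, "ContentType": content_type}

# Usage (requires AWS credentials and a deployed SageMaker endpoint):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# with open("cat.jpg", "rb") as f:
#     req = build_request(f.read())
# resp = runtime.invoke_endpoint(EndpointName="my-endpoint", **req)
# print(json.loads(resp["Body"].read()))
```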
A list of data frames:

    my_list <- list(structure(list(id = c("xxxyz", "xxxyz", "zzuio", "iiopz"),
                                   country = c("USA", "USA", "Canada", "Switzerland")),
                              class = "data.frame", row.names = c(NA, -4L)),
                    structure(list(id = c("xxxyz", "ppuip", "zzuio"),
                                   country = c("USA", "Canada", "Canada")),
                              class = "data.frame", row.names = c(NA, -3L)))

    my_list
    [[1]]
         id     country
    1 xxxyz         USA
    2 xxxyz         USA
    3 zzuio      Canada
    4 iiopz Switzerland

    [[2]]
         id country
    1 xxxyz     USA
    2 ppuip  Canada
    3 zzuio  Canada

I want to remove duplicated rows both within and between the data frames stored in that list. [This][1] works to remove duplicates within each data frame:

    [[1]]
         id     country
    1 xxxyz         USA
    3 zzuio      Canada
    4 iiopz Switzerland

    [[2]]
         id country
    1 xxxyz     USA
    2 ppuip  Canada
    3 zzuio  Canada

But there are still duplicates between data frames. I want to remove them all, with the following desired output:

    [[1]]
         id     country
      iiopz Switzerland

    [[2]]
         id country
      xxxyz     USA
      zzuio  Canada
      ppuip  Canada

Notes:

1. I want to eliminate duplicates on `id` (other variables can be duplicated)
2. I need a solution where it is not necessary to merge the data frames before checking for duplicates
3. If possible, I wish to retain the last observation. For example, in the desired output above, "zzuio Canada" existed in both data frames, but was kept in the last df only, that is, df 2.
4. I have more than 100 dfs, with variable names that don't necessarily match between dfs. That said, the id is always called "id"
5. I need to reassign the result to the same object (in the case above, `my_list`)

[1]: https://stackoverflow.com/questions/42163966/remove-duplicate-rows-for-multiple-dataframes
Use functional components with hooks for managing state in your React projects. They're easier to read and write, making your code simpler and more maintainable. Hooks also allow you to reuse code more easily and are better for performance. Plus, they're the modern way of working with React, so you'll be learning and using the most up-to-date practices.
[See screenshot demonstrating the use of Excel's Solver][1]

[enter image description here](https://i.stack.imgur.com/eK3C7.png)

I have a task to automate a certain Excel worksheet. The worksheet implements some logic with an Excel plugin called Solver. It uses a single value (-1.95624) in cell $O$9 (which is the result of the computations highlighted with red and blue ink in the diagram) as an input value and then returns three values for C, B1 and B2 using an algorithm called "GRG Nonlinear" regression. My task is to emulate this logic in Python. Below is my attempt. The major problem is that I am not getting the same values for C, B1 and B2 as computed by Excel's Solver plugin.

```
import numpy, scipy, matplotlib
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([-2.59772914040242,-2.28665528866907,-2.29176070881848,-2.31163972446061,-2.28369414349715,-2.27911303233721,-2.28222332344644,-2.39089535619106,-2.32144325648778,-2.17235002006179,-2.22906032068685,-2.42044014499938,-2.71639505549322,-2.65462061336346,-2.47330475191616,-2.33132910807216,-2.33025978869114,-2.61175064230516,-2.92916553244925,-2.987503044973,-3.00367414706232,-1.45507812104723]) # Use the same table name as the parameter
yData = numpy.array([0.0692847120775066,0.0922342111029099,0.0918076382491768,0.0901635409944003,0.0924824386284127,0.092867647175396,0.092605957740688,20.0838696111204451,0.0893625419994501,0.102261091024881,0.097171046758256,70.0816272542472914,0.0620128251290935,0.0657047909578125,0.0777509345715382,0.088561321341585,0.088647672874835,90.0683859871424735,0.0507304952495273,0.0479936476914665,0.0472601632188253,0.18922126828463]) # Use the same table name as the parameter

def func(x, a, b, Offset): # Sigmoid A With Offset from zunzun.com
    return 1.0 / (1.0 + numpy.exp(-a * (x-b))) + Offset

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)

    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for a
    parameterBounds.append([minX, maxX]) # search bounds for b
    parameterBounds.append([0.0, maxY]) # search bounds for Offset

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

# curve fit the test data
params, covariance = curve_fit(func, xData, yData, geneticParameters, maxfev=50000)

# Convert parameters to Python built-in types
params = [float(param) for param in params] # Convert numpy float64 to Python float
C, B1, B2 = params

# input_value_1 / input_value_2 are supplied by the calling environment (not defined here)
OutputDataSet = pd.DataFrame({"C": [C], "B1": [B1], "B2": [B2], "ProType": [input_value_1], "RegType": [input_value_2]})
```

Any ideas will be helpful. Thanks in advance.

Given these datasets for xData and yData, the correct output should be: C = -2.35443383, B1 = -14.70820051, B2 = 0.0056217

[1]: https://i.stack.imgur.com/PMnkq.png
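For what it's worth, Excel's "GRG Nonlinear" is a local, gradient-based optimizer, while `differential_evolution` is a global stochastic search, so the two can land on different minima. A closer emulation is often `scipy.optimize.minimize` with the SLSQP method, started from the values currently sitting in the worksheet cells. This is only a sketch on synthetic data — the variable names mirror the question, but the dataset and starting point here are made up:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data generated from a known sigmoid; the real question would use
# its own xData/yData arrays instead.
a_true, b_true, off_true = 2.0, -2.5, 0.05
xData = np.linspace(-4.0, -1.0, 40)
yData = 1.0 / (1.0 + np.exp(-a_true * (xData - b_true))) + off_true

def func(x, a, b, offset):
    return 1.0 / (1.0 + np.exp(-a * (x - b))) + offset

def sse(p):
    # sum of squared errors, the same objective Solver minimizes
    return float(np.sum((yData - func(xData, *p)) ** 2))

# GRG Nonlinear is a local method, so the starting point matters; Excel's
# Solver starts from whatever is in the cells when it runs.
x0 = [1.0, float(np.median(xData)), 0.0]
res = minimize(sse, x0, method="SLSQP")
a_fit, b_fit, off_fit = res.x
print(res.x, res.fun)
```

If the Excel sheet constrains C, B1 or B2, those constraints would need to be passed to `minimize` via `bounds`/`constraints` to reproduce Solver's behavior.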
null
|python|linux|virtual-machine|
I'm currently developing an Android application and I'm trying to introduce MSAL into my current Keycloak / Azure authentication flow. However, my backend is set up to validate Keycloak tokens and I need to maintain this setup. Here's the flow I'm trying to achieve:

- Authenticate users with Keycloak + Azure AD using MSAL.
- After successful Azure AD authentication, redirect to Keycloak.
- Keycloak redirects back to the app.

I'm looking for guidance on how to implement this flow. Specifically, I'm not sure how to configure Keycloak as an IdP from MSAL / Authenticator. If this is not possible, are there any workarounds or alternative approaches?

[Sequence diagram of my current flow](https://i.stack.imgur.com/eHdvE.png)

[Sequence diagram of wanted flow](https://i.stack.imgur.com/Wh3WB.png)

Any help would be greatly appreciated. I have tried to configure MSAL to use Keycloak but couldn't find any way to do that. I have also looked into B2C, which should support other authentication mechanisms, but it seemed to be impossible in MSAL for Android (it was present in MSAL.js). The authentication flow with Keycloak and Azure configured as an identity provider works perfectly; I just need to add MSAL into the picture to achieve SSO. Since Keycloak redirects to Azure at login, we should be able to reuse an Azure session that has already been established by another app or browser on the Android phone.
Keycloak configured with Azure as IDP and MSAL on Android
|android|azure|keycloak|openid-connect|msal|
null
I set up Mailtrain to send emails using AWS SES. However, the emails bounce with the following reason: "The security token included in the request is invalid."

![Image 3](https://i.stack.imgur.com/QRJWn.png)

Additionally, emails sometimes get sent successfully, but the majority of the time I get the error.
You need to either generate ScalaPB classes for the Google common APIs or use pre-generated ones. See https://scalapb.github.io/docs/common-protos/ and https://scalapb.github.io/docs/third-party-protos#there-is-a-library-on-maven-with-the-protos-and-possibly-generated-java-code
I am trying to get a virtual machine to send a string/list to a second virtual machine and then have the second virtual machine return a string to the first. To do this, I have decided to go with a Python socket server and client script. The virtual machines are both hosted via Oracle. Here is the code.

Server:

```
import socket

SERVER_IP = "localhost"
SERVER_PORT = 8000

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind((SERVER_IP, SERVER_PORT))
server_socket.listen(1)
print("Server listening on", SERVER_IP, "port", SERVER_PORT)

client_socket, client_address = server_socket.accept()
print("Connection from:", client_address)

data = client_socket.recv(1024)
print("Received data:", data.decode())

response = "Hello from the server!"
client_socket.send(response.encode())

client_socket.close()
server_socket.close()
```

Client:

```
import socket

SERVER_IP = "(Server Public IP Address)"
SERVER_PORT = 8000

client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect((SERVER_IP, SERVER_PORT))

data = "Hello from the client!"
client_socket.send(data.encode())

response = client_socket.recv(1024)
print("Response from server:", response.decode())

client_socket.close()
```

I have opened up port 8000 (TCP) following this tutorial: https://www.youtube.com/watch?v=6Vgzvh2jqRQ

I have tried replacing "localhost" with the actual public IP address, which gives:

    OSError: [Errno 99] Cannot assign requested address

I have made both machines ping each other successfully using ping (public IP). Additionally, I tried to allow all protocols, but then when I run the code:

    OSError: [Errno 113] No route to host

When running the code, there is no indication of a connection (with the firewall rule specifying src: 8000, dst: 8000, IP protocol: tcp). I have no idea what to do.
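For what it's worth, `[Errno 99] Cannot assign requested address` typically means the server tried to `bind()` to an address that no local interface owns — on cloud VMs the public IP is usually NAT-mapped and not assigned to the instance itself. A common pattern is to bind the server to `0.0.0.0` (all interfaces) and have the *client* connect to the server's IP. A minimal sketch of that pattern (both ends run locally here, and the port number is an arbitrary choice):

```python
import socket
import threading
import time

PORT = 50007  # arbitrary free port for the demo

def serve_once(results):
    # Bind to 0.0.0.0: listen on every local interface. Binding to the VM's
    # *public* IP fails with "[Errno 99] Cannot assign requested address" when
    # the instance only owns a private address (typical for cloud VMs behind NAT).
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    results["received"] = conn.recv(1024)
    conn.send(b"Hello from the server!")
    conn.close()
    srv.close()

results = {}
t = threading.Thread(target=serve_once, args=(results,))
t.start()

# The client connects to the server's address (its public IP across machines);
# here both ends run on one host, so we use the loopback address.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
for _ in range(50):  # retry until the listener is up
    try:
        cli.connect(("127.0.0.1", PORT))
        break
    except ConnectionRefusedError:
        time.sleep(0.1)
cli.send(b"Hello from the client!")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply.decode())
```

`[Errno 113] No route to host` during `connect()` is usually a firewall question (both the cloud security list and the in-VM firewall such as firewalld/iptables have to allow the port), not a code question.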
I ask my first question on Stack because I despair :( Sorry if my English isn't good!

I'm developing a simple quiz app with AdonisJS to learn the framework and application deployment. My problem occurs during deployment. In development I had no problem; everything works with "pnpm run dev"!

I cloned the project onto my VPS with git into /var/www/myproject, ran 'pnpm run build' and configured nginx "correctly". I run the command 'node bin/server.js' (I created a service to run the command when the VPS starts.)

The nginx configuration:

```
server {
    #root /var/www/projectName;
    server_name mydomain.fr;

    location / {
        proxy_pass http://127.0.0.1:3333;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/qfelbinger.fr/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/qfelbinger.fr/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
```

Now when I go to my domain in the browser, I get an error that I think comes from Vite (I have no experience with this tool):

`ENOENT: no such file or directory, open '/public/assets/.vite/manifest.json'`

[enter image description here](https://i.stack.imgur.com/eFHwm.png)

But when I go into the folder, I DO see the file, with the same permissions as every other file in the project.

My Vite configuration:

```
import { defineConfig } from 'vite';
import adonisjs from '@adonisjs/vite/client';

export default defineConfig({
  plugins: [
    adonisjs({
      entrypoints: ['resources/css/app.css', 'resources/js/app.js'],
      reload: ['resources/views/**/*.edge'],
    }),
  ],
});
```

I use Edge templates for the frontend. If anyone has an idea, I'm here :) And I can give any information you need!

I tried reinstalling the whole project with no changes. I executed "pnpm i" to install the dev dependencies, but I know that's useless, so nothing changed... And I haven't found anything in the Vite documentation these last few days...
Adonis.js in production : ENOENT: no such file or directory, open '/public/assets/.vite/manifest.json'
|javascript|nginx|adonis.js|
null
As soon as changes are detected, you could disable scrolling and use a `DragGesture` to detect a swipe. Then you can prompt for confirmation. Something like this:

```swift
struct ContentView: View {

    struct FormValues: Equatable {
        var aFlag = false
        var text = ""
    }

    @State private var isSheetShowing = false
    @State private var isAlertshowing = false
    @State private var formValues = FormValues()
    @State private var hasChanges = false
    @State private var dragOffset = CGFloat.zero

    var body: some View {
        Button("Show sheet") {
            isSheetShowing = true
        }
        .buttonStyle(.bordered)
        .sheet(isPresented: $isSheetShowing) {
            Form {
                Picker("On or off", selection: $formValues.aFlag) {
                    Text("Off").tag(false)
                    Text("On").tag(true)
                }
                .pickerStyle(.segmented)
                TextField(text: $formValues.text) {
                    Text("Text")
                }
            }
            .offset(y: dragOffset)
            .animation(.easeInOut, value: dragOffset)
            .interactiveDismissDisabled(hasChanges)
            .scrollDisabled(hasChanges)
            .onChange(of: formValues) { oldVal, newVal in
                hasChanges = true
            }
            .gesture(
                DragGesture()
                    .onChanged { val in
                        if hasChanges {
                            dragOffset = val.translation.height
                        }
                    }
                    .onEnded { val in
                        dragOffset = 0
                        if hasChanges {
                            isAlertshowing = true
                        }
                    }
            )
            .confirmationDialog("Are you sure?", isPresented: $isAlertshowing) {
                Button("Yes") {
                    isSheetShowing = false
                    // Save changes
                }
                Button("No", role: .cancel) {
                    // Do nothing
                }
            } message: {
                Text("Are you sure?")
            }
        }
    }
}
```

This works, but there are two issues I couldn't resolve:

- after an alert choice is selected, the alert disappears but then re-appears, before disappearing a second time
- the drag gesture may interfere with some of the form content, for example, with a `Toggle` switch. Also, if the user needs to be able to scroll to reach the end of the form then of course this may be impacted.

You may find that the first issue (with the re-appearing alert) happens with other solutions too.
It seems [enumerate][1] will be the shortest way to do this; there is also [concat][2], but it is not yet in the standard.

```
#include <ranges>
#include <vector>
#include <iostream>
#include <range/v3/view/concat.hpp>

int main() {
    std::vector<int> values = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

    // concat is only in the range-v3 library
    const auto f = values | std::views::take(5);
    const auto s = values | std::views::drop(6);
    for (auto v : ranges::views::concat(f, s)) {
        std::cout << v << std::endl;
    }

    // enumerate is supported in C++23
    for (auto [idx, v] : values | std::views::enumerate) {
        if ( idx != 5 )
            std::cout << v << std::endl;
    }
}
```

https://godbolt.org/z/b9qcsGs87

[1]: https://en.cppreference.com/w/cpp/ranges/enumerate_view
[2]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p2542r0.html
I want to know if anyone disagrees with this answer, and if better information is available anywhere. This answer was composed based on research on the subject without relevant expertise. For that reason, I have less than full confidence that it is fully correct.

**Short answer:** The value of the "Video compression" property for a file is a GUID that specifies the video coding format used in the file. The format specifies what compression techniques may be used by a codec implementing that format. In this round-about way, this property specifies what techniques may be used to compress the video data in the file.

**Long answer:** The meaning of the Windows shell file property "Video compression" (property index 311 in Windows 11) is cryptic. The values of the property are GUIDs ([globally unique identifiers][1]) that represent "media types," which are associated with [video coding formats][2] and [video codecs][3]. The format or codec used in a video file indicates what compression techniques are used in the file.

There are two references online that indicate the meanings of some GUIDs that appear as values of this property. These references were given in comments on the question.

- *Media Type Identifiers for the Windows Media Format SDK*
  - https://learn.microsoft.com/en-us/windows/win32/wmformat/media-type-identifiers
  - Link given in comment by "user23404432".
  - Provides a list of 52 GUIDs and associates each with a "global media type identifier".
  - There are values of "Video compression" that do not appear in the list.
  - This page is on an official Microsoft website, but it states that the information is provided for a legacy feature that has been superseded. There is nothing about where to find more up-to-date associations of GUIDs and media types.
  - This is "Ref. 1" in the table below.
- *Media Foundation and DirectShow Media Types*
  - https://gix.github.io/media-types/
  - Link given in comment by Paul.
  - Provides a list of hundreds of GUIDs and associates each with a video, audio, subtitle, or pixel format.
  - There are values of "Video compression" that do not appear in the list.
  - The [root URL of the page][4] gives a 404 error, so there is no information about who posted this page. The page contains no reference to any source of its data, nor any as-of date of the data. Thus there is no way to assess the validity of the data.
  - This is "Ref. 2" in the table below.

Among the 71 files discussed in the question, the above references indicate the following associations:

|#|Video compression value|Found in|Global media type identifier (Ref. 1)|Video format (Ref. 2)|
|--|--|--|--|--|
|1|{3147504D-0000-0010-8000-00AA00389B71}|mpg||MPEG-1 Part 2, Video ([ISO/IEC 11172-2][5])|
|2|{31564D57-0000-0010-8000-00AA00389B71}|wmv|WMMEDIASUBTYPE_WMV1|Windows Media Video|
|3|{3253344D-0000-0010-8000-00AA00389B71}|avi, mp4|WMMEDIASUBTYPE_M4S2|MPEG-4 Part 2, MPEG-4 Visual ([ISO/IEC 14496-2][6])|
|4|{32564D57-0000-0010-8000-00AA00389B71}|wmv|WMMEDIASUBTYPE_WMV2|Windows Media Video|
|5|{33564D57-0000-0010-8000-00AA00389B71}|wmv|WMMEDIASUBTYPE_WMV3|Windows Media Video|
|6|{34363248-0000-0010-8000-00AA00389B71}|mp4||ITU-T H.264/MPEG-4 Part 10, AVC ([ISO/IEC 14496-10][7])|
|7|{53565133-767A-494D-B478-F29D25DC9037}|mov|||
|8|{5634504D-0000-0010-8000-00AA00389B71}|avi||MPEG-4 Part 2, MPEG-4 Visual ([ISO/IEC 14496-2][6])|
|9|{E06D8026-DB46-11CF-B4D1-00805F6CBBEA}|mpg|WMMEDIASUBTYPE_MPEG2_VIDEO|ITU-T H.262/MPEG-2 Part 2, Video ([ISO/IEC 13818-2][8])|

In this table, the links on the names of the ISO specifications are not provided in the reference. The ISO webpages were found by Googling those spec names. The four ISO pages linked to provide brief descriptions of the specs and their histories and allow for the purchase of the spec documents, which have, respectively, 112, 706, 867, and 225 pages, for $245 each.
In addition, one of the specs is available for free download: the one for "H.264/MPEG-4 Part 10, AVC", used in most (but not all) mp4 files. The download is an 880-page pdf. Neither any of the four ISO webpages nor the downloaded document contains the word "codec," but they all talk about coding of moving pictures. Section 0.6.1 of the document says, "The coded representation specified in the syntax [mentioned in Sec. 0.5] is **designed to enable a high compression capability** for a desired image quality. ... A number of techniques may be used to achieve highly efficient compression. Encoding algorithms (not specified in this document) may ..." (Emphasis added.)

From this I infer that the specs referred to define the syntax to be used by encoding algorithms intended to adhere to those specs, and that those algorithms are what are called "codecs." The defined syntax specifies exactly what compression techniques may be used by the codecs. If I've understood all of this correctly, this is the round-about way in which those cryptic GUIDs specify what types of compression may be used in the files, which apparently does not necessarily mean that all allowed techniques are actually used in the file.

I hope people who know this stuff better than me will say in comments whether this answer is correct, or will post a better answer or edit this one to fix any errors. One big problem with this answer is that I haven't found any other reference stating the meanings of those GUIDs other than the two stated above. There has to be a complete and up-to-date list somewhere. I hope someone can provide a link to that.

[1]: https://en.wikipedia.org/wiki/Globally_unique_identifier
[2]: https://en.wikipedia.org/wiki/Video_coding_format
[3]: https://en.wikipedia.org/wiki/Video_codec
[4]: https://gix.github.io/
[5]: https://www.iso.org/standard/22411.html
[6]: https://www.iso.org/standard/39259.html
[7]: https://www.iso.org/standard/83529.html
[8]: https://www.iso.org/standard/61152.html
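One detail that helps decode most of these values without a lookup table: the GUIDs in the table follow the DirectShow `FOURCCMap` convention — the first 32-bit field is the format's FourCC code stored as a little-endian integer, and the remainder is the fixed suffix `0000-0010-8000-00AA00389B71`. (The MPEG-2 GUID in row 9 does not follow this pattern.) A short Python sketch of the decoding:

```python
import uuid

# A media subtype following the FOURCC convention has the form
# {XXXXXXXX-0000-0010-8000-00AA00389B71}, where XXXXXXXX is the FourCC
# packed as a little-endian 32-bit integer.
FOURCC_SUFFIX = "0000-0010-8000-00aa00389b71"

def fourcc_of(guid_str):
    g = uuid.UUID(guid_str.strip("{}"))
    if str(g)[9:] != FOURCC_SUFFIX:
        return None  # not a FourCC-mapped subtype (e.g. the MPEG-2 GUID)
    # time_low holds the FourCC; reading it as little-endian bytes gives ASCII
    return g.time_low.to_bytes(4, "little").decode("ascii")

print(fourcc_of("{34363248-0000-0010-8000-00AA00389B71}"))  # H264
print(fourcc_of("{33564D57-0000-0010-8000-00AA00389B71}"))  # WMV3
print(fourcc_of("{3253344D-0000-0010-8000-00AA00389B71}"))  # M4S2
```

This matches the table above: rows 1–6 and 8 decode to MPG1, WMV1, M4S2, WMV2, WMV3, H264, and MP4V respectively.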
I am trying my hand at edge AI, specifically at using YOLO models on a Jetson Orin Nano (8GB). I am doing some preliminary research on how many video streams I can handle simultaneously (doing object detection) on the Orin Nano board. The claimed performance is 40 TOPS (for calculations with INT8 precision).

The first idea was to look at the computational complexity required to run YOLOv8 models; from the official documentation I got this:

- YOLOv8n = 10.5 GFLOPs | 3.5 params (M)
- YOLOv8s = 29.7 GFLOPs | 11.4 params (M)
- YOLOv8m = 80.6 GFLOPs | 26.2 params (M)
- YOLOv8l = 167.4 GFLOPs | 44.1 params (M)
- YOLOv8x = 260.6 GFLOPs | 68.7 params (M)

The fact is, if I understand correctly, these models use FP32 precision, and therefore I cannot make a direct calculation against the Orin Nano's performance, since that is expressed in INT8.

I then looked to see if there were any versions of YOLO with INT8 weights and found YOLO-NAS, which seems to have comparable performance to the standard YOLO models but with fewer resource demands. The problem is that the official documentation does not give data on the TOPS (or GOPS) required by the models, only the number of parameters:

- YOLO-NAS S = 19.0 params (M)
- YOLO-NAS M = 51.1 params (M)
- YOLO-NAS L = 66.9 params (M)

I then tried the following code to obtain the number of TOPS needed, but it comes out with a value that I do not think is congruous: "[...] Total mult-adds (T): 1.04"

```
from torchinfo import summary

summary(model=yolo_nas_l,
        input_size=(16, 3, 640, 640),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])
```

Indeed, making a rough estimate based on the number of parameters, the YOLO-NAS L model would be similar to YOLOv8x, but that has only 260.6 GFLOPs, not 1.04 T. Is there an error in my calculation of the measurements?

Also, since it is INT8, should we no longer talk about FLOPS (floating point operations) but OPS... is this correct?

Would anyone be able to give me some tips on where (or how) to find the complexity of the YOLO-NAS models, expressed in OPS, so that it is usable for estimating the resources available on the Orin Nano (which, I repeat, is rated in TOPS)? Thanks a lot!
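One possible explanation for the 1.04 T figure: torchinfo reports mult-adds for the whole input batch, and the `input_size` above uses a batch of 16, so per image this is roughly 1.04 T / 16 ≈ 65 GMACs (≈ 130 GFLOPs counting 1 MAC as 2 ops), which is in the same range as the YOLOv8 numbers. With that, a rough stream estimate can be scripted; the utilization factor below is an assumption, not a measured Jetson value:

```python
# Back-of-the-envelope estimate of simultaneous streams.
# All constants here are assumptions, not measured Jetson numbers.

total_mult_adds = 1.04e12   # torchinfo output, for a batch of 16 images
batch = 16
macs_per_image = total_mult_adds / batch      # ~65 GMACs per image
ops_per_image = 2 * macs_per_image            # 1 MAC = 2 OPs -> ~130 GOPs

peak_tops = 40e12            # Orin Nano claimed INT8 peak
utilization = 0.3            # assumed achievable fraction of the peak
fps_per_stream = 30

ops_per_stream = ops_per_image * fps_per_stream
streams = (peak_tops * utilization) / ops_per_stream
print(f"~{macs_per_image / 1e9:.0f} GMACs/image, "
      f"~{streams:.1f} streams at {fps_per_stream} fps")
```

Marketing TOPS are rarely reachable in practice (memory bandwidth, pre/post-processing, video decode all eat into it), so a benchmark with TensorRT INT8 engines is the only reliable way to pin the utilization number down.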
How to find the number of TOPS needed for YOLO-NAS model
You can use an analytic function and conditional aggregation to count the number of invalid `div`s and exclude those (note the inline view needs an alias so the outer query can reference `s`):

```lang-sql
SELECT s.nbr,
       s.model
FROM   process p
       INNER JOIN (
         SELECT *
         FROM   (
           SELECT s.*,
                  COUNT(
                    CASE
                      WHEN s.div IN ('AR91', 'AR10', 'AG55', 'AZ56', 'CZ12')
                      THEN 1
                    END
                  ) OVER (PARTITION BY s.nbr) AS num_div
           FROM   service s
         )
         WHERE  num_div = 0
       ) s
       ON p.nbr = s.nbr
WHERE  p.unit = 'MC'
AND    p.car = 'M'
AND    p.bank = '1'
AND    p.paid IN ('NY', 'NJ')
AND    s.paid = 'NY'
AND    TO_DATE(s.ymd DEFAULT NULL ON CONVERSION ERROR, 'YYYYMMDD')
         BETWEEN DATE '2024-03-03' AND DATE '2024-03-11'
```
From the versioning [docs][1]:

> Baselines define a global version floor for what versions will be considered.

It's really nothing more than a git commit hash in a git registry (the builtin registry, i.e. the vcpkg git repository itself, is a git registry). Vcpkg uses it to extract version information for a port from `baseline.json`. For example, if we wanted to install `qtbase`:

- with baseline `4bee3f5aae7aefbc129ca81c33d6a062b02fcf3b`, vcpkg would look at [baseline.json@4bee3f5aae7aefbc129ca81c33d6a062b02fcf3b][2] and find `qtbase@6.6.1#10`.
- with baseline `11afcc7e8b7118f3a0ae97140ce8037b7f53ad64`, vcpkg would look at [baseline.json@11afcc7e8b7118f3a0ae97140ce8037b7f53ad64][3] and find `qtbase@6.3.2#3`.

[1]: https://learn.microsoft.com/en-us/vcpkg/users/versioning#baselines
[2]: https://github.com/microsoft/vcpkg/tree/4bee3f5aae7aefbc129ca81c33d6a062b02fcf3b/versions/baseline.json
[3]: https://github.com/microsoft/vcpkg/blob/11afcc7e8b7118f3a0ae97140ce8037b7f53ad64/versions/baseline.json
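Concretely, pinning a project to a baseline is done with the `builtin-baseline` field in the manifest. A minimal sketch of a `vcpkg.json`, using the first commit hash from above:

```json
{
  "dependencies": [ "qtbase" ],
  "builtin-baseline": "4bee3f5aae7aefbc129ca81c33d6a062b02fcf3b"
}
```

With this manifest, `vcpkg install` would resolve `qtbase` according to the `baseline.json` at that commit (i.e. `6.6.1#10` per the first bullet), unless an `overrides` or `version>=` constraint raises it.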
I have a form button "But1" on one page; it has some function, but I need to activate this button not manually but automatically when the page is opened (and, if possible, activate it repeatedly, 10 times with a 5 s delay, or in a loop). I tried something like this:

```
function activateButton() {
    var button = this.getField("But1")
    button.buttonSetState(1)
}
setTimeout(activateButton, 5000)
```

but this does not work.
Activate a form button automatically when its page opens
|forms|button|scripting|
null
I think for completeness, one should mention the most compact way to do it:

```python
df.groupby(['Name', df.Date.dt.round('7d')])
```

This has fewer controls, but works across years (assuming `Date` is a `datetime` column; if not, replace `df.Date.dt.round('7d')` with `pd.to_datetime(df.Date).dt.round('7d')`).
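A quick self-contained check of what this grouping produces (synthetic data, not the frame from the question):

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["a", "a", "a", "b"],
    "Date": pd.to_datetime(["2023-12-29", "2023-12-31", "2024-01-02", "2024-01-02"]),
    "Val": [1, 2, 3, 4],
})

# Each date is rounded to the nearest 7-day multiple (counted from the epoch),
# so dates near the year boundary can still land in the same bucket.
g = df.groupby(["Name", df.Date.dt.round("7d")]).Val.sum()
print(g)
```

Note the buckets are anchored to the Unix epoch rather than to a weekday or a year start; if week-of-year buckets are wanted instead, `pd.Grouper(key='Date', freq='W')` is the usual alternative.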
Yes, it can be done without using the `class` syntax. The trick is to use [`Reflect.construct`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Reflect/construct) to substitute the call to `super`. Here's a small example snippet, containing a factory (`createCustomElementFactory`) to create the 'class' you need for registering a custom element using `customElements.define`. To play with a (way more evolved) module for this, see/fork this [Stackblitz project](https://stackblitz.com/edit/js-kasjrz?file=Src%2FWebComponentFactory.js).

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    const myCustomElementCreator = createCustomElementFactory();

    const connectedCallback = function() {
      const isCustom = this.getAttribute(`is`);
      if (!isCustom) {
        this.style.display = `block`;
        this.style.padding = `2px 5px`;
        this.style.border = `1px solid #999`;
      }
      this.insertAdjacentHTML(`afterbegin`,
        `<div>I am <i>&lt;${this.tagName.toLowerCase()}${
          isCustom ? ` is="${isCustom}"` : ``}></i></div>`);
      console.log(`We just connected the ${
        isCustom ? `custom` : `autonomous`} element '${
        this.tagName.toLowerCase()}${isCustom ? `[is="${isCustom}"]` : ``}'`);
    };

    const myBrandNewElemClass = myCustomElementCreator({ connectedCallback });
    const myBrandNewParagraphClass = myCustomElementCreator({
      connectedCallback,
      forElem: HTMLParagraphElement });

    // register 2 elements
    customElements.define("my-brand-new-element", myBrandNewElemClass);
    customElements.define("my-brand-new-paragraph", myBrandNewParagraphClass, { extends: `p` });

    document.body.insertAdjacentHTML(`beforeend`,
      `<my-brand-new-element>Hello world!</my-brand-new-element>`);
    document.body.insertAdjacentHTML(`beforeend`,
      `<p is="my-brand-new-paragraph">Hello paragraph!</p>`);

    function createCustomElementFactory() {
      const paramsPlaceholder = ({
        get all() {
          return {
            connectedCallback, disconnectedCallback, adoptedCallback,
            attributeChangedCallback, observedAttributes, forElem } = {}
        }
      });

      function CustomElementConstructorFactory(params = paramsPlaceholder.all) {
        const elemProto = params.forElem?.prototype instanceof HTMLElement
          ? params.forElem : HTMLElement;

        function CustomElementConstructor() {
          if (elemProto !== HTMLElement) {
            self = Reflect.construct(params.forElem, [], CustomElementConstructor);
            return self;
          }
          return Reflect.construct(HTMLElement, [], CustomElementConstructor);
        }

        CustomElementConstructor.prototype = elemProto.prototype;
        CustomElementConstructor.observedAttributes = params.observedAttributes;

        Object.entries(params).forEach(([name, cb]) => {
          if (cb && cb instanceof Function) {
            CustomElementConstructor.prototype[name] = cb;
          }
        });

        return CustomElementConstructor;
      }

      return (params = paramsPlaceholder.all) => CustomElementConstructorFactory(params);
    }

<!-- end snippet -->
|yolo|nvidia-jetson|yolov8|yolonas|
null
Thanks to the other remarks, I got it to work inside a function as: `Split-Path -Path $script:MyInvocation.MyCommand.Path -Parent`
**Code**

```python
date = pd.to_datetime(df2.columns.droplevel(0).to_frame().assign(Day=1))
dr = pd.date_range(date.min(), date.max(), freq='MS')
tuples = [(i,) + t for i in values_to_use for t in zip(dr.year, dr.month)]
idx = pd.MultiIndex.from_tuples(tuples)
out = df2.reindex(idx, axis=1, fill_value=0)
```

out:

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/tyPQH.png
Assume the following HTTP response with the **Date** header (it could be any other header) **starting with a space**:

```
HTTP/1.1 200 OK
Server: Microsoft-IIS/10.0
 Date: Sun, 29 Feb 2010 15:14:06 GMT
Content-Length: 0
```

Querying a server that returns such a response, using the following Chilkat code snippet,

```
// other stuff
CkHttpResponse* resp;
// other stuff
std::cout << resp->header();
```

yields this result:

```
Server: Microsoft-IIS/10.0 Date: Sun, 29 Feb 2010 15:14:06 GMT
Content-Length: 0
```

The header() parser is appending the **Date** header to the value of the previous header — in this case, the **Server** header. And if the **Date** header happens to be the first in the list of headers, the header() parser trims the leading space.

Note: I've tested this with the latest version, v9.5.0.97.
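One observation worth making here: a header line that starts with a space is the `obs-fold` (line folding) syntax from RFC 7230 / RFC 822, and parsers that follow those rules deliberately treat it as a continuation of the previous header's value. Python's standard library parser behaves the same way on this header block, so this may be folding semantics rather than a Chilkat-specific bug:

```python
from email.parser import Parser

raw = (
    "Server: Microsoft-IIS/10.0\n"
    " Date: Sun, 29 Feb 2010 15:14:06 GMT\n"  # leading space = folded line
    "Content-Length: 0\n"
    "\n"
)
msg = Parser().parsestr(raw)

# The folded line is absorbed into the Server header's value;
# no separate Date header is seen.
print(repr(msg["Server"]))
print(msg["Date"])  # None
```

RFC 7230 deprecates obs-fold outside of `message/http` bodies, so a server emitting such a response is itself non-conformant; but a client parser folding it is a defensible reading of the grammar.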
Chilkat CkHttpResponse header() Method is Buggy
|c++|chilkat|
I have two functions that produce two types of output: one is a dataframe table and the other is a plot of that dataframe. Both functions take one file as input, which we load from the previous tkinter function. I would like to dynamically select a function from a radio box; once we select a particular function, it should show a blank box that asks for input from the user, and based on that input the function will be executed.

```
from tkinter import ttk
import tkinter as tk

root = tk.Tk()
root.geometry("800x600")
root.config(bg='light blue')
root.title('Dashboard')

frame = tk.Frame(root, bg='light blue')
frame.pack(padx=10, pady=10)

file_label = tk.Label(frame, text='Input_File')
file_label.grid(row=0, column=0)

def function_1(Input_File, Var1, Var2):
    # Table and Graph will be based on the above input.
    pass

def function_2(Input_File, Var3, Var4, Var5, Var6):
    # Table and Graph will be based on the above input.
    pass

root.mainloop()
```

Once we select function_1 from the radio box, we should immediately get two boxes next to the radio box asking for "Var1" and "Var2". If we select function_2, we should get four boxes next to the radio box asking for "Var3", "Var4", "Var5", and "Var6". Once all the input is received, the respective function should be executed, and below we should get two outputs: first the dataframe table produced by the function, then the plot, again produced by the function. Please note that "InputFile" in both functions is the same as the InputFile from file_label.
Tkinter Output of Function as Table and Graph
|python|tkinter|
My mobile environment is Android 14. I am implementing an Android application that recognizes its own audio source (PCM) as text. I referenced RecognizerIntent and found that the EXTRA_AUDIO_SOURCE key (API 33) can be used in my case. The guide doc explains to include a ParcelFileDescriptor of the audio source, but it does not operate properly (the device does not recognize my own resource; it just recognizes my voice through the open mic).

```
private var audioPfd : ParcelFileDescriptor? = null

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    val testFilePath = copyFiletoStorage(R.raw.test, "test.pcm")
    val testFile = File(testFilePath)
    testFile.setReadable(true)
    audioPfd = ParcelFileDescriptor.open(testFile, ParcelFileDescriptor.MODE_READ_ONLY)
    println("audioPfd size:${audioPfd?.statSize}")

    startSpeechToText(audioPfd)
}

private fun copyFiletoStorage(resourceId: Int, resourceName: String): String? {
    val filePath = filesDir.path + "/" + resourceName
    try {
        println("openRawResource")
        val `in` = resources.openRawResource(resourceId)
        var out: FileOutputStream? = null
        out = FileOutputStream(filePath)
        val buff = ByteArray(1024)
        var read = 0
        try {
            while (`in`.read(buff).also { read = it } > 0) {
                out.write(buff, 0, read)
            }
        } finally {
            `in`.close()
            out.close()
        }
    } catch (e: FileNotFoundException) {
        e.printStackTrace()
    } catch (e: IOException) {
        e.printStackTrace()
    }
    return filePath
}

private fun startSpeechToText(pcmFile: ParcelFileDescriptor?) {
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault().toLanguageTag())
    intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, packageName)
    intent.putExtra(RecognizerIntent.EXTRA_AUDIO_SOURCE_CHANNEL_COUNT, 1)
    intent.putExtra(RecognizerIntent.EXTRA_AUDIO_SOURCE_ENCODING, AudioFormat.ENCODING_PCM_16BIT)
    intent.putExtra(RecognizerIntent.EXTRA_AUDIO_SOURCE_SAMPLING_RATE, 16000)
    intent.putExtra(RecognizerIntent.EXTRA_AUDIO_SOURCE, pcmFile)

    speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this,
        ComponentName("com.google.android.tts",
            "com.google.android.apps.speech.tts.googletts.service.GoogleTTSRecognitionService"))
    speechRecognizer?.setRecognitionListener(object : RecognitionListener {
        override fun onReadyForSpeech(p0: Bundle?) { println("onReadyForSpeech") }
        override fun onBeginningOfSpeech() { println("onBeginningOfSpeech") }
        override fun onRmsChanged(p0: Float) { println("onRmsChanged sound: $p0") }
        override fun onBufferReceived(p0: ByteArray?) { println("onBufferReceived") }
        override fun onEndOfSpeech() { println("onEndOfSpeech") }
        override fun onError(p0: Int) {
            println("onError:$p0")
            stopRecognizer()
        }
        override fun onResults(p0: Bundle?) {
            println("onResults: $p0")
            val matches = p0?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            if (matches != null && matches.isNotEmpty()) {
                val text = matches[0]
                Toast.makeText(applicationContext, "Recognized Text: $text", Toast.LENGTH_LONG).show()
                textResult?.setText(text)
            }
        }
        override fun onPartialResults(p0: Bundle?) {
            val matches = p0?.getStringArrayList(SpeechRecognizer.RECOGNITION_PARTS)
            println("onPartialResults:${matches?.get(0)}")
        }
        override fun onEvent(p0: Int, p1: Bundle?) { println("onEvent") }
    })
    speechRecognizer?.startListening(intent)
}
```

I cannot find any examples using EXTRA_AUDIO_SOURCE.
Does Android support my requirement? Can I get any guidance on implementing this function? The audio source file is placed in res/raw. Thanks in advance.

Also, I tried to use EXTRA_AUDIO_INJECT_SOURCE (API 31) on an Android emulator running version S, but it doesn't operate either. I put an Intent including the audio source URI. Is this the right way?

```
val url = Uri.parse("android.resource://" + getPackageName() + "/" + "raw" + "/" + "test.pcm")
val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
intent.putExtra(RecognizerIntent.EXTRA_AUDIO_INJECT_SOURCE, url)
```
How to recognize own audio source to text by EXTRA_AUDIO_SOURCE in RecognizerIntent
|android|speech-to-text|recognizer-intent|
null
I'm working with Angular 17 and I have the following question: I have an IDetails interface, which has some properties:

```
export interface IDetails {
  summary: Summary;
  description: string;
}
```

and I have another interface, which inherits from IDetails and adds a new property specific to it:

```
export interface ICertificate extends IDetails {
  url: string;
}
```

In my Angular component, I receive an array of this interface via input:

```
export class DetailsComponent implements OnInit {
  @Input({required: true}) public inputDetail: Array<IDetails> = [];
}
```

Now, in the HTML, when receiving this array, I do a for loop and pass the url field to a child component:

```
@for (item of inputDetail; track $index) {
  ...
  @defer (on viewport) {
    <app-validation [inputValidation]="item?.url"></app-validation>
  } @placeholder {
    <section>
      <h4>Loading...</h4>
    </section>
  } @error {
    <section>
      <h4>Error, Try Again.</h4>
    </section>
  }
}
```

And it is on the line where I try to pass the url property that the error occurs:

> Property 'url' does not exist on type 'IDetails | ICertificate'. Property 'url' does not exist on type 'IDetails'

How can I resolve this issue and correctly access and send the property to the child component?

I tried declaring the input with both array types. No success:

```
export class DetailsComponent implements OnInit {
  @Input({required: true}) public inputDetail: Array<IDetails> | Array<ICertificate> = [];
}
```
Error in html when accessing a property of an inherited interface. Angular
|angular|typescript|inheritance|
null
For a little [Chisel project][1] I'm using the `reduce(_ ## _)` function to convert an `IndexedSeq` to a `UInt`.
```Scala
class PdChain(n: Int = 4) extends Module {
  val io = IO(new Bundle {
    val count = Output(UInt(n.W))
  })

  // instantiate PDivTwo modules
  val pDivTwo = for (i <- 0 until n) yield {
    val pdivtwo = Module(new PDivTwo(i == 0))
    pdivtwo
  }
  val pDivTwo_io = VecInit(pDivTwo.map(_.io))

  // connect together
  pDivTwo_io(0).en := 1.U(1.W)
  for (i <- 1 until n) {
    pDivTwo_io(i).en := pDivTwo_io(i - 1).p
  }

  val countValue = for (i <- 0 until n) yield pDivTwo_io(i).q
  io.count := countValue.reverse.reduce(_ ## _)
}
```
The value `countValue` has type `IndexedSeq`, and the `reduce()` operation converts it to a `UInt`. This works without error in my project. But [I read that][2] the `reduce()` function must only be used with **commutative** operations. The concatenation operator `##` is **not** commutative, so should I avoid it here?

[1]: https://github.com/Martoni/PimpMyCounter
[2]: https://www.geeksforgeeks.org/scala-reduce-function/
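As far as I understand, on an ordered sequential collection Scala's `reduce` behaves as a left fold, so a non-commutative but associative operation like concatenation still gives a deterministic result; the commutativity caveat matters for parallel collections. The point can be illustrated with a hedged analogy in Python (string concatenation standing in for `##`, not actual Chisel code):

```python
from functools import reduce

# String concatenation stands in for Chisel's ## operator:
# associative, but NOT commutative ("10" + "11" != "11" + "10").
bits = ["1", "0", "1", "1"]  # MSB-first, as after .reverse in the snippet above

# A left fold over an ordered sequence applies the operator left to right,
# so the result is deterministic regardless of commutativity.
word = reduce(lambda a, b: a + b, bits)
print(word)  # prints 1011
```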
null
I am calculating the next_order_date for each order in a Spark SQL query. However, when multiple orders occur on the same date for a customer, the next_order_date is not computed accurately. I want to ensure that the next_order_date corresponds to the next distinct order date for the same customer.

Example: Consider the following transaction records:

| customer_key | order_id | order_date | quantity | amount |
|--------------|----------|------------|----------|--------|
| 34603 | 1 | 2022-10-08 | 1 | 499 |
| 34603 | 2 | 2022-10-08 | 1 | 499 |
| 34603 | 3 | 2022-10-08 | 4 | 499 |
| 34603 | 4 | 2023-07-12 | 2 | 499 |
| 34603 | 5 | 2023-10-15 | 1 | 499 |

The expected next_order_date for each order:

| customer_key | order_id | order_date | quantity | amount | next_order_date |
|--------------|----------|------------|----------|--------|-----------------|
| 34603 | 1 | 2022-10-08 | 1 | 499 | 2023-07-12 |
| 34603 | 2 | 2022-10-08 | 1 | 499 | 2023-07-12 |
| 34603 | 3 | 2022-10-08 | 4 | 499 | 2023-07-12 |
| 34603 | 4 | 2023-07-12 | 2 | 499 | 2023-10-15 |
| 34603 | 5 | 2023-10-15 | 1 | 499 | NULL |

This is what I tried:

```sql
WITH ranked_transactions AS (
    SELECT
        t.*,
        DENSE_RANK() OVER (PARTITION BY t.customer_key ORDER BY t.order_date) AS order_rank
    FROM transaction_records t
)
SELECT
    customer_key,
    order_id,
    order_date,
    quantity,
    amount,
    order_rank,
    LEAD(order_date) OVER (PARTITION BY customer_key, order_rank ORDER BY order_date) AS next_order_date
FROM ranked_transactions
```

But the next_order_date is calculated incorrectly.
Below is the output of the code:

| customer_key | order_id | order_date | quantity | amount | order_rank | next_order_date |
|--------------|----------|------------|----------|--------|------------|-----------------|
| 34603 | 1 | 2022-10-08 | 1 | 499 | 1 | 2022-10-08 |
| 34603 | 2 | 2022-10-08 | 1 | 499 | 1 | 2022-10-08 |
| 34603 | 3 | 2022-10-08 | 4 | 499 | 1 | 2023-07-12 |
| 34603 | 4 | 2023-07-12 | 2 | 499 | 2 | 2023-10-15 |
| 34603 | 5 | 2023-10-15 | 1 | 499 | 3 | NULL |

Explanation: In the SQL query, I use a common table expression (`WITH` clause) named `ranked_transactions` to calculate an `order_rank` for each transaction within the same customer, so that orders on the same date receive the same rank. Then, using the `LEAD` window function, I calculate `next_order_date` by partitioning the data by `customer_key` and `order_rank` and ordering it by `order_date`. However, this approach does not consider distinct dates for the next order. How can I calculate `next_order_date` correctly?
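To make the expected result concrete, here is a small Python sketch (independent of Spark, purely to pin down the desired semantics): for each row, the answer is the smallest order_date of the same customer that is strictly later than the row's own date.

```python
from datetime import date

# (customer_key, order_id, order_date) from the example above
orders = [
    (34603, 1, date(2022, 10, 8)),
    (34603, 2, date(2022, 10, 8)),
    (34603, 3, date(2022, 10, 8)),
    (34603, 4, date(2023, 7, 12)),
    (34603, 5, date(2023, 10, 15)),
]

def next_distinct_dates(rows):
    # For each row, find the smallest order_date of the same customer
    # that is strictly greater than the row's order_date.
    result = []
    for cust, oid, d in rows:
        later = [d2 for c2, _, d2 in rows if c2 == cust and d2 > d]
        result.append((oid, min(later) if later else None))
    return result

# order_id 1-3 -> 2023-07-12, 4 -> 2023-10-15, 5 -> None
print(next_distinct_dates(orders))
```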
LEAD function to get the next distinct order date
|sql|apache-spark-sql|
```
FlutterCarousel(
  options: CarouselOptions(
    physics: const NeverScrollableScrollPhysics(),
    controller: _carouselController,
    onPageChanged: (index, reason) {
      currentView = index + 1;
      // setState is called to update the current page with respect to the current view
      setState(() {});
    },
    height: 50.0,
    indicatorMargin: 10.0,
    showIndicator: true,
    slideIndicator: CircularWaveSlideIndicator(),
    viewportFraction: 0.9,
  ),
  items: swipeList.map((i) {
    return const Text('');
  }).toList(),
),
```

[The above code outputs this kind of carousel slider](https://i.stack.imgur.com/CY9IY.jpg) But I would like to change the look of the carousel slider to the one below: [enter image description here](https://i.stack.imgur.com/JcIOz.jpg)
When I move my mouse outside the boundary I drew in tkinter, it gives an error:

```
Exception in Tkinter callback
Traceback (most recent call last):
  File "c:\Users\My PC\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py", line 1962, in __call__
    return self.func(*args)
           ^^^^^^^^^^^^^^^^
  File "c:\Windows\System32\awdkhjgeuyfsbiufhefse.py", line 4, in <lambda>
    K=__import__('tkinter');m=K.Tk();R,C,z,p=50,50,10,0;l=[[0]*C for _ in range(R)];c=K.Canvas(m,w=C*z,he=R*z,bg='#0c0c0c');c.pack();[c.create_line(0,i*z,C*z,i*z,fill='#1a1a1a')for i in range(R)];[c.create_line(j*z,0,j*z,R*z,fill='#1a1a1a')for j in range(C)];c.bind("<B1-Motion>",lambda e:d(e.x//z,e.y//z));m.bind("<space>",t);m.mainloop()
    ^^^^^^^^^^^^^^^^
  File "c:\Windows\System32\awdkhjgeuyfsbiufhefse.py", line 1, in d
    def d(x,y):global l;l[y][x]=1;c.create_rectangle(x*z,y*z,(x+1)*z,(y+1)*z,fill='#fff',outline='')
    ~~~~^^^
IndexError: list assignment index out of range
```

The code is:

```
def d(x,y):global l;l[y][x]=1;c.create_rectangle(x*z,y*z,(x+1)*z,(y+1)*z,fill='#fff',outline='')
def t(_=None):global p;p^=1;['',u()][p]
def u():global l,K;l=[[1 if(l[y][x]==1 and sum(l[y+dy][x+dx]for dx,dy in((dx,dy)for dx in(-1,0,1)for dy in(-1,0,1)if(dx,dy)!=(0,0))if 0<=x+dx<C and 0<=y+dy<R)in(2,3))or(l[y][x]==0 and sum(l[y+dy][x+dx]for dx,dy in((dx,dy)for dx in(-1,0,1)for dy in(-1,0,1)if(dx,dy)!=(0,0))if 0<=x+dx<C and 0<=y+dy<R)==3)else 0 for x in range(C)]for y in range(R)];K=__import__('tkinter');c.delete("all");[c.create_line(0,i*z,C*z,i*z,fill='#1a1a1a')for i in range(R+1)];[c.create_line(j*z,0,j*z,R*z,fill='#1a1a1a')for j in range(C+1)];[[c.create_rectangle(x*z,y*z,(x+1)*z,(y+1)*z,fill='white',outline='')for x in range(C)if l[y][x]]for y in range(R)];m.after(100,u) if p else None
K=__import__('tkinter');m=K.Tk();R,C,z,p=50,50,10,0;l=[[0]*C for _ in range(R)];c=K.Canvas(m,w=C*z,he=R*z,bg='#0c0c0c');c.pack();[c.create_line(0,i*z,C*z,i*z,fill='#1a1a1a')for i in range(R)];[c.create_line(j*z,0,j*z,R*z,fill='#1a1a1a')for j in range(C)];c.bind("<B1-Motion>",lambda e:d(e.x//z,e.y//z));m.bind("<space>",t);m.mainloop()
```

How do I fix this? I tried running the code.
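The `IndexError` comes from `e.x//z` and `e.y//z` landing outside `0..C-1` / `0..R-1` once the pointer leaves the canvas. A minimal sketch of the bounds guard that would avoid it (pure Python, no Tk, variable names mirroring the snippet above):

```python
R, C = 50, 50  # grid size, as in the snippet above

def in_bounds(x, y, cols=C, rows=R):
    # True only when the cell coordinates fall inside the grid.
    return 0 <= x < cols and 0 <= y < rows

def paint(grid, x, y):
    # Mutate the grid only for valid cells; out-of-range motion
    # events are silently ignored instead of raising IndexError.
    if in_bounds(x, y):
        grid[y][x] = 1
    return grid

grid = [[0] * C for _ in range(R)]
paint(grid, 10, 10)   # valid cell: gets set
paint(grid, 60, -1)   # pointer outside the canvas: ignored
```

In the original `<B1-Motion>` handler the same check would wrap the `l[y][x]=1` assignment (and the matching `create_rectangle` call).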
IndexError when drawing out of bounds in tkinter
|python|
null
Does Spring Kafka support non-blocking retry functionality for Kafka Streams? I see details in the link below about setting it up for a Kafka listener: https://docs.spring.io/spring-kafka/reference/retrytopic/retry-topic-combine-blocking.html If non-blocking retry does work with Kafka Streams, kindly point me to documentation or an example.
Is it good practice to use `reduce(_ ## _)` for IndexedSeq to UInt conversion in Chisel?
|scala|reduce|chisel|
In the end we did these things to make it work:

- Use read queries without a transaction, so we read with SNAPSHOT isolation.
- Change the drop queries from fetch-and-drop to explicitly dropping edges and vertices by ID in one transaction; beforehand, we read the needed subgraph without a transaction.

This currently works, but we need to ensure that the data stays consistent, since we are no longer reading within a transaction.
```
count = {'38,-88': 3, '39,-88': 3, '39,-87': 5, '40,-87': 4, '41,-87': 1, '38,-87': 4};
```
I have many items in the object. I need to access the numbers after the colon (all of them, so I can run a loop over the object). I have already tried dot and square-bracket notation.
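For reference, a minimal sketch of what seems to be wanted (assuming a modern JS environment with `Object.values` / `Object.entries`): the numbers are the object's values, and the strings like `'38,-88'` are its keys, so dot/bracket access on the whole object won't give the numbers directly.

```javascript
const count = { '38,-88': 3, '39,-88': 3, '39,-87': 5, '40,-87': 4, '41,-87': 1, '38,-87': 4 };

// Object.values yields just the numbers after the colons.
const numbers = Object.values(count);
console.log(numbers); // [3, 3, 5, 4, 1, 4]

// Object.entries yields [key, value] pairs for a loop over the object.
let total = 0;
for (const [cell, n] of Object.entries(count)) {
  total += n; // n is the number for the cell key, e.g. '38,-88'
}
console.log(total); // 20
```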