I'm trying to bring up a VM through `virsh define`, `virsh start`. The commands don't show any errors and `virsh list` shows the VM is running. However, I can't access the console, so I checked the libvirt log. There are a few errors reporting:

```
virStorageFileBackendFileRead Failed to open file '/dev/...': Permission denied
```

The `/dev/` paths are the logical volumes that I created. I'm already running as `root`, so I'm not sure what caused the permission error. I saw this [thread][1] which says this is related to AppArmor. I tried to disable AppArmor with `systemctl disable apparmor`, but the permission error remains. Any help would be appreciated.

[1]: https://bugs.launchpad.net/apparmor/+bug/1825745
virStorageFileBackendFileRead Failed to open file '/dev/...': Permission denied
|virtual-machine|kvm|libvirt|
Suppose you are limited to 7 bits for a floating-point representation: 1 sign bit, 3 exponent bits, and 3 fraction bits. First I convert `3/32` to the binary `0.00011`, then to the standard scientific notation of `1.1 * 2^(-4)`. At this point I realize my exponent field will be `-1`, which is not valid. I try to represent `3/32` as `0.11 * 2^(-3)` instead, which leads to the more intuitive representation of `1 000 110`. However, obviously this is a denormalized value, and if I try to convert the representation back to decimal I get `-3/16`. My question is: is it even possible to represent this value precisely within the constraints of the problem? It looks like the smallest representable value for this scheme is `-15`, so `-3/32` falls within this interval. I'm aware that bits are dropped and precision is lost during conversions; is this the case here?
How can I remove TextField focus when I press return or click outside the TextField?
I'm trying to solve a non-linear equation with nleqslv, and I know a priori if I want a positive or negative solution (it's a dataset on choice under risk, and I'm trying to compute the risk aversion coefficient for each individual under a CRRA assumption. Since I can observe the DMs' choices, I already know if each DM is risk averse or not). Is there any way to enforce it? I know I could try with different initial values; however, I'd like to find another way, as I'm using nleqslv in a for loop (I must compute one solution for each observation) and I can't find an initial guess that works for everyone. My code is the following:

```r
bernoulli <- function(x, r) {
  ifelse(x >= 0,
         ((x + 1)^(1 - r) - 1) / (1 - r),
         -((-x + 1)^(1 - r) - 1) / (1 - r))
}

bernoulli.log <- function(x) {
  ifelse(x >= 0, log(x + 1), -log(-x + 1))
}

mydata$alpha_crra <- NA
for (i in 1:nrow(mydata)) {
  indiff.eq.crra <- function(r) {
    return(bernoulli(mydata$CE[i], r) -
           mydata$p[i] * bernoulli(mydata$win[i], r) -
           (1 - mydata$p[i]) * bernoulli(mydata$lose[i], r))
  }
  mydata$alpha_crra[i] <- ifelse(mydata$riskneutral[i] == 1, 0,
    ifelse(abs(bernoulli.log(mydata$CE[i]) -
               mydata$p[i] * bernoulli.log(mydata$win[i]) -
               (1 - mydata$p[i]) * bernoulli.log(mydata$lose[i])) < 0.001 &
           mydata$riskaverse[i] == 1,
           1,
           nleqslv(-5, indiff.eq.crra)$x))
}
```

where:

> mydata$win[i] = the high payoff of the lottery (can change depending on the observation)
> mydata$lose[i] = the low payoff of the lottery
> mydata$p[i] = probability to win the high payoff
> mydata$CE[i] = the Certainty Equivalent stated by DM i
> mydata$riskneutral[i] = a dummy variable = 1 if i is risk neutral, 0 otherwise
> mydata$riskaverse[i] = a dummy variable = 1 if i is risk averse, 0 otherwise
I am new to Rust but recently came across a problem I don't know how to solve: working with nested (and multi-dimensional) key-value pair structures in Rust that are dynamically generated by string splitting. The sample dataset looks something like this:

    Species | Category
    Dog     | Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Canidae, Canis, C. familiaris
    Cat     | Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Feliformia, Felidae, Felinae, Felis, F. catus
    Bear    | Eukaryota, Animalia, Chordata, Mammalia, Carnivora, Ursoidea, Ursidae, Ursus
    ...

The goal is to split on the comma delimiter and create a map or vector, essentially creating "layers" of nested keys (either as a vector or as keys to a *final value*). From my understanding, Rust has a crate called `serde_json` which can be used to create key-value pairings like so:

```rust
let mut array = Map::new();
for (k, v) in data.into_iter() {
    array.insert(k, Value::String(v));
}
```

As for comma-delimited string splitting, it might look something like this:

```rust
let categories = "a, b, c, d, e, f".split(", ");
let category_data = categories.collect::<Vec<&str>>();
```

However, the end goal is a recursively nested map or vector that follows the *Category* column and can ultimately be serialised to JSON output. How would this be implemented in Rust? In addition, while we might know the number of rows in the sample dataset, isn't it quite resource-intensive to walk all the "comma-delimited layers" in the *Category* column just to know the final size of the structure, if Rust's memory-safe design requires an array to be initialised with a defined size? Would the maximum number of layers need to be known for this to be doable, or can we build an arbitrarily nested multi-dimensional structure without specifying a map or vector size up front?
For further reference, in PHP this might be implemented like so:

```php
$output_array = array();
foreach ($data_rows as $data_row) {
    $temp =& $output_array;
    foreach (explode(', ', $data_row["Category"]) as $key) {
        $temp =& $temp[$key];
    }
    // Check if array is already initialized; if not, create a new array with the new data
    if (!isset($temp)) {
        $temp = array($data_row["Species"]);
    } else {
        array_push($temp, $data_row["Species"]);
    }
}
```

How would a similar solution be implemented in Rust? Thanks in advance!
In Rust, is it possible to downcast a trait to an owned type?
|rust|lifetime|
Style your text inputs with a [color][1] of your choice. In your CSS, add a rule like this:

```css
input[type=text] {
  color: white;
}
```

[1]: https://www.w3schools.com/css/css_form.asp
When a user changes the font size on another page, it should automatically become the default font size. The user changes the font in Settings, the new size is updated in my database, and the stored font size is then fetched. I have to apply that fetched font size to my textarea.
I want to change the font size in a web application using C# code
|.net|web|
How to write multiline text to textarea in Laravel
|php|laravel|
From my understanding, all four of these methods: `predict`, `predict_on_batch`, `predict_step`, and a direct forward pass through the model (e.g. `model(x, training=False)`, i.e. `__call__()`) should give the same results; some are just more efficient than others in how they handle batches of data versus one sample. But I am actually getting different results on an image super-resolution (upscaling) task I'm working on:

```python
for lowres, _ in val.take(1):
    # Get a randomly cropped region of the lowres image for upscaling
    lowres = tf.image.random_crop(lowres, (150, 150, 3))  # uint8

    # Need to add a dummy batch dimension for the predict step
    model_inputs = tf.expand_dims(lowres, axis=0)  # (1, 150, 150, 3), uint8

    # And convert the uint8 image values to float32 for input to the model
    model_inputs = tf.cast(model_inputs, tf.float32)  # float32

    preds = model.predict_on_batch(model_inputs)
    min_val = tf.reduce_min(preds).numpy()
    max_val = tf.reduce_max(preds).numpy()
    print("Min value: ", min_val)
    print("Max value: ", max_val)

    preds = model.predict(model_inputs)
    min_val = tf.reduce_min(preds).numpy()
    max_val = tf.reduce_max(preds).numpy()
    print("Min value: ", min_val)
    print("Max value: ", max_val)

    preds = model.predict_step(model_inputs)
    min_val = tf.reduce_min(preds).numpy()
    max_val = tf.reduce_max(preds).numpy()
    print("Min value: ", min_val)
    print("Max value: ", max_val)

    preds = model(model_inputs, training=False)  # __call__()
    min_val = tf.reduce_min(preds).numpy()
    max_val = tf.reduce_max(preds).numpy()
    print("Min value: ", min_val)
    print("Max value: ", max_val)
```

Prints:

    Min value:  -6003.622
    Max value:  5802.6826
    Min value:  -6003.622
    Max value:  5802.6826
    Min value:  -53.7696
    Max value:  315.1499
    Min value:  -53.7696
    Max value:  315.1499

Both `predict_step` and `__call__()` give the "correct" answers, in the sense that the upscaled images look correct.
I'm happy to share more details on the model if that's helpful, but for now I thought I'd just leave it at this to not overcomplicate the question. At first I wondered if these methods had different results based on training/inference modes, but my model doesn't use any `BatchNorm` or `Dropout` layers, so that shouldn't make a difference here. It's completely composed of: `Conv2D`, `Add`, `tf.nn.depth_to_space` (pixel shuffle), and `Rescaling` layers. That's it. It also doesn't use any subclassing or override any methods, just uses `keras.Model(inputs, outputs)`. Any ideas why these prediction methods would give different answers? **EDIT:** I've been able to create a minimally reproducible example where you can see the issue. Please see: https://www.kaggle.com/code/quackaddict7/really-minimum-reproducible-example I initially couldn't reproduce the problem in a minimal example. I eventually added back in a dataset, batching, data augmentation, training, model file saving/restoring, and eventually discovered **the issue is GPU vs. CPU!** So I took all that back out for my minimal example. If you run the notebook attached you'll see that on CPU, all four methods give the same answer with randomly initialized weights. But if you change to P100 GPU, `predict`/`predict_on_batch` differ from `predict_step`/forward pass (`__call__`). So I guess at this point, my question is, why are CPU vs. GPU results different here?
Header dateClick is not firing/working in full calendar v5
|fullcalendar-5|
I'm looking to set up alarms based on thresholds for particular log events, such as "failed logins". I have an Insights query that returns all the log entries I'm interested in. Is there a way I can set up metrics and alarms based on Insights queries? I found an editor to do this in Metrics but was unable to save it, and couldn't select my log groups; I feel like I'm missing something. Thanks! My example Insights query:

    fields @timestamp, @message
    | filter @message LIKE "User login failed"
    | parse @message "* * [*] *" as date, level, object, message
    | sort @timestamp desc

I tried adding this query to the editor in Metrics but it won't save.
Setting up alarms for Cloudwatch Insight Queries
|amazon-cloudwatch|aws-cloudwatch-log-insights|cloudwatch-alarms|
At this date you can only run JavaScript, TypeScript, or Python code on Cloud Functions. Consequently, you will need to convert your code to one of those languages. You can use the **[Dart compile-to-JS compiler][1]** to generate JS code from your Dart code:

```console
dart compile js
```

Beware that this compiles your Dart code to deployable JavaScript, which may not always be compatible with Node.js.

[1]: https://dart.dev/tools/dart-compile#js
|database|character|converters|abap|
I fixed the error after discovering that `$HOME/.buildozer/android/platform/android-sdk` contains a zip file; I just unzipped it and it worked. Hope this helps.
I'm learning Spring Boot and microservices. I have created three services: one runs on port 9000 and the second on 8080, but while building the third I've tried many other ports (6341, 5030, 4661, and even a random port number) and I keep getting an ECONNREFUSED error in Postman. I did try listing all the LISTENING ports in my project, but I'm still facing the same problem. Please tell me how to determine which port should be used.
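For what it's worth, ports aren't assigned per framework: any unprivileged port (1024-65535) that's currently free on the machine works. If you just need *a* free port, you can ask the OS for one by binding to port 0 (Spring Boot's `server.port=0` does the same under the hood). A minimal, framework-free sketch:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
    public static void main(String[] args) throws IOException {
        // Binding to port 0 asks the OS for any free ephemeral port.
        try (ServerSocket socket = new ServerSocket(0)) {
            int port = socket.getLocalPort();
            // Print whether the OS handed back a usable port number
            System.out.println(port > 0 && port <= 65535);
        }
    }
}
```

Note that ECONNREFUSED in Postman usually means nothing is listening on the port you called, so it is also worth checking that the third service actually started on the port you configured.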
How to choose port numbers for various microservices? Whatever port number I use is already used/blocked or I'm not able to use it
|java|mongodb|spring-boot|microservices|port|
|sapui5|
**I'll try to provide some more context:** `ultralytics_crop_objects` is a list of about 20 `numpy.ndarray`s representing pictures of shape (59, 381, 3), e.g.: [ultralytics_crop_objects[5]](https://i.stack.imgur.com/x6QvJ.png). I started by passing a single picture out of the list to recognize:

    pipeline.recognize([ultralytics_crop_objects[5]]) --> ji856931

The result is "ji856931", so not all characters were detected. But when I pass the entire list of pictures and look at the result for the 6th picture, the result is different. See: [Different Results][1]

    results = pipeline.recognize(ultralytics_crop_objects)
    results[5] --> ji8569317076

I don't understand it at all and would be very happy if someone could provide a hint. My only explanation would be that Keras OCR uses a different detection threshold for a single picture than for a list of more than one picture. Could that be the case? I have checked multiple times to ensure that I did not accidentally use another pipeline or different input pictures; they are the same. I have also done extensive research online. Here's the complete code:

```python
import keras_ocr

pipeline = keras_ocr.pipeline.Pipeline()

results = pipeline.recognize([ultralytics_crop_objects[5]])
print(results)

results = pipeline.recognize(ultralytics_crop_objects)
print(results[5])
```
The problem you are facing is a CORS restriction. Ensure that your API server is configured to allow requests from the domain or IP address you are using; if CORS is not properly configured, the browser may block requests from different origins. You can also install an allow-CORS extension from the Chrome Web Store. Thanks.
I'm developing a mobile application in React Native. The application has a `<TextInput/>` for entering SMS codes:

```js
const [pinCode, setPinCode] = useState('');
const pinCodeRef = useRef();

// crutch so that the user's keyboard opens automatically
useEffect(() => {
  const timeout = setTimeout(() => {
    pinCodeRef.current.focus();
  }, 400);
  return () => clearTimeout(timeout);
}, []);

...

<TextInput
  placeholder="Enter pin-code"
  placeholderTextColor={commonStyles.colors.label}
  value={pinCode}
  onChangeText={val => setPinCode(val)}
  maxLength={6}
  ref={pinCodeRef}
  keyboardType="number-pad"
  autoComplete="sms-otp"
  textContentType="oneTimeCode"
/>
```

I use the properties `keyboardType="number-pad" autoComplete="sms-otp" textContentType="oneTimeCode"` so that the keyboard prompts the user with the SMS code. **The problem is that 6-digit codes are not prompted on iOS, while 4-digit codes work correctly. On Android everything works correctly.**

About the app:

```
System:
    OS: macOS 13.4
    CPU: (8) arm64 Apple M1 Pro
    Memory: 141.94 MB / 16.00 GB
    Shell: 5.9 - /bin/zsh
  Binaries:
    Node: 16.18.0 - ~/.nvm/versions/node/v16.18.0/bin/node
    Yarn: 1.22.19 - /opt/homebrew/bin/yarn
    npm: 8.19.2 - ~/.nvm/versions/node/v16.18.0/bin/npm
    Watchman: Not Found
  Managers:
    CocoaPods: 1.11.2 - /Users/alexander/.rvm/rubies/ruby-2.7.4/bin/pod
  SDKs:
    iOS SDK:
      Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1
    Android SDK: Not Found
  IDEs:
    Android Studio: 2022.2 AI-222.4459.24.2221.10121639
    Xcode: 14.2/14C18 - /usr/bin/xcodebuild
  Languages:
    Java: 11.0.16.1 - /usr/bin/javac
  npmPackages:
    @react-native-community/cli: Not Found
    react: 17.0.2 => 17.0.2
    react-native: 0.67.4 => 0.67.4
    react-native-macos: Not Found
  npmGlobalPackages:
    *react-native*: Not Found
```

Libraries (for example `react-native-otp-auto-fill`) did not manage to suggest the code (on iOS and on Android). Only the properties mentioned above helped me.

P.S. The SMS text is the same (only the code length differs)
I have multiple points (lon, lat) for a range of countries. These points are related by an ID that, when connected, creates a line.

    aus:
    id country   point_id      lon       lat
     1 Australia        0 130.1491 -19.57520
     1 Australia        1 129.9958 -19.48760
     1 Australia        2 129.7156 -19.25788
     1 Australia        3 129.7104 -19.20223
     2 Australia        0 129.2510 -18.59016
     2 Australia        1 129.5436 -18.30723
     3 Australia        0 137.2840 -20.06129
     3 Australia        1 137.2865 -20.04308
     3 Australia        2 137.1915 -20.00782
     3 Australia        3 137.1220 -19.97166
     3 Australia        4 137.0650 -19.91363
     3 Australia        5 136.8961 -19.85932
     4 Australia        0 136.8961 -19.85932
     4 Australia        1 136.8791 -19.88669
     4 Australia        2 136.8594 -19.91227
     4 Australia        3 136.8454 -19.92507
     4 Australia        4 136.8360 -19.92976

I managed to create the geometry following this post [[1][1]]; however, I think my attempt can be automated further. I attempted using group_by individually, as follows:

```r
# same logic as in [1] but by group
aus_group <- aus %>% group_by(id, country)
b_group <- aus %>% group_by(id, country) %>% select(lon, lat)
e_group <- aus %>% group_by(id, country) %>% filter(row_number() != 1) %>% select(lon, lat)
f_group <- e_group %>% group_by(id, country) %>% summarise_all(last) %>% select(country, lon, lat)
g_group <- e_group %>% group_by(id, country) %>% rbind(f_group) %>% arrange(id)

aus_group$geometry = do.call(
  "c",
  lapply(seq(nrow(b_group)), function(i) {
    st_sfc(
      st_linestring(
        as.matrix(
          rbind(b_group[i, c("lon", "lat")], g_group[i, c("lon", "lat")])
        )
      ),
      crs = 4326
    )
  }))

dat_g_sf = st_as_sf(aus_group)
mapview(dat_g_sf, zcol = "id")

# to compare with the approach in [1]
dat_g_sf %>% filter(id == 1) %>% mapview(zcol = "point_id")
# got the same plot, so it works
```

I share this part in case there are suggestions to improve the group_by part. My actual question is about estimating the distance between points and the relative elevation between those points.
For the distance estimation, I have tried this:

```r
aus_2 <- aus_1 %>% select(id, country, point_id, lon, lat)  # to avoid error with geometry
str(aus_2)

aus_2$distance = do.call(
  "c",
  lapply(seq(nrow(b)), function(i) {
    st_distance(
      st_sfc(st_point(
        as.matrix(rbind(b[i, ], g[i, ]), by_element = TRUE)
      )),
      crs = 4326
    )
  }))
```

    Error in st_point(as.matrix(rbind(b[i, ], g[i, ]), by_element = TRUE)) :
      nrow(x) == 1 is not TRUE

I think the error might be related to the matrix creation; I appreciate any help here. For the relative elevation estimation, I haven't found previous work on this using R, so suggestions on where to start are welcome. Also, what is the meaning of `"c"` in [1]? How does it work in the function?

[1]: https://stackoverflow.com/questions/55187057/connecting-two-sets-of-coordinates-to-create-lines-using-sf-mapview
DataMapper
----------

> An Object Relational Mapper written in PHP for CodeIgniter. It is designed to map your database tables into easy-to-work-with objects, fully aware of the relationships between each other.

Website: http://datamapper.wanwizard.eu/

Gas ORM
-------

> A lightweight and easy-to-use ORM for CodeIgniter. Gas was built specifically for CodeIgniter apps. It uses the CodeIgniter Database packages, a powerful DBAL which supports numerous DB drivers. Gas ORM provides a set of methods that will map your database tables and their relationships into accessible objects.

Website: https://github.com/toopay/gas-orm

Doctrine
--------

Website: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/

**NOTE**: You must do some work to integrate this with CI; try [here][1].

**EDIT**: Doctrine, [integrating with CodeIgniter][2] (working URL). *This might not work for all CodeIgniter versions and may require slight adjustments.*

[1]: http://docs.doctrine-project.org/en/2.0.x/cookbook/integrating-with-codeigniter.html
[2]: http://doctrine-orm.readthedocs.org/projects/doctrine-orm/en/latest/cookbook/integrating-with-codeigniter.html
Just found out that the GC is not all that great at handling cyclic references in some cases... Use a `WeakReference<RefType>` and this creates a dead end anyway. Coming from C++, it's been a bumpy road with the GC. In C++, unique and shared pointers are just more powerful. GC is ok but then... meh... :)
I have a User entity which has a many-to-many relationship with the Authority table, a many-to-one with UserGroup, and another many-to-one with Company. I am using a projection interface named UserDto and trying to fetch all users with the help of an entity graph with attributePaths of authorities, company and userGroup. Before Spring 3 this always gave me distinct parents, but now it is doing a Cartesian product. I read somewhere that Hibernate 6 automatically de-duplicates the result set, so I tried writing a JPA query myself using join fetch, adding the distinct keyword, and using the User entity instead of the projection, but I got the same result.

```java
@Entity
@Table(name = "USERS")
public class User extends Cacheable<Integer, User> implements AuditableEntity {

    private static final long serialVersionUID = -4759265801462008942L;

    @Id
    @Column(name = "USER_ID", nullable = false)
    @TableGenerator(name = "USER_ID", table = "ID_GENERATOR", pkColumnName = "GEN_KEY",
            valueColumnName = "GEN_VALUE", pkColumnValue = "USER_ID", allocationSize = 10)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "USER_ID")
    private Integer id;

    @Column(name = "EMAIL_ID", unique = true, nullable = false, length = 254)
    @FieldDescription(name = "Email ID", order = 2, type = ExcelColumnType.STRING)
    private String emailId;

    @JsonIgnore
    @Column(name = "PASSWORD", nullable = false, length = 60)
    private String password;

    @Column(name = "ENABLED", nullable = false)
    private boolean enabled = false;

    @Column(name = "LOCKED", nullable = false)
    private boolean locked = false;

    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(name = "USER_AUTHORITY",
            joinColumns = {@JoinColumn(name = "USER_ID", referencedColumnName = "USER_ID")},
            inverseJoinColumns = {@JoinColumn(name = "AUTHORITY_NAME", referencedColumnName = "NAME")})
    @NotNull
    @FieldDescription(name = "Authorities", order = 3, type = ExcelColumnType.AUTHORITY)
    private Set<Authority> authorities = new HashSet<>();

    @NotEmpty
    @Size(max = 60)
    @Column(name = "FIRST_NAME", length = 60)
    @FieldDescription(name = "First Name", order = 0, type = ExcelColumnType.STRING, breakIf = { "" }, required = true)
    private String firstName;

    @Column(name = "MIDDLE_NAME", length = 60)
    private String middleName;

    @NotEmpty
    @Size(max = 60)
    @Column(name = "LAST_NAME", length = 60)
    @FieldDescription(name = "Last Name", order = 1, type = ExcelColumnType.STRING, required = true)
    private String lastName;

    @Column(name = "PHONE_NO", length = 30)
    private String phoneNo;

    @Column(name = "MOBILE_NO", length = 20)
    private String mobileNo;

    @Column(name = "FAX_NO", length = 20)
    private String faxNo;

    @Column(name = "LOCALE", length = 10)
    private Locale locale = Locale.UK;

    @Column(name = "LAST_UPDATE_SEEN")
    private Date lastUpdateSeen;

    @Column(name = "SIGN_UP_DATE")
    private Date signUpDate;

    // Setting it to eager as this will always be needed.
    @NotNull
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "COMPANY_ID")
    @FieldDescription(name = "Company", order = 4, type = ExcelColumnType.COMPANY, required = true)
    private Company company;

    @ManyToOne
    @JoinColumn(name = "USER_GROUP_ID")
    private UserGroup userGroup;

    @Column(name = "STRATEGY_HEAD")
    private Boolean strategyHead = false;

    @Column(name = "OAUTH_SERVER_ID", length = 60)
    private String oauthServerId;

    @Column(name = "PICTURE_FILE_NAME", length = 100)
    private String pictureFileName;

    @JsonProperty(access = Access.WRITE_ONLY)
    @OneToMany(mappedBy = "user", cascade = CascadeType.ALL)
    @OrderBy("requestSentDate DESC")
    private List<ForgotPassword> forgotPassword = new ArrayList<>();

    @Column(name = "DISABLE_MFA")
    private Boolean disableMfA;

    @Column(name = "DISABLE_IP_RESTRICTION")
    private Boolean disableIpRestriction;

    @Column(name = "LAST_UPDATED_PASSWORD")
    private LocalDateTime lastUpdatedPassword;

    @Column(name = "PRIME_USER")
    @FieldDescription(name = "Prime User", order = 6, type = ExcelColumnType.BOOLEAN, convertBoolToYN = true)
    private Boolean primeUser = false;

    @Transient
    @FieldDescription(name = "User Group", order = 5, type = ExcelColumnType.STRING, required = true)
    private String userGroupName;

    // hashCode and equals based on id
}
```

Here is the projection interface:

```java
public interface UserDto {
    Integer getId();
    String getEmailId();
    String getFirstName();
    String getLastName();

    @Value("#{target.firstName + ' ' + target.lastName}")
    String getFullName();

    Company getCompany();

    @Value("#{target.userGroup.id}")
    Integer getUserGroupId();

    @Value("#{target.userGroup.name}")
    String getUserGroupName();

    Boolean getEnabled();
    Set<Authority> getAuthorities();
    Locale getLocale();
    String getPictureFileName();
    String getOauthServerId();
    Boolean getLocked();
    Boolean getDisableMfA();
    Boolean getDisableIpRestriction();

    @Value("#{target.oauthServerId != null}")
    boolean isSsoUser();

    LocalDateTime getLastUpdatedPassword();
    Date getSignUpDate();
    Boolean getPrimeUser();
}
```

This is the repository:

```java
@Repository
public interface UserRepository extends JpaRepository<User, Integer>, CacheableRepository<User, Integer> {

    // other methods..

    @EntityGraph(attributePaths = { "authorities", "company", "userGroup" })
    <E> List<E> findBy(Class<E> type);

    // another way
    //@Query("Select u from User u left join fetch u.authorities a left join fetch u.company c left join fetch u.userGroup")
    //@QueryHints(value = { @QueryHint(name = org.hibernate.jpa.QueryHints.HINT_PASS_DISTINCT_THROUGH, value = "false")})
    //List<User> findBy();
}
```

Service:

```java
@Transactional
public List<UserDto> getAll() {
    return userRepository.findBy(UserDto.class);
}
```
I'm using RoProxy instead of roblox.com to get around CORS measures. The request body is username, ctype, password, and userId, and for the headers I'm just setting the X-csrf-token header. It's just returning 403(). Is this a CSRF token thing? I couldn't find anything about it online. Ideally, it would print the incorrect-password error, or anything but 403(). Here's the code I'm using:

```js
const passwordcheck = {
  ctype: "Username",
  cvalue: "usernametomyaccount",
  password: "passwordtomyaccount",
  captchaToken: "CaptchaFromFuncaptcha",
  captchaId: "Captcha",
  captchaprovider: "PROVIDER_ARKOSE_LABS",
  userId: 889124,
};

const request = {
  method: "POST",
  headers: {
    "Content-type": "application/json",
    "X-csrf-token": "wiXzYTbS/xF3",
    Cookie: "",
  },
  body: JSON.stringify(passwordcheck),
};
```

The URL is the RoProxy v2 login endpoint. I've tried using different proxies, but nothing came of it. Filling out the request body with all the fields in the Roblox auth documentation didn't work either.
I am trying to use the ARMCC Keil toolchain with CMake in VS Code. I copied the asm/compiler/linker flags from a working Keil project, except that I don't use .crf and .d files everywhere. I see strange behavior while debugging with the Cortex-Debug extension in VS Code. It compiles and links, but I noticed that when a function pointer is initialized to NULL, its value is evaluated as 0xffffffff. To witness this more precisely (the application I am trying to build is quite big), I added code at the very beginning of main:

```
typedef void (*function_pointer)(void);

static function_pointer function = NULL;

/* ------------------------------------------------------------------- */
/**
 * @brief Function for application main entry.
 */
int main(void)
{
    if(function != NULL)
    {
        function = (function_pointer)NULL;
        function();
    }
    ...
```

The debugger breaks within the if statement (so `function` is indeed not NULL, while it should be). After the NULL assignment, `function` is still not assigned. When I add a preprocessor flag, I can confirm that NULL is expanded to 0:

```
typedef void (*function_pointer)(void);

static function_pointer function = 0;

int main(void)
{
    if(function != 0)
    {
        function = (function_pointer)0;
        function();
    }
```

Note that I generate a .elf file with CMake, while Keil generates a .axf file (which should be the same, as ELF files are standard). I tried to generate .crf files and .d files but I am not sure they're needed with the Cortex-Debug VS Code extension. Also, I noticed that the programming behavior changes depending on whether I program via Keil or via VS Code: it takes much less time in VS Code, and the LED on my debug board that sometimes blinks with Keil doesn't light with VS Code. I can provide all details (CMake generation/compiler/linker commands, scatter file, debug launch command, etc.). I just wanted the question to be clear and not provide 100+ lines of code, but feel free to ask anything. Thank you for your help.
Estimating distance between points and its relative elevation for multiple countries and geometries, using R sf
|r|geospatial|elevation|
Navigate to Environment Variables and edit PATH. If you have multiple entries there for multiple Python versions, make sure the one you would like to be used by default is above the entries for the other versions. Also check the PATH hierarchy in both User Variables and System Variables.
I see two options: Using manual computation of a [timedelta](https://pandas.pydata.org/docs/reference/api/pandas.to_timedelta.html): ``` df = pd.DataFrame({'date': pd.date_range('2024-01-01', '2024-01-10')}) df['next_Monday'] = df['date'].add(pd.to_timedelta(7-df['date'].dt.dayofweek, unit='D')) ``` Or with an [`Week`](https://pandas.pydata.org/docs/reference/api/pandas.tseries.offsets.Week.html) offset: ``` df = pd.DataFrame({'date': pd.date_range('2024-01-01', '2024-01-10')}) df['next_Monday'] = df['date'].add(pd.offsets.Week(n=1, weekday=0)) ``` Output: ``` date next_Monday 0 2024-01-01 2024-01-08 1 2024-01-02 2024-01-08 2 2024-01-03 2024-01-08 3 2024-01-04 2024-01-08 4 2024-01-05 2024-01-08 5 2024-01-06 2024-01-08 6 2024-01-07 2024-01-08 7 2024-01-08 2024-01-15 8 2024-01-09 2024-01-15 9 2024-01-10 2024-01-15 ``` If you only want to consider a next day when the day is over (i.e. if we are a Monday, keep the current date): ``` df['next_Monday'] = df['date'].add(pd.offsets.Week(n=0, weekday=0)) date next_Monday 0 2024-01-01 2024-01-01 1 2024-01-02 2024-01-08 2 2024-01-03 2024-01-08 3 2024-01-04 2024-01-08 4 2024-01-05 2024-01-08 5 2024-01-06 2024-01-08 6 2024-01-07 2024-01-08 7 2024-01-08 2024-01-08 8 2024-01-09 2024-01-15 9 2024-01-10 2024-01-15 ```
I have an application to predict the size of a fish in an image. I have built a FastAPI endpoint, `/predict/`, that runs the multi-step process to make that prediction. The steps include two calls to external APIs (not under my control, so I can't see more than what they return). When I run my code just from the script, such as through an IDE (I use PyCharm), the code for the prediction steps runs correctly and I get appropriate responses back from both APIs. The first is to [Roboflow][1], and here is an example of the output from running the script (again, I just call this from the command line or hit Run in PyCharm):

    2024-03-30 10:59:36,073 - DEBUG - Starting new HTTPS connection (1): detect.roboflow.com:443
    2024-03-30 10:59:36,339 - DEBUG - https://detect.roboflow.com:443 "POST /fish_measure/1?api_key=AY3KX4KMynZroEOyXUEb&disable_active_learning=False HTTP/1.1" 200 914

The second is to [Fishial][2], and here is an example of the output from running the script (script or through PyCharm), where this one has to get the token, URL, etc.:

    2024-03-30 11:02:31,866 - DEBUG - Starting new HTTPS connection (1): api-users.fishial.ai:443
    2024-03-30 11:02:33,273 - DEBUG - https://api-users.fishial.ai:443 "POST /v1/auth/token HTTP/1.1" 200 174
    2024-03-30 11:02:33,273 - INFO - Access token: eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3MTE4MTE1NTMsImtpZCI6ImIzZjNiYWZlMTg2NGNjYmM3ZmFkNmE5YSJ9.YtlaecKMyxjipBDS97xNV3hYKcF3jRpOxTAVnwrxOcE
    2024-03-30 11:02:33,273 - INFO - Obtaining upload url...
    2024-03-30 11:02:33,582 - DEBUG - Starting new HTTPS connection (1): api.fishial.ai:443
    2024-03-30 11:02:33,828 - DEBUG - https://api.fishial.ai:443 "POST /v1/recognition/upload HTTP/1.1" 200 1120
    2024-03-30 11:02:33,829 - INFO - Uploading picture to the cloud...
2024-03-30 11:02:33,852 - DEBUG - Starting new HTTPS connection (1): storage.googleapis.com:443 2024-03-30 11:02:34,179 - DEBUG - https://storage.googleapis.com:443 "PUT /backend-fishes-storage-prod/6r9p24qp4llhat8mliso8xacdxm5?GoogleAccessId=services-storage-client%40ecstatic-baton-230905.iam.gserviceaccount.com&Expires=1711811253&Signature=gCGPID7bLuw%2FzUfv%2FLrTRPeQA060CaXQEqITPvW%2FWZ5GHXYKDRNCxVrUJ7UmpHVa0m60gIMFwFSQhYqsDmP3SkjI7ZnJSIEj53zxtOpcL7o2VGv6ZUuoowWwzmzqeM9yfbCHGI3TmtuW0lMhqAyi6Pc0wYhj73P12QU28wF8sdQMblHQLQVd1kFXtPl5yjSW12ADt4WEvB7dbnl7HmUTcL8WFS2SnJ1zcLljIbXTlRWcqc88MIcklSLG69z%2FJcUSh%2BeNxRp%2Fzotv5GitJBq9pF%2BzRt25lCt%2BYHGViJ46uu4rQapZBfACxsE762a1ZcrvTasy97idKRaijLJKAtZBRQ%3D%3D HTTP/1.1" 200 0 2024-03-30 11:02:34,180 - INFO - Requesting fish recognition... 2024-03-30 11:02:34,182 - DEBUG - Starting new HTTPS connection (1): api.fishial.ai:443 2024-03-30 11:02:39,316 - DEBUG - https://api.fishial.ai:443 "GET /v1/recognition/image?q=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBBMksyUEE9PSIsImV4cCI6bnVsbCwicHVyIjoiYmxvYl9pZCJ9fQ==--d37fdc2d5c6d8943a59dbd11326bc8a651f9bd69 HTTP/1.1" 200 10195 Here is the code for the endpoint: from fastapi import FastAPI, File, UploadFile, HTTPException, BackgroundTasks from fastapi.middleware.cors import CORSMiddleware from pydantic import BaseModel from typing import Union class PredictionResult(BaseModel): prediction: Union[float, str] eyeball_estimate: Union[float, str] species: str elapsed_time: float @app.post("/predict/", response_model=PredictionResult) async def predict_fish_length(file: UploadFile = File(...)): try: # capture the start of the process so we can track duration start_time = time.time() # Create a temporary file temp_file = tempfile.NamedTemporaryFile(delete=False) temp_file_path = temp_file.name with open(temp_file_path, "wb") as buffer: shutil.copyfileobj(file.file, buffer) temp_file.close() prediction = process_one_image(temp_file_path) end_time = time.time() # Record the end time elapsed_time = 
end_time - start_time # Calculate the elapsed time return PredictionResult( prediction=prediction["prediction"][0], eyeball_estimate=prediction["eye_ratio_len_est"][0], species=prediction["species"][0], elapsed_time=elapsed_time ) except Exception as e: # Clean up the temp file in case of an error os.unlink(temp_file_path) raise HTTPException(status_code=500, detail=str(e)) from e I run this through `uvicorn`, then try to call the endpoint through `curl` as follows: curl -X POST http://127.0.0.1:8000/predict/ -F "file=@/path/to/image.jpg" The Roboflow API calls work fine, but now I get this response from the Fishial (second) API: 2024-03-30 10:48:09,166 - DEBUG - Starting new HTTPS connection (1): api.fishial.ai:443 2024-03-30 10:48:10,558 - DEBUG - https://api.fishial.ai:443 "GET /v1/recognition/image?q=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBBMWkyUEE9PSIsImV4cCI6bnVsbCwicHVyIjoiYmxvYl9pZCJ9fQ==--36e68766cd891eb0e57610e8fb84b76e205b639e HTTP/1.1" 500 89 INFO: 127.0.0.1:49829 - "POST /predict/ HTTP/1.1" 500 Internal Server Error I'm not sure where to look, or perhaps what to print out/log, in order to get more information. I'm not even sure if the error is on my side or coming from the API I'm calling (though the `500 89` end of the GET line at the end makes me think it's coming from the API I'm calling). Many thanks! **EDIT**: A request was made for more code. The function to process an image is just a series of calls to other functions. 
So I've included here only the code I use to call the second (Fishial) API: def recognize_fish(file_path, key_id=key_id, key_secret=key_secret, identify=False): if not os.path.isfile(file_path): err("Invalid picture file path.") for dep in DEPENDENCIES: try: __import__(dep) except ImportError: err(f"Unsatisfied dependency: {dep}") logging.info("Identifying picture metadata...") name = os.path.basename(file_path) mime = mimetypes.guess_type(file_path)[0] size = os.path.getsize(file_path) with open(file_path, "rb") as f: csum = base64.b64encode(hashlib.md5(f.read()).digest()).decode("utf-8") logging.info(f"\n file name: {name}") logging.info(f" MIME type: {mime}") logging.info(f" byte size: {size}") logging.info(f" checksum: {csum}\n") if identify: return if not key_id or not key_secret: err("Missing key ID or key secret.") logging.info("Obtaining auth token...") data = { "client_id": key_id, "client_secret": key_secret } response = requests.post("https://api-users.fishial.ai/v1/auth/token", json=data) auth_token = response.json()["access_token"] auth_header = f"Bearer {auth_token}" logging.info(f"Access token: {auth_token}") logging.info("Obtaining upload url...") data = { "blob": { "filename": name, "content_type": mime, "byte_size": size, "checksum": csum } } headers = { "Authorization": auth_header, "Content-Type": "application/json", "Accept": "application/json" } response = requests.post("https://api.fishial.ai/v1/recognition/upload", json=data, headers=headers) signed_id = response.json()["signed-id"] upload_url = response.json()["direct-upload"]["url"] content_disposition = response.json()["direct-upload"]["headers"]["Content-Disposition"] logging.info("Uploading picture to the cloud...") with open(file_path, "rb") as f: requests.put(upload_url, data=f, headers={ "Content-Disposition": content_disposition, "Content-MD5": csum, "Content-Type": "" }) logging.info("Requesting fish recognition...") response = 
requests.get(f"https://api.fishial.ai/v1/recognition/image?q={signed_id}", headers={"Authorization": auth_header}) fish_count = len(response.json()["results"]) logging.info(f"Fishial Recognition found {fish_count} fish(es) on the picture.") if fish_count == 0: return [] species_names = [] for i in range(fish_count): fish_data = extract_from_json(f"results[{i}]", response.json()) if fish_data and "species" in fish_data: logging.info(f"Fish {i + 1} is:") for j in range(len(fish_data["species"])): species_data = fish_data["species"][j] if "fishangler-data" in species_data and "metaTitleName" in species_data["fishangler-data"]: species_name = species_data["fishangler-data"]["metaTitleName"] accuracy = species_data["accuracy"] logging.info(f" - {species_name} [accuracy {accuracy}]") species_names.append(species_name) else: logging.error(" - Species name not found in the response.") else: logging.error(f"\nFish {i + 1}: Species data not found in the response.") return species_names _P.S. This feels like it's getting a little long. If putting this much code on Pastebin is more appropriate, I'm happy to edit._ [1]: https://roboflow.ai [2]: https://fishial.ai
Creating subclasses explicitly would probably work better in your case, as you have seen. Instead of creating a class factory function, define the common behaviour of all derived classes in the base class and make specifics depend on class variables defined by subclasses.

In an example which is similar to yours, this could look like:

```
class UInt:
    size: int  # to be defined by subclasses

    def __init__(self, value: int):
        if value >= 2 ** self.size:
            raise ValueError("Too big")
        self._value = value

    def __repr__(self):
        return f"{self.__class__.__name__}({self._value})"


class UInt12(UInt):
    size = 12


an_example_number = UInt12(508)
print(an_example_number)  # UInt12(508)
```
It seems you have the `logo` and `stickylogo` image fields in the `$translatable` array of your model. Please remove them from that array and retry.
I am facing an issue when trying to configure a Quarkus test pipeline on GitLab CI. I usually get the error:

    Caused by: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

I use Maven to execute the tests. Here is my `.gitlab-ci.yml`:

```
image: maven:latest

variables:
  MAVEN_OPTS: >-
    -Dhttps.protocols=TLSv1.2
    -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
    -Dorg.slf4j.simpleLogger.showDateTime=true
    -Djava.awt.headless=true
  MAVEN_CLI_OPTS: >-
    --batch-mode
    --errors
    --fail-at-end
    --show-version
    --no-transfer-progress

cache:
  paths:
    - .m2/repository

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

stages:
  - test

test:
  stage: test
  services:
    - name: docker:dind
      alias: localhost
      command: [ "--tls=false" ]
  variables:
    # Instruct Testcontainers to use the daemon of DinD, use port 2375 for non-tls connections.
    DOCKER_HOST: "tcp://docker:2375"
    POSTGRES_NETWORK_MODE: "host"
    DOCKER_DRIVER: overlay2
  script:
    - 'mvn test'
```

Also my `application.properties` file:

```
quarkus.http.root-path=/my-service/api

# Database Configuration
quarkus.datasource.db-kind=postgresql
quarkus.datasource.devservices.enabled=true
quarkus.datasource.devservices.db-name=my_db
quarkus.datasource.devservices.username=admin
quarkus.datasource.devservices.password=admin
quarkus.datasource.devservices.port=5432
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/my_db
quarkus.datasource.jdbc.max-size=16

# Configuration for Swagger
quarkus.smallrye-openapi.security-scheme=jwt
quarkus.smallrye-openapi.security-scheme-name=accessToken

# OIDC Configuration for Dev Profile
%dev.quarkus.oidc.auth-server-url=http://localhost:8088/realms/my-realm
%dev.quarkus.oidc.client-id=my-service
%dev.quarkus.oidc.credentials.secret=aa12LO1abcjI6khjklSTUAZUF0xj123W
%dev.quarkus.oidc.tls.verification=none

# SWAGGER
mp.openapi.extensions.smallrye.operationIdStrategy=PACKAGE_CLASS_METHOD
```

I tried many ways:

- Create a Java test class to set the host dynamically, implementing `QuarkusTestResourceLifecycleManager`
- Add `quarkus.datasource.devservices.container-network=host` into my `application.properties`
- Add `alias: localhost` to the `dind` service in `.gitlab-ci.yml`
- Hardcode the host of the `jdbc` URL inside `application.properties` to `docker`
- Declare a variable for the DB host and try to use it in `application.properties` as below:

  `.gitlab-ci.yml`:

  ```
  variable:
    TESTCONTAINERS_HOST_OVERRIDE: "host.docker.internal"
  ```

  `application.properties`:

  ```
  quarkus.datasource.jdbc.url=jdbc:postgresql://${TESTCONTAINERS_HOST_OVERRIDE}:5432/my_db
  ```

Please help me. Thank you so much for your effort.
I'm dealing with a little problem that I need to solve. I have an application in Django and I need a websocket, but I don't like Django Channels, so I created a socket server in Flask and I'm trying to connect Django to it by creating middleware that, when called, starts a process that sends the messages it has in the queue through the socket. But either the shared variable doesn't work or the process itself doesn't work properly. If you have any advice or a solution, please also write how to terminate the socket process when Django terminates.

This is my code:

```python
socket.py
---
import socketio
import json
from project.settings import SOCKET_CONFIG, MANAGER
from multiprocessing import Process

sio = socketio.Client()

class Socket:
    queue = MANAGER.list()

    def __init__(self, get_response):
        self.get_response = get_response
        thread = Process(target=Socket.process_loop, args=())
        thread.start()

    def __call__(self, request):
        return self.get_response(request)

    @staticmethod
    def process_loop():
        sio.connect('http://{}:{}'.format(SOCKET_CONFIG["host"], SOCKET_CONFIG["port"]))
        while True:
            try:
                import time
                time.sleep(1)
                data = Socket.queue.pop(0)
                sio.emit('message', data)
            except BaseException as e:
                print(e)

settings.py
---
import os, subprocess
from multiprocessing import Process, Manager

MANAGER = Manager()
SOCKET_CONFIG = {
    "host": "localhost",
    "port": 8001
}
...
```
Django socketio process
|python-3.x|django|websocket|
null
{"Voters":[{"Id":5389127,"DisplayName":"Gaël J"},{"Id":354577,"DisplayName":"Chris"},{"Id":874188,"DisplayName":"tripleee"}]}
I'm wondering if the SamuelSackey code has an error: in `lib/supabase/server-client` the code is not spreading the options. See lines 18 and 22 from the repo (https://github.com/SamuelSackey/nextjs-supabase-example/blob/main/src/lib/supabase/server-client.ts):

      set(name: string, value: string, options: CookieOptions) {
        if (component) return;
        cookies().set(name, value, options);
      },
      remove(name: string, options: CookieOptions) {
        if (component) return;
        cookies().set(name, "", options);
      },

The Supabase SSR docs have the following:

      get(name: string) {
        return cookieStore.get(name)?.value
      },
      set(name: string, value: string, options: CookieOptions) {
        cookieStore.set({ name, value, ...options })
      },
      remove(name: string, options: CookieOptions) {
        cookieStore.set({ name, value: '', ...options })
      },

For example, see the Supabase SSR docs: https://supabase.com/docs/guides/auth/server-side/creating-a-client?environment=route-handler

His middleware code also looks abbreviated; here are the Supabase docs for middleware: https://supabase.com/docs/guides/auth/server-side/creating-a-client?environment=middleware. Note the `get`, `set`, and `remove` code is missing from his `middleware.ts`.

When calling the SSR `createServerClient`, the third argument is the cookie handling (`cookies` and `cookieOptions`):

    declare function createServerClient<Database = any, SchemaName extends string & keyof Database = 'public' extends keyof Database ? 'public' : string & keyof Database, Schema extends GenericSchema = Database[SchemaName] extends GenericSchema ? Database[SchemaName] : any>(supabaseUrl: string, supabaseKey: string, options: SupabaseClientOptions<SchemaName> & {
        cookies: CookieMethods;
        cookieOptions?: CookieOptionsWithName;
    }): _supabase_supabase_js.SupabaseClient<Database, SchemaName, Schema>;
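To see concretely why the spread matters, here is a minimal sketch in plain JavaScript. The `cookieStore` object below is a made-up stand-in that just records what it is called with, not the real Next.js or Supabase API:

```javascript
// Hypothetical stand-in for the cookie store: it only records the
// argument it receives, so we can inspect the shape being passed.
const calls = [];
const cookieStore = { set: (arg) => calls.push(arg) };

const options = { path: "/", httpOnly: true, maxAge: 3600 };

// Spreading the options merges them into the single cookie object,
// which is the shape the Supabase SSR docs use:
cookieStore.set({ name: "sb-token", value: "abc", ...options });

console.log(calls[0]);
// → { name: 'sb-token', value: 'abc', path: '/', httpOnly: true, maxAge: 3600 }
```

If the options are not merged in (or passed through), attributes like `path`, `httpOnly`, and `maxAge` never reach the cookie that gets written.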
You can easily split this CSV using slicing (or the `.iloc` indexer). First read your CSV as a DataFrame:

    import pandas as pd
    my_csv = pd.read_csv('my_csv.csv')

Then create two variables where you define parts of the DataFrame as new DataFrames, e.g.:

    df1 = my_csv[:3]
    df2 = my_csv[5:]

If the columns are not read correctly, you need to assign column names separately, like:

    df1.columns = ['No. Device', 'Version', 'Readout date', 'Readout time', 'Value 1', 'Value 2', 'Value 3']

Hope it helps.
While the form is open in the Designer, go to menu Tools > Form Editor > Form Settings... In the new dialog, check the "ID-based" checkbox. When you rebuild the project, all `QCoreApplication::translate` calls in `retranslateUi` will be changed to `qtTrId`. Also, all parameters in those calls will be reset to empty strings. If you have already changed some texts to ID strings, those changes will disappear, so save them somewhere before you do all this.
I have been using Azure containers for my project, with different containers for the backend app and the frontend. I have created a Sidekiq process backed by a Redis cache server on Azure. When I trigger a task to run the Sidekiq process, the Sidekiq container terminates shortly after starting.
Why is my Azure Sidekiq process terminating right after starting?
|azure|sidekiq|
null
You can customize the [`hoverinfo`][1] property (like for `textinfo`, you don't have to rewrite `hovertemplate` entirely). The default value is `'all'`, which corresponds to the flaglist:

    'x+y+text+percent initial+percent previous+percent total'

You want to remove the `percent total` flag, e.g.:

```lang-py
fig = go.Figure(
    go.Funnel(
        y = ["Website visit", "Downloads", "Potential customers", "Requested price", "Finalized"],
        x = [39, 27.4, 20.6, 11, 2],
        textposition="auto",
        textinfo="value+percent initial+percent previous",
        hoverinfo='x+y+text+percent initial+percent previous',
        # opacity=0.65,
        marker={
            "color": [
                "#4F420A",
                "#73600F",
                "#947C13",
                "#E0BD1D",
                "#B59818",
                "#D9B61C",
            ],
            "line": {
                "width": [4, 3, 2, 2, 2, 1],
                "color": ["wheat", "wheat", "wheat", "wheat"],
            },
        },
        connector={"line": {"color": "#4F3809", "dash": "dot", "width": 3}},
    )
)
```

[1]: https://plotly.com/python/reference/funnel/#funnel-hoverinfo
# 2 search bars with dropdown suggestions, selectable by both mouse and arrow keys (TypeScript)

```css
li.cursor {
  background: yellow;
}

#Item\ 1,
#Item\ 2 {
  font-weight: bold;
  font-size: 40px;
}
```

```tsx
import { useState } from "preact/hooks";

const list1 = ['rabbits', 'raccoons', 'reindeer', 'red pandas', 'rhinoceroses', 'river otters', 'rattlesnakes', 'roosters'] as const
const list2 = ['jacaranda', 'jacarta', 'jack-o-lantern orange', 'jackpot', 'jade', 'jade green', 'jade rosin', 'jaffa'];

/** Cursor is null when mouse leaves */
type Cursor = null | number;

export default function SearchBar() {
  const [resultList1, setResultList1] = useState<null | (typeof list1[number])[]>(null);
  const [cursor1, setCursor1] = useState<Cursor>(0);
  const [selectedItem1, setSelectedItem1] = useState<null | typeof list1[number]>(null); // State to track selected item for list 1

  const [resultList2, setResultList2] = useState<null | (typeof list2[number])[]>(null);
  const [cursor2, setCursor2] = useState<null | number>(0);
  const [selectedItem2, setSelectedItem2] = useState<null | typeof list2[number]>(null); // State to track selected item for list 2

  /** activeList is used to determine whether the suggestion list should be popup or not */
  const [activeList, setActiveList] = useState<null | '1' | '2'>(null); // State to track active list

  function handleKeyDown(e: KeyboardEvent) {
    let cursor: number | null
    let resultList: (typeof list1[number])[] | (typeof list2[number])[] | null
    let setSelectedItem: typeof setSelectedItem1 | typeof setSelectedItem2
    let setCursor: typeof setCursor1 | typeof setCursor2
    switch (activeList) {
      case '1':
        cursor = cursor1
        resultList = resultList1
        setCursor = setCursor1
        setSelectedItem = setSelectedItem1
        break;
      case "2":
        cursor = cursor2
        resultList = resultList2
        setCursor = setCursor2
        setSelectedItem = setSelectedItem2
        break;
      default:
        return
    }
    if (!resultList) return

    if (e.key === 'ArrowDown') {
      const newCursor = Math.min(cursor! + 1, resultList.length - 1);
      setCursor(newCursor);
    } else if (e.key === 'ArrowUp') {
      const newCursor = Math.max(0, cursor! - 1);
      setCursor(newCursor);
    } else if (e.key === "Enter" && cursor) {
      setSelectedItem(resultList[cursor])
    }
  };

  function SuggestedList() {
    let cursor: number | null
    let resultList: (typeof list1[number])[] | (typeof list2[number])[] | null
    let setSelectedItem: typeof setSelectedItem1 | typeof setSelectedItem2
    let setCursor: typeof setCursor1 | typeof setCursor2
    switch (activeList) {
      case '1':
        cursor = cursor1
        resultList = resultList1
        setCursor = setCursor1
        setSelectedItem = setSelectedItem1
        break;
      case "2":
        cursor = cursor2
        resultList = resultList2
        setCursor = setCursor2
        setSelectedItem = setSelectedItem2
        break;
      default:
        return
    }
    const id = `Suggested list ${activeList}`
    return <ul id={id} className='active'>
      {resultList!.map(
        (item, index) => (
          <li key={index}
            className={cursor === index ? 'cursor' : null}
            onClick={() => { setSelectedItem(item); console.log(item); }}
            onMouseEnter={() => setCursor(index)}
            onMouseLeave={() => setCursor(null)}
          >{item}</li>
        )
      )}
    </ul>;
  }

  return (
    <>
      <div id='search-div-1' className="search-bar-container">
        <input
          type="text"
          placeholder={'Search list 1'}
          onInput={(e) => {
            setResultList1(list1.filter(item => item.includes((e.target as HTMLTextAreaElement).value)));
          }}
          onFocus={() => setActiveList('1')}
          onKeyDown={(e) => handleKeyDown(e)}
        />
        <br />
        {resultList1 && activeList === '1' ? SuggestedList() : null}
        Cursor: {cursor1}<br />
        Selected item: <span id="Item 1">{selectedItem1}</span><br />
      </div>

      <div id='search-div-2' className="search-bar-container">
        <input
          type="text"
          placeholder={'Search list 2'}
          onInput={(e) => {
            setResultList2(list2.filter(item => item.includes((e.target as HTMLTextAreaElement).value)));
          }}
          onFocus={() => setActiveList('2')}
          onKeyDown={(e) => handleKeyDown(e)}
        />
        <br />
        {resultList2 && activeList === '2' ? SuggestedList() : null}
        Cursor: {cursor2}<br />
        Selected item: <span id="Item 2">{selectedItem2}</span><br />
      </div>

      Active list: <strong>{activeList}</strong>
    </>
  );
}
```

![](https://i.imgur.com/GFpNcMo.gif)
I have a problem with ng-select (I am using Angular 10). I want to bind the value of the dropdown, but I get `undefined`. I also have problems with clearing the form (the clear function is not working on this field).

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    export class CollectionmanagementComponent implements OnInit {
      selectedConnection: any;

      onSubmit() {
        this.IsSubmitted = true;
        this.modelValue = 'modal';
        if (this.collectionManagementForm.invalid) {
          this.modelValue = '';
          return;
        }

        if (typeof (this.selectedCollectionsTypes) === 'undefined') {
          this.selectedCollectionsTypes = '';
        }

        let body = {
          ConnectionType: (this.selectedConnection.value) ? this.selectedConnection.value : ''
        };
        if (body.CompanyID) {
          this.accoutingService.updateCollectionsManagement(body, true).subscribe((res) => {
            let data: any = res;
            if (data.response != null && data.response.length > 0) {
              this.restForm();
              this.collectionManagementList = data.response;
              this.showNotificationOnSucess(data);
              this.getAllCollectionsManagement();
              this.ViewEditModalClose.nativeElement.click();
            } else {
              this.showNotification(data);
            }
          }, (err: any) => {
            this.showError(err);
          });
        } else {
          this.accoutingService.addCollectionsManagement(body, true).subscribe((res) => {
            let data: any = res;
            if (data.response !== null && data.responseCode === 200) {
              this.collectionManagementList = data.response;
              this.restForm();
              this.showNotificationOnSucess(data);
              this.getAllCollectionsManagement();
              this.ViewEditModalClose.nativeElement.click();
            } else {
              this.showNotification(data);
            }
          }, (err: any) => {
            this.showError(err);
          });
        }
      }

      restForm() {
        this.popUpTittle = 'Add';
        this.IsSubmitted = false;
        document.getElementById('btnUpdate').innerText = 'Add';
        this.collectionManagementForm.reset();
        this.clearForm();
      }

      clearForm() {
        this.collectionManagementForm = this.fb.group({
          ConnectionType: ['']
        });
      }
    }

<!-- language: lang-html -->

    <div class="form-group">
      <div class="theme-label">Connection Type</div>
      <div class="input-group">
        <ng-select [items]="selectedConnection"
                   [closeOnSelect]="true"
                   bindValue="id"
                   placeholder="Select"
                   bindLabel="name"
                   formControlName="ConnectionType">
          <ng-template let-items="items" let-clear="clear">
            <div class="ng-value" *ngFor="let item of items">
              <ng-option [items]="null" selected>{{item.name}}</ng-option>
            </div>
          </ng-template>
        </ng-select>
      </div>
    </div>

<!-- end snippet -->
Is it possible to represent -3/32 as a binary floating-point value using only 7 bits?
|floating-point|binary|
The below seems to provide a viable alternative version. Two parts:

1. Simplify the code
2. Discuss, but not resolve, the occasional occurrence of the specific error mentioned

The simplified code below seems to provide the expected response for the 'hotel' tag; and for the 'motel' tag only a future warning -- "osmnx/features.py:1030: FutureWarning: <class 'geopandas.array.GeometryArray'>._reduce will require a `keepdims` parameter in the future gdf.dropna(axis="columns", how="all", inplace=True)".

```Python
def fetch_tourism_features(tag):
    cities = ['Aarau', 'Acquarossa']
    tags = {"tourism": f'{tag}'}
    gdf = ox.features_from_place([{"city": city, "country": "Switzerland", "countrycodes": "ch"} for city in cities], tags)
    return gdf

hotels = fetch_tourism_features('hotel')
motels = fetch_tourism_features('motel')
```

1. Simplify the code: As shown in the OSMnx example notebook ['Download Any OSM Geospatial Features with OSMnx'](https://github.com/gboeing/osmnx-examples/blob/main/notebooks/16-download-osm-geospatial-features.ipynb), one can use `ox.features_from_place()` directly to fetch OSM features, as in its example:

```Python
# get everything tagged amenity,
# and everything tagged landuse = retail or commercial,
# and everything tagged highway = bus_stop
tags = {"amenity": True, "landuse": ["retail", "commercial"], "highway": "bus_stop"}
gdf = ox.features_from_place("Piedmont, California, USA", tags)
gdf.shape
```

2. As for the error message, it's listed in the OSMnx 1.9.1 'Internals Reference' [osmnx._errors module](https://osmnx.readthedocs.io/en/stable/internals-reference.html#osmnx-errors-module) -- and does occur in some queries (just a few ad hoc ones tested; not enough to look for any pattern):

```
exception osmnx._errors.InsufficientResponseError

    Exception for empty or too few results in server response.
```

Note: Initial inspiration for using dictionary comprehensions directly in OSMnx queries: https://stackoverflow.com/a/71278021

Anyway, hope this is of some help.
JavaScript's `delete` keyword is an interesting one; my lint complains when I use it. When you call `delete new_state[curr_counter]` inside the `set_visuals` callback, you're directly modifying the existing object in the state. React expects state updates to be replacements with entirely new objects. Since you're mutating the same object in `set_visuals`, when you delete the element at `curr_counter`, all references to that object, including those in `visual_ref.current`, are also affected. Because the entry is removed from that one shared object, `visual_ref.current[lastIndex]` ends up pointing at an element that no longer exists.

Instead of deleting the element directly, create a new object without the element you want to remove; the `...` spread operator helps here. If the `visuals` state is an array, you can derive a new array directly with the `filter` method.

I hope this helps.
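A minimal sketch of the difference, in plain JavaScript outside React (the object shapes here are made up for illustration, not taken from the original component):

```javascript
// Shared reference: "state" and a ref hold the SAME object.
const state = { 1: "a", 2: "b", 3: "c" };
const ref = { current: state };

// Mutating with delete removes the entry everywhere, because both
// names point at one object -- this is what breaks the ref.
delete state[2];
console.log(ref.current[2]); // → undefined

// Immutable update: build a new object without the key instead.
const prev = { 1: "a", 2: "b", 3: "c" };
const { 2: _removed, ...next } = prev; // rest pattern omits key 2
console.log(prev[2]); // → "b"  (original object untouched)
console.log(next[2]); // → undefined

// If the state is an array, filter produces a new array the same way:
const arr = ["a", "b", "c"];
const nextArr = arr.filter((_, i) => i !== 1);
console.log(nextArr); // → [ 'a', 'c' ]
```

In a React setter this becomes `set_visuals(prev => { const { [curr_counter]: _removed, ...next } = prev; return next; })`, which leaves the previous object (and anything holding a reference to it) intact.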
I'm reading in serial data from a serial USB device with Python. I want to compare the received data with a predefined string to see if it matches. Reading the data works fine; the problem is this if-statement:

```
if serialInst.in_waiting:
    startPacket = serialInst.readline()
    z = "iohioowwcncuewqrte"
    k = startPacket.decode('utf').rstrip('\n')
    if (k == z):
        print("right passkey")
    else:
        print("Error")
        exit()
```

Thank you in advance.
I am creating a memoization example with a function that adds up / averages the elements of an array and compares it with the cached ones to retrieve them in case they are already stored. In addition, I want to store a result only if it differs considerably from the cached ones (passes a threshold, e.g. 5000 below). I created an example using a decorator to do so. The results using the decorator are slightly slower than without the memoization, which is not OK. Also, is the logic of the decorator correct? My code is attached below:

    import time
    import random
    from collections import OrderedDict

    def memoize(f):
        cache = {}
        def g(*args):
            if args[1] == 'avg':
                sum_key_arr = sum(args[0]) / len(list(args[0]))
            elif args[1] == 'sum':
                sum_key_arr = sum(args[0])
            print(sum_key_arr)
            if sum_key_arr not in cache:
                # key in dict cannot be an array so I use the sum of the array as the key
                for key, value in OrderedDict(sorted(cache.items())).items():
                    # threshold is great here so that all values are approximated!
                    if abs(sum_key_arr - key) <= 5000:
                        # print('approximated')
                        return cache[key]
                    else:
                        # print('not approximated')
                        cache[sum_key_arr] = f(args[0], args[1])
            return cache[sum_key_arr]
        return g

    #@memoize
    def aggregate(dict_list_arr, operation):
        if operation == 'avg':
            return sum(dict_list_arr) / len(list(dict_list_arr))
        if operation == 'sum':
            return sum(dict_list_arr)
        return None

    t = time.time()
    for i in range(200, 150000):
        res = aggregate(list(range(i)), 'avg')
    elapsed = time.time() - t
    print(res)
    print(elapsed)
Memoization yields slower results
|python|dictionary|python-decorators|memoization|
I believe `specific_value` is not matching while iterating the child elements; hence you are not able to click the color. Try to print the `child.text` data and verify the specific value manually. Another solution: use `child.get_attribute('innerText')` to get the child's text data.
# Material-UI Data-Grid v7

    import { styled } from '@mui/system';
    import { DataGridPro } from '@mui/x-data-grid-pro';

    function DataGridGeneric(props) {
      return (
        <CustomDataGridPro
          disableColumnMenu
          {...props}
        />
      );
    }

    export default DataGridGeneric;

    const CustomDataGridPro = styled(DataGridPro)(({ theme }) => ({
      '& .MuiDataGrid-columnSeparator': {
        display: 'none'
      }
    }));

In v7 this works for me.
I have a geometric dataset of point features associated with values. Out of ~16000 values, about 100-200 have NaNs. I'd like to populate those with the average of the values from the 5 nearest neighbors, assuming at least 1 of them is not also associated with a NaN. The dataset looks something like:

```
    FID      PPM_P                    geometry
0     0        NaN  POINT (-89.79635 35.75644)
1     1        NaN  POINT (-89.79632 35.75644)
2     2        NaN  POINT (-89.79629 35.75644)
3     3        NaN  POINT (-89.79625 35.75644)
4     4        NaN  POINT (-89.79622 35.75644)
5     5        NaN  POINT (-89.79619 35.75644)
6     6        NaN  POINT (-89.79616 35.75644)
7     7        NaN  POINT (-89.79612 35.75645)
8     8        NaN  POINT (-89.79639 35.75641)
9     9  40.823028  POINT (-89.79635 35.75641)
10   10  40.040865  POINT (-89.79632 35.75641)
11   11  36.214436  POINT (-89.79629 35.75641)
12   12  34.919571  POINT (-89.79625 35.75642)
13   13        NaN  POINT (-89.79622 35.75642)
14   14        NaN  POINT (-89.79619 35.75642)
15   15        NaN  POINT (-89.79615 35.75642)
16   16        NaN  POINT (-89.79612 35.75642)
17   17        NaN  POINT (-89.79609 35.75642)
18   18        NaN  POINT (-89.79606 35.75642)
19   19        NaN  POINT (-89.79642 35.75638)
```

It just so happens that many of the NaNs are near the beginning of the dataset. I found the nearest neighbor weight matrix using:

```
w_knn = KNN.from_dataframe(predictions_gdf, k=5)
```

Now I'm not sure what to do. Can someone give me a hand please?
Cannot connect to Postgres Database when running Quarkus Tests with Gitlab ci
|database|docker|continuous-integration|gitlab-ci|quarkus|
null
I'm trying to get the id of the last element in my JSON file through an API, to be able to add an element without a random id. I want the ids to be like 1, 2, 3, 4, etc., but I can't figure out how (I want to do that because I can't use an id with letters). Thank you in advance.
Trying to get the id of the last element in my json file through an api
|javascript|json|api|
null
{"Voters":[{"Id":269970,"DisplayName":"esqew"},{"Id":286934,"DisplayName":"Progman"},{"Id":2494754,"DisplayName":"NVRM"}],"SiteSpecificCloseReasonIds":[]}
{"Voters":[{"Id":5648954,"DisplayName":"Nick Parsons"},{"Id":1599751,"DisplayName":"PeterJ"},{"Id":16540390,"DisplayName":"jabaa"}],"SiteSpecificCloseReasonIds":[19]}
I am referring to the question at https://stackoverflow.com/q/64034813/23914212, as my VS Code does not connect to the host. One suggestion there is to delete the `bin` folder in `.vscode-server`. Unfortunately, I am unable to find that folder anywhere on my computer. Is there any specific file within the VS Code window itself where I could perhaps find the path to the folder?

I tried

    Get-ChildItem -Recurse | Where-Object { $_.Name -eq ".vscode-server" }

in the VS Code terminal and

    dir /s /b .vscode-server

in my command prompt; both gave me nothing.
I cannot find vscode-server for VS code
|vscode-server|
null
Your diagnosis of the first result is correct. The second is a result of the behaviour of aggregating functions: `collect()` returns one row even when there are no incoming rows. See [this article][1] for more information. [1]: https://neo4j.com/developer/kb/understanding-aggregations-on-zero-rows/
I have to write `sudo` before running any npm command in my VS Code terminal on a MacBook Air (OS: macOS Sonoma 14.4). Example:

    sudo npm install
    sudo npx create-react-app my-app

If I don't use `sudo`, the command will not execute, while other users can do this without writing `sudo`. FYI: I have Node.js v20.10.0 and npm version 10.2.4 installed globally on my system.
How to fix npm errors without writing sudo on macOS?
|node.js|npm|terminal|node-modules|sudo|
I have a small project that only has:

1) a simple REST API implemented with `com.sun.net.httpserver.HttpServer`
2) Hibernate + JPA

As you know, Hibernate does not support nested transactions, and I won't use frameworks like `Spring` or `jakarta/javaEE`.

I want to know: is it a good idea to pass the `EntityManager` as a parameter to CRUD-layer classes?

```
public class MyCRUD {
    public static void save(FirstEntity fe, EntityManager em) {
        em.persist(fe);
    }

    public static void save(SecondEntity se, EntityManager em) {
        em.persist(se);
    }
}

public class MyService {
    public static void doSomething(....) {
        // Business logic
        // Insert into db
        // ...
        EntityManagerFactory emf = Persistence.createEntityManagerFactory(UNIT_NAME);
        try (EntityManager em = emf.createEntityManager()) {
            EntityTransaction tx = em.getTransaction();
            tx.begin();
            MyCRUD.save(fe, em);
            MyCRUD.save(se, em);
            tx.commit();
        }
    }
}
```

Is there any way to use the `EntityManager` inside the CRUD layer (without passing it as a parameter) and handle transactions in the service layer?
How do I do a proper if statement with data from a serial device in Python?
|python|pyserial|
I've spent the last few days on this, but I can't figure out how to calculate account returns using `AcctReturns` and `PortfReturns` in blotter in a way that the results make sense. This is the code:

```
library(xts)
library(blotter)
library(quantmod)
library(quantstrat)
library(PerformanceAnalytics)

rm(list = ls())
.blotter <- new.env()
.strategy <- new.env()

inicio_account <- "2021-02-10"
inicio_portfolio <- "2021-02-10"
nome_account <- "Igor_Account"
nome_portfolio <- "Igor_Portfolio"
tickers <- "NVDA"

getSymbols("NVDA", from = inicio_account, to = "2024-03-26", src = "yahoo", adjust = TRUE)

currency("USD")
stock("NVDA", currency = "USD", multiplier = 1)

###############################################################
initPortf(nome_portfolio, symbols = tickers, initPosQty = 0, initDate = inicio_portfolio)
initAcct(nome_account, portfolios = nome_portfolio, initDate = inicio_account, initEq = 152.1574)

###############################################################
addTxn(nome_portfolio, "NVDA", "2021-02-11", TxnQty = 1, TxnPrice = 152.1574, verbose = TRUE)
#addAcctTxn(nome_account, as.Date("2023-08-07"), TxnType = "Additions", Amount = 100)
#addAcctTxn(nome_account, as.Date("2023-08-08"), TxnType = "Withdrawals", Amount = -100)
#addTxn(nome_portfolio, "NVDA", "2024-03-25", TxnQty = -1, TxnPrice = 950.0200, verbose = TRUE)

###############################################################
# Update everything
updatePortf(nome_portfolio)
updateAcct(nome_account)
updateEndEq(nome_account)

# Get the returns
p <- PortfReturns(Account = nome_account, Portfolio = nome_portfolio)
a <- AcctReturns(Account = nome_account)

# Calculate regular and log returns from stock data
t <- NVDA[-1]
ta <- t[, 6]
ativo <- Return.calculate(ta, method = "discrete")
ativoret <- Return.cumulative(ativo, geometric = FALSE)
ativorets <- ativoret[1, 1]
ativolog <- Return.calculate(ta, method = "discrete")
ativologret <- Return.cumulative(ativolog, geometric = TRUE)
ativologrets <- ativologret[1, 1]

# Calculate cumulative portfolio and account returns
a <- a[-1]
pret <- Return.cumulative(p, geometric = FALSE)
prets <- pret[1, 1]
pret_log <- Return.cumulative(p, geometric = TRUE)
prets_log <- pret_log[1, 1]
aret <- Return.cumulative(a, geometric = FALSE)
arets <- aret[1, 1]
aret_log <- Return.cumulative(a, geometric = TRUE)
arets_log <- aret_log[1, 1]

# Print the results that we have
#print(paste0("Stock Returns: ", round(ativorets, 5) * 100, "%"))
print(paste0("Stock Log Returns: ", round(ativologrets, 5) * 100, "% <- Buy at 152.1574, Sell at 950.02, 524.367% is OK!"))
print(paste0("Portfolio Returns: ", round(prets, 5) * 100, "% <- Also OK!"))
#print(paste0("Portfolio Log Returns: ", round(prets_log, 5) * 100, "% <- ??"))
print(paste0("Account Returns: ", round(arets, 5) * 100, "% <- 586.069%? Why is it not 524.366%?"))
print(paste0("Account Log Returns: ", round(arets_log, 5) * 100, "% <- 4.686%? What?"))
```

[Screenshot of the printed output](https://i.stack.imgur.com/sH8rt.png)

I've been working on a portfolio tracking tool to monitor returns for my clients' portfolios/accounts. I started with a simple model focusing on a single-stock portfolio, to verify that the results align with my manual calculations. This first step is crucial, as I plan to gradually refine the tool to accurately track returns for both stock and futures trading within the same account.

However, I've run into an issue: the calculated portfolio returns accurately reflect the single-stock returns, but the account returns deviate significantly. In theory, for a single-stock portfolio, the stock returns, account returns, and portfolio returns should be identical, yet they aren't aligning as expected. If the account returns are supposed to be combined with `PortfReturns` to get the correct results, I'm unsure how to proceed.

Despite numerous attempts, including adjusting dates, manual recalculations, trying different stocks, and exploring various return types, the issue persists. If anyone could shed some light on this, that would be very helpful.
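As a sanity check on my own arithmetic (not blotter-specific, and the numbers below are illustrative, not my NVDA data): summing per-period simple returns (`geometric = FALSE` in `Return.cumulative`) and compounding them (`geometric = TRUE`) can legitimately diverge, which may account for part of the gap I'm seeing.

```python
# Illustrative return series: the same per-period returns give different
# cumulative figures depending on whether they are summed or compounded.
returns = [0.10, -0.05, 0.20, 0.15]

arithmetic = sum(returns)        # like Return.cumulative(..., geometric = FALSE)

geometric = 1.0
for r in returns:
    geometric *= 1.0 + r         # like Return.cumulative(..., geometric = TRUE)
geometric -= 1.0

print(f"summed:     {arithmetic:.4f}")   # 0.4000
print(f"compounded: {geometric:.4f}")    # 0.4421
```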
Problem calculating account returns (AcctReturns) in Blotter