I'm trying to make a slider that displays 4 horizontal images at a time, where each image is hyperlinked. Adding hyperlinks to it breaks the code. I added "a" selectors to the CSS and expected it to continue working, but it made the slides vertical and displayed all of them instead of just 4 at a time. Could there be a problem with the JS? I've added the CSS portion below because I think that is the problem. The full code is in the jsfiddle link below. Essentially I'm trying to make something that appears similar to the Netflix title preview carousel. Here is my jsfiddle link: https://jsfiddle.net/Kronos11/s9uqjy6p/ ``` css .container a { display: flex; justify-content: center; overflow: hidden; } .slider a { --items-per-screen: 4; --slider-index: 0; display: flex; flex-grow: 1; margin: 0 var(--img-gap); transform: translateX(calc(var(--slider-index) * -100%)); transition: transform 250ms ease-in-out; } .slider > a img { flex: 0 0 calc(100% / var(--items-per-screen)); max-width: calc(100% / var(--items-per-screen)); aspect-ratio: 16 / 9; padding: var(--img-gap); border-radius: 1rem; } ```
Adding href tag to image breaks carousel?
|java|html|css|wordpress|slider|
null
As mentioned by [Santiago Squarzon](https://stackoverflow.com/users/15339544/santiago-squarzon) in his helpful answer, object graphs (as returned by cmdlets such as `ConvertFrom-Json`) can be quite difficult to handle. That's why I have spent the last few months creating some [**Object Graph Tools**](https://github.com/iRon7/ObjectGraphTools) that might ease some common use cases such as this one. To install the `ObjectGraphTools` module: Install-Module -Name ObjectGraphTools Underneath the cmdlets is a [`[PSNode]`](https://github.com/iRon7/ObjectGraphTools/blob/main/Docs/ObjectParser.md) class that gives you access to the nodes in the object graph. E.g. to retrieve all the leaf nodes with the name "`name`": $Object = $JsonData | ConvertFrom-Json $NameNodes = $Object | Get-ChildNode -Recurse -Include 'Name' -Leaf $NameNodes PathName Name Depth Value -------- ---- ----- ----- .Team1.'John Smith'.employees[0].name name 5 John Doe .Team1.'John Smith'.employees[1].name name 5 Jane Vincent .Team1.'Jane Smith'.employees[0].name name 5 John Bylaw .Team1.'Jane Smith'.employees[1].name name 5 Jane Tormel .Team2.'Bob Smith'.employees[0].name name 5 Bob Doe .Team2.'Bob Smith'.employees[1].name name 5 Margareth Smith .Team2.'Mary Smith'.employees[0].name name 5 Henry Bylaw .Team2.'Mary Smith'.employees[1].name name 5 Eric Tormel The `PathName` property holds the path to the specific property (e.g.: `$Object.Team1.'John Smith'.employees`) in the object graph. * Type `$NameNodes | Get-Member` to show more members of the `[PSNode]` class. * For help on cmdlets such as [`Get-ChildNode`](https://github.com/iRon7/ObjectGraphTools/blob/main/Docs/Get-ChildNode.md), type `Get-ChildNode -?` or refer to the [online documents](https://github.com/iRon7/ObjectGraphTools/tree/main/Docs). 
To get the specific node from `$NameNodes` whose value is 'Henry Bylaw': $HenryBylawNode = $NameNodes | Where-Object Value -eq 'Henry Bylaw' $HenryBylawNode PathName Name Depth Value -------- ---- ----- ----- .Team2.'Mary Smith'.employees[0].name name 5 Henry Bylaw To get the `Position`, `Manager` and `Team`: $HenryBylawNode.ParentNode.GetChildNode('Position').Value Clerk $HenryBylawNode.ParentNode.ParentNode.ParentNode.Name Mary Smith $HenryBylawNode.ParentNode.ParentNode.ParentNode.ParentNode.Name Team2 Putting it together: $JsonData | ConvertFrom-Json | Get-ChildNode -Recurse -Include 'Name' -Leaf | Where-Object Value -eq 'Henry Bylaw' | ForEach-Object { [pscustomobject]@{ Employee = $_.Value Position = $_.ParentNode.GetChildNode('Position').Value Team = $_.ParentNode.ParentNode.ParentNode.ParentNode.Name Manager = $_.ParentNode.ParentNode.ParentNode.Name } } Employee Position Team Manager -------- -------- ---- ------- Henry Bylaw Clerk Team2 Mary Smith --- **Update 2024-03-08** Based on the newly published [ObjectGraphTools](https://github.com/iRon7/ObjectGraphTools) version that includes a full [Extended Dot Notation (Xdn)](https://github.com/iRon7/ObjectGraphTools/blob/main/Docs/Xdn.md) implementation, you might use the following simplified syntax: $Name = 'Eric Tormel' $JsonData | ConvertFrom-Json | Get-Node ~Name="$Name" | ForEach-Object { [PSCustomObject]@{ Employee = $_.Value Position = $_.GetNode('....Position').Value Team = $_.GetNode('.....').Name Manager = $_.GetNode('....').Name } }
You are not using the && in the right place. If &J=1 and you want the value of YY1, then you should do: %let year_x = &&yy&j; The first pass will convert && into & and &J into 1, resulting in &yy1, which the second pass will convert to the value of YY1. But you are making it much too hard. If the goal is to select by YEAR and MONTH, then construct the inner macro in that way. %MACRO MATS(year, month, NUM); PROC SQL; CREATE TABLE TEST&num. AS SELECT PRODUCT_EXPIRY_DATE , COUNT(*) FROM DATA.AGGREGATE_ACCOUNTS_&year.&month. WHERE YEAR(PRODUCT_EXPIRY_DATE) = &year AND MONTH(PRODUCT_EXPIRY_DATE) = &month. ; QUIT; %MEND; Then you can call the macro once for each value of YYMM: data _null_; set USE_DATES; call execute(cats('%nrstr(%MATS)(',yy,',',mm,',',_n_,')')); run;
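The two-pass resolution described above can be illustrated with a small toy rescanner (written in Python purely for illustration; `resolve_once`, `resolve` and the macro values are made up for this example, and real SAS tokenization is more involved than this):

```python
import re

def resolve_once(text, macros):
    """One scanner pass: '&&' collapses to a single '&', and each '&name'
    whose name is a defined macro variable is replaced by its value."""
    out, i = [], 0
    while i < len(text):
        if text.startswith("&&", i):
            out.append("&")
            i += 2
        elif text[i] == "&":
            m = re.match(r"\w+", text[i + 1:])
            name = m.group(0).lower() if m else ""
            if name in macros:
                out.append(macros[name])
                i += 1 + len(name)
            else:
                out.append(text[i])
                i += 1
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

def resolve(text, macros):
    """Rescan until nothing changes, like the macro processor
    re-resolving &yy1 on the second pass."""
    prev = None
    while text != prev:
        prev, text = text, resolve_once(text, macros)
    return text

macros = {"j": "1", "yy1": "2019"}     # hypothetical values
print(resolve_once("&&yy&j", macros))  # first pass  -> &yy1
print(resolve("&&yy&j", macros))       # second pass -> 2019
```

The first pass turns `&&yy&j` into `&yy1`; only the rescan produces the value of YY1, which is why the double ampersand matters.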
My solution: try { presenter = new NonioApplicationPresenterImpl(this); masterHelper = presenter.getDevOpenHelper(); sqLiteDatabase = presenter.getWritableDatabase(); daoMaster = presenter.getDaoMaster(); daoSession = presenter.getNewSession(); } catch (SQLiteException sqLiteException) { deleteDatabase(NonioApplicationPresenter.DATABASE_NAME); }
I can pass a template parameter to an object's member function, but not to the object itself. Is this possible at all? #include <iostream> #include <string> using namespace std; class Logger { public: Logger& operator()(int t) { cerr << "(tag " << t << ") "; return *this; } Logger& operator<<(const string &s) { cerr << s << endl; return *this; } template<int t> Logger& Tag() { cerr << "(tag " << t << ") "; return *this; } Logger() {} template<int t> Logger() { cerr << "(tag " << t << ") "; } }; int main() { { Logger log; log(2) << "World"; log(3) << "World"; log.Tag<2>() << "Template 1"; log<2> << "Template 2"; // <-- error } } This is the error message (from GCC 13.2): error: invalid operands to binary expression ('Logger' and 'int') log<2> << "Template 2"; ~~~^~
Yes, you can [add scopes to `has_many` associations](https://guides.rubyonrails.org/association_basics.html#scopes-for-has-many). But you have to add the class name to the association too, because Ruby on Rails is no longer able to guess it from the name. class Playlist has_many :tracks has_many :five_star_tracks, -> { where(rating: 5) }, class_name: 'Track' has_many :long_tracks, -> { where('duration_seconds > ?', 600) }, class_name: 'Track' end Or use [`with_options`](https://api.rubyonrails.org/classes/Object.html#method-i-with_options), which might increase readability when you define longer lists of specialized associations: class Playlist has_many :tracks with_options class_name: 'Track' do has_many :five_star_tracks, -> { where(rating: 5) } has_many :long_tracks, -> { where('duration_seconds > ?', 600) } end end
Plotting categorical covariate against occupancy using unmarked package
|ggplot2|boxplot|categorical-data|unmarked-package|
null
Not a reply intended for the OP, but an alternative method for anyone else looking into this topic... You CAN use json_decode() on a file of ANY size with next to no memory use. Yepp, the best of both worlds. I tried several solutions, such as JSON Machine and json_decode() as designed: some methods were fast by digesting the entire file at once but crashed on memory, while others completed but were painfully slow. My solution is to break the JSON file apart into smaller sections and process each with json_decode(). I did this by setting the head and the end of the JSON file to variables (or constants), then concatenating head + body excerpt + end and processing each batch separately, where the body excerpt was 200-400 records but can be anything the system can handle. I am sure some people will have something negative to say about this, but in essence it is the same as manually making many small JSON files and processing them individually. This method simply does it for you, relatively fast, and can handle a file of literally any size. My sample file had 1,177,437 records (3.8 GB) and involved several operations to prepare the data, such as many coordinate conversions, string manipulations, SQL queries to retrieve additional data to be included, and gzdeflate(). It created SQL statements that were queried and completed in 37 min with no errors, averaging 530 SQL records created per second. The table ended up being 5.2 GB when all was said and done. If you know that the file(s) will be formatted 100% correctly, this can be sped up by reading an entire line as opposed to 1 character at a time. I opted for 1 character at a time because on occasion I get GeoJSON files with no line breaks, and I designed it for maximum compatibility first, speed second. Tips: I found that preg_match() worked well to extract the head of the file, while simply looking for an equal quantity of opening and closing curly brackets within a string indicated a complete record. 
The end of the file was a simple "\n]\n}\n" that I hard coded because it is common to all files.
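As a language-neutral sketch of the same idea (in Python for brevity; `iter_records` is a hypothetical helper, it yields one record at a time rather than batching 200-400, and like the approach above it naively assumes '[', '{' and '}' do not appear inside string values of the head):

```python
import json
from io import StringIO

def iter_records(fp):
    # Skip the "head" of the file up to the '[' that opens the record
    # array (the answer above extracts this head with preg_match()).
    while True:
        ch = fp.read(1)
        if not ch or ch == '[':
            break
    # Emit each balanced {...} group: an equal count of opening and
    # closing curly brackets marks a complete record, as in the tips above.
    depth = 0
    buf = []
    while True:
        ch = fp.read(1)
        if not ch:
            break
        if ch == '{':
            depth += 1
        if depth > 0:
            buf.append(ch)
        if ch == '}':
            depth -= 1
            if depth == 0:
                # Decode just this one record; memory use stays tiny.
                yield json.loads(''.join(buf))
                buf = []

# Demo on an in-memory "file"; a real use would pass an open file handle.
doc = StringIO('{"type":"FeatureCollection","features":'
               '[{"id":1,"geom":{"x":0}},{"id":2,"geom":{"x":1}}]}')
for record in iter_records(doc):
    print(record["id"])
```

Reading one character at a time keeps it compatible with files that have no line breaks, exactly as described above.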
TLDR: Use `@WebMvcTest` Your test is all wrong. There are 2 `UsersController` instances in your test. - One is created via SpringBootTest which brings up entire application context - controllers, services etc. Some beans are overriden via `@MockBean`. This controller uses your mock service. - The second one is created by Mockito and `@InjectMocks`. `@MockBean` are not injected into this service. There are also 2 `MockMvc` instances: - One created via `@AutoConfigureMockMvc` and injected via `@Autowired` - Second created manually in your test method In your test method you are using manually created `MockMvc` which makes requests to `UsersController` created by Mockito, which is not properly initialized - and you get NPE. **Solution** Use only one controller and one mockMvc. However you can improve the test even further. You don't need entire spring context as: - you are trying to test web layer of your app - you are testing only one controller - the services it uses are mocks This is an ideal use case for `@WebMvcTest`. - Replace `@SpringBootTest` with `@WebMvcTest` - `@AutoConfigureMockMvc` is imported via `@WebMvcTest`, so remove it as well - Remove `@ExtendWith(MockitoExtension.class)` - Do not create your own `MockMvc` See: - [Testing the Web Layer](https://spring.io/guides/gs/testing-web) in Spring Docs - [8. Unit Testing With @WebMvcTest](https://www.baeldung.com/spring-boot-testing#unit-testing-with-webmvctest) in Baeldung tutorials
SSL error when redirecting from one lightsail subdomain to lightsail subdomain on different account
|ssl|ssl-certificate|
null
When running Expo Go with my React Native project (which I've been working on for some months now with no issues like this before), everything works as it should until I render a page that fetches some information from my DynamoDB database. Expo Go crashes and there is no error message anywhere. ``` import { API } from "aws-amplify"; import { Alert } from "react-native"; import { getItem, getUser } from "../graphql/queries"; import { useAuth } from "../hooks/useAuth"; export const fetchItem = async (itemId) => { const auth = useAuth() try { // ######################################################### const response = await API.graphql({ query: getItem, variables: { id: itemId }, authMode: "AMAZON_COGNITO_USER_POOLS", }); console.log(response) // ######### Inversely comment above and below ############# // const response = await API.graphql({ // query: getUser, // variables: { // id: auth.dynamoUser.id // }, // authMode: "AMAZON_COGNITO_USER_POOLS", // }); // console.log(response) // ######################################################### // Note - breaks when trying to getItem but works when we use getUser. // expo crashes with no error codes -_- } catch (err) { Alert.alert("Error getting item information:", err); } }; ``` Above you can see I'm trying to fetch data for a specific item in my Items table in DynamoDB. I've narrowed the problem down to the sectioned-out code, where the uncommented portion is what causes my app to crash, but the commented portion works. I tested my GraphQL queries in AppSync and both of these query functions work fine. I have another table called Events, and when I also try to getEvent, this also breaks. But again, getUser (which is also a table in DynamoDB) works fine. 
Below is my versions: ``` "dependencies": { "@aws-amplify/cli": "^12.8.2", "@aws-amplify/react-native": "^1.0.3", "@aws-amplify/rtn-web-browser": "^1.0.3", "@gorhom/bottom-sheet": "^4.6.0", "@react-native-async-storage/async-storage": "1.18.2", "@react-native-community/masked-view": "^0.1.11", "@react-native-community/netinfo": "9.3.10", "@react-native-community/slider": "4.4.2", "@react-navigation/bottom-tabs": "^6.5.11", "@react-navigation/material-top-tabs": "^6.6.5", "@react-navigation/native": "^6.1.9", "@react-navigation/stack": "^6.3.20", "aws-amplify": "5.3.12", "expo": "~49.0.15", "expo-font": "~11.4.0", "expo-haptics": "12.4.0", "expo-linear-gradient": "~12.3.0", "expo-modules-autolinking": "^1.5.1", "expo-status-bar": "~1.6.0", "react": "18.2.0", "react-native": "0.72.10", "react-native-gesture-handler": "~2.12.0", "react-native-pager-view": "6.2.0", "react-native-reanimated": "3.3.0", "react-native-safe-area-context": "4.6.3", "react-native-screens": "~3.22.0", "react-native-svg": "13.9.0", "react-native-tab-view": "^3.5.2" }, "devDependencies": { "@babel/core": "^7.20.0", "@babel/plugin-proposal-nullish-coalescing-operator": "^7.0.0-0", "@babel/plugin-proposal-optional-chaining": "^7.0.0-0", "@babel/plugin-transform-arrow-functions": "^7.0.0-0", "@babel/plugin-transform-shorthand-properties": "^7.0.0-0", "@babel/plugin-transform-template-literals": "^7.0.0-0", "@babel/preset-env": "^7.1.6", "react-native-svg-transformer": "^1.3.0" },```
We have built several solutions for clients based on Azure Functions and Service Bus to move data between systems. Each solution hosts many (100+) individual functions triggering on individual topics from many different systems. Although some solutions are older, the majority are .NET 8 isolated functions, and any new solution should be based on that. Lately we have experienced issues with a system that is unable to process the amount of data, and we therefore need to throttle requests. For throttling we are debating a few options, as outlined below. I would like input on these options as well as suggestions to handle it differently. 1. Have trigger functions write to a new queue and build an entirely new function app with the specific purpose of throttling to one specific system. Scale-out and max concurrency settings can be used to achieve the throttling level desired. - This was my initial idea, which I still rather like. My biggest concerns are double queue/function processing and a single queue for a lot of messages, which may become problematic. 2. Same as 1, but use auto-forwarding of the Service Bus. - Seems to be a better version of 1, but without being able to manipulate messages before 'fanning in'. 3. Build a custom Azure Function trigger that consults a distributed lock or similar before fetching messages from the queue. - This would be great, but I would really prefer to leverage the existing trigger already built by Microsoft rather than implement my own. If I could add to it, that would be great. 4. Use a distributed lock after receiving messages from the Service Bus and throw them back on the queue if the lock is unachievable. - I really don't like this one. It spends lots of resources (and money, due to consumption/premium plans) to do nothing. As I said, these are the solutions we have been talking about, but I would be very happy to be presented with better (simpler/easier) solutions for throttling.
Throttling consumption of Service Bus messages with Azure Functions
|c#|azure-functions|azureservicebus|throttling|
null
I have used the TinyMCE editor CDN in my Laravel application, so I want to know: where are the images uploaded in TinyMCE stored? file_picker_types: 'file image media', file_picker_callback: function (cb, value, meta) { var input = document.createElement('input'); input.setAttribute('type', 'file'); input.setAttribute('accept', 'image/*,audio/*,video/*'); input.onchange = function () { var file = this.files[0]; var reader = new FileReader(); reader.onload = function () { var id = 'blobid' + (new Date()).getTime(); var blobCache = tinymce.activeEditor.editorUpload.blobCache; var base64 = reader.result.split(',')[1]; var blobInfo = blobCache.create(id, file, base64); blobCache.add(blobInfo); cb(blobInfo.blobUri(), { title: file.name }); }; reader.readAsDataURL(file); }; input.click(); },
Where are the images uploaded in TinyMCE stored?
Expo Go crashing with no error message when using Amplify GraphQL to get an item
|graphql|expo|aws-amplify|expo-go|
null
I need to create multiple queries with various weightages and properties. The simplified version of a couple of queries is this: ``` SELECT Emp_Id, (30 * ISNULL(BMI,0)) + (20 * ISNULL(Height,0)) + (10 * ISNULL(Eyesight,0)) from MyTable1 where Category = 'Fighter' SELECT Emp_Id, (10 * ISNULL(BMI,0)) + (10 * ISNULL(Height,0)) + (20 * ISNULL(Skill,0)) + (40 * ISNULL(Eyesight,0)) from MyTable1 where Category = 'Sniper' ``` There are 100s of queries with different weightages and properties. So I wanted to create a table with weightages and properties, and then build a dynamic query to be executed, since that will be much easier to maintain. Below is my code so far: ``` /* Dummy Table Creation */ DECLARE @DummyWeightageTable TABLE (Category varchar(50), Fieldname varchar(50), Weightage real) insert into @DummyWeightageTable values ('Sniper', 'Eyesight', 40), ('Sniper', 'BMI', 10), ('Sniper', 'Height', 10), ('Sniper', 'Skill', 20), ('Fighter', 'Eyesight', 10), ('Fighter', 'BMI', 30), ('Fighter', 'Height', 20) /* Actual Functionality */ DECLARE @sql VARCHAR(MAX) DECLARE @delta VARCHAR(MAX) DECLARE @TempTableVariable TABLE (Fieldname varchar(50), Weightage real) insert into @TempTableVariable select Fieldname, Weightage from @DummyWeightageTable where Category = 'Sniper' set @sql = 'SELECT Emp_Id,' /*Do below step for all rows*/ select @delta = '(', Weightage, ' * ISNULL(', Fieldname, ',0) +' from @TempTableVariable set @sql = @sql + @delta + '0) from MyDataTable1' EXEC sp_executesql @sql; Truncate @TempTableVariable insert into @TempTableVariable select Fieldname, Weightage from @DummyWeightageTable where Category = 'Fighter' set @sql = 'SELECT Emp_Id,' /*Do below step for all rows*/ select @delta = '(', Weightage, ' * ISNULL(', Fieldname, ',0) +' from @TempTableVariable set @sql = @sql + @delta + '0) from MyDataTable1' EXEC sp_executesql @sql; ``` However, SQL Server doesn't support arrays. 
So I am getting an error when I try to populate the variable @delta: > Msg 141, Level 15, State 1, Line 15 A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations. I feel there must be some workaround for this but I couldn't find it.
|android|kotlin|mobile|
I have a list of things and want to get them in two separate columns. Can I do that by using search and replace? Or how else do I do this? Here is an example. How it is ![how it is now](https://i.stack.imgur.com/ZwIC0.png) How it should be ![how it should be](https://i.stack.imgur.com/Jl8Ly.png)
|java|apache-spark|bean-io|
I have the same problem with Docker SQL Server 2019 under macOS. I assigned 777 permissions to the whole Data directory (from macOS). I am able to create the new database, but the restore fails anyway, even with the OVERWRITE option. What worked for me was to restore the backup in a SQL Server running on a virtualized Windows machine, then detach the database, copy the MDF & LDF files to the Docker data folder, and attach it back on the dockerized SQL Server. If I then back up that database from the dockerized SQL Server, I can restore it with no problem.
I have a large CSV file without a header row, and the header is available to me as a vector. I want to use a subset of the columns of the file without loading the entire file. The subset of columns required are provided as a separate list. ``` 1,2,3,4 5,6,7,8 9,10,11,12 ``` ```R header <- c("A", "B", "C", "D") subset <- c("D", "B") ``` So far I have been reading the data in the following manner, which gets me the result I want, but loads the entire file first. ```R # Setup library(readr) write.table( structure(list(V1 = c(1L, 5L, 9L), V2 = c(2L, 6L, 10L), V3 = c(3L, 7L, 11L), V4 = c(4L, 8L, 12L)), class = "data.frame", row.names = c(NA, -3L)), file="sample-data.csv", row.names=FALSE, col.names=FALSE, sep="," ) header <- c("A", "B", "C", "D") subset <- c("D", "B") # Current approach df1 <- read_csv( "sample-data.csv", col_names = header )[subset] df1 ``` ``` # A tibble: 3 × 2 D B <dbl> <dbl> 1 4 2 2 8 6 3 12 10 ``` How can I get the same result without loading the entire file first? Related questions - [Only read selected columns](https://stackoverflow.com/q/5788117/21891079) includes the header in the first row. - [Ways to read only select columns from a file into R? (A happy medium between `read.table` and `scan`?) [duplicate]](https://stackoverflow.com/q/2193742/21891079) does not specify column names outside the file and the answers do not apply to this situation. - [how to skip reading certain columns in readr [duplicate]](https://stackoverflow.com/q/31150351/21891079) is different because it seems to be about skipping an unknown first column and reading a known second and third column across multiple files. Data types are not necessarily known in advance in this question. - [Is there a way to omit the first column when reading a csv [duplicate]](https://stackoverflow.com/q/14527466/21891079): column is skipped based on position, not position in an externally provided list of column names.
How to read specific columns of a CSV when given the header as a vector
|r|csv|readr|
Defining it entirely outside the class works, but feels less nice than @python_user's solution: ``` class RawEmergency(InputBase, RawTables): __tablename__ = "emergency" id: Mapped[UNIQUEIDENTIFIER] = mapped_column( UNIQUEIDENTIFIER(), primary_key=True, autoincrement=False ) attendance_id: Mapped[str | None] = guid_column() admitted_spell_id: Mapped[str | None] = guid_column() __table_args__ = ( PrimaryKeyConstraint("id", mssql_clustered=False), Index( "index_emergency_pii_patient_id_and_datetimes", pii_patient_id, attendance_start_date.desc(), attendance_start_time.desc(), ), ) Index( "index_emergency_refresh_date_time", RawEmergency.__table__.c.refresh_date.desc(), RawEmergency.__table__.c.refresh_time.desc(), ) ```
|laravel|tinymce|tinymce-6|
I wrote a script and it works quite well. The only problem is that it takes a little too long to complete. The loop should be able to process up to a maximum of 20 data sets (10 from LOOP1 and another 10 from LOOP2) at once. If I start the function with 2 data sets, for example, I currently have a waiting time of 10-20s. Maybe you have tips on how I can make the script faster. I've already tried a few things: - while - foreach - const and let instead of var But nothing made the script run faster. Greetings **EDIT: Sorry, but the link is not helping me, because there is no TextFinder in the loop and I don't know how I can implement it in my code. Can someone help me, please!** <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> function data (){ var ss = SpreadsheetApp.getActiveSpreadsheet(); var as = SpreadsheetApp.getActiveSheet(); var email = Session.getActiveUser().getEmail(); SpreadsheetApp.getActive().toast("database is being updated", "...loading", 15); var targetSheet1 = ss.getSheetByName("sheet1"); var targetSheet2 = ss.getSheetByName("sheet2"); var date = Utilities.formatDate(new Date(), ss.getSpreadsheetTimeZone(), "dd.MM.yyyy' 'HH:mm:ss"); var rangeE = as.getRange('M9').getValue(); var rangeB = as.getRange('M22').getValue(); var collect = as.getRange('G2').getValue(); var rangefind = targetSheet2.getRange('F2:F'); // LOOP1 if (rangeE != 0) { var row1 = 9 + rangeE; var statusE = "B"; for(var i = 0; i < rangeE; i++){ if(as.getRange(row1-i, 12, 1, 1).getValue() == true){ var order1 = as.getRange(row1-i, 6, 1, 1).getValue(); var batch1 = as.getRange(row1-i, 7, 1, 1).getValue(); var nc1 = as.getRange(row1-i, 8, 1, 1).getValue(); var rm1 = as.getRange(row1-i, 9, 1, 1).getValue(); var vz1 = as.getRange(row1-i, 10, 1, 1).getValue(); var lastRow2 = targetSheet1.getLastRow(); targetSheet1.getRange(lastRow2 + 1, 1, 1, 9).setValues([[date, email, collect, statusE, order1, batch1, nc1, rm1, vz1]]); ``` How can I make it with the 
textFinder? That's my problem: ``` var textFinder = rangefind.createTextFinder(order1).matchEntireCell(true); var result = textFinder.findAll(); var row2 = result[result.length - 1].getRow(); targetSheet2.getRange(row2, 2, 1, 4).setValues([[date, email, collect, statusE]]); } } var rangecheckE = as.getRange(10, 12, 10, 1); rangecheckE.uncheck(); } // LOOP2 if (rangeB != 0) { var row1 = 22 + rangeB; var statusB = "A"; for(var i = 0; i < rangeB; i++){ if(as.getRange(row1-i, 12, 1, 1).getValue() == true){ var order2 = as.getRange(row1-i, 6, 1, 1).getValue(); var batch2 = as.getRange(row1-i, 7, 1, 1).getValue(); var nc2 = as.getRange(row1-i, 8, 1, 1).getValue(); var rm2 = as.getRange(row1-i, 9, 1, 1).getValue(); var vz2 = as.getRange(row1-i, 10, 1, 1).getValue(); var lastRow3 = targetSheet1.getLastRow(); targetSheet1.getRange(lastRow3 + 1, 1, 1, 9).setValues([[date, email, collect, statusB, order2, batch2, nc2, rm2, vz2]]); ``` How can I make it with the textFinder? That's my problem: ``` var textFinder = rangefind.createTextFinder(order2).matchEntireCell(true); var result = textFinder.findAll(); var row2 = result[result.length - 1].getRow(); targetSheet2.getRange(row2, 2, 1, 4).setValues([[date, email, collect, statusB]]); } } var rangecheckB = as.getRange(23, 12, 10, 1); rangecheckB.uncheck(); } SpreadsheetApp.getActive().toast("Data has been updated successfully", "OK", 6); } <!-- end snippet --> **UPDATE1: Here is the new version of my script and it's really a little bit faster.** <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> function data (){ SpreadsheetApp.getActive().toast("database is being updated", "...loading", 15); var ss = SpreadsheetApp.getActiveSpreadsheet(); var as = SpreadsheetApp.getActiveSheet(); var targetSheet1 = ss.getSheetByName("sheet1"); var targetSheet2 = ss.getSheetByName("sheet2"); var rangecheck1 = as.getRange(10, 12, 10, 1); var rangecheck2 = as.getRange(23, 12, 10, 1); var rangefind = 
targetSheet2.getRange('F2:F'); var date = Utilities.formatDate(new Date(), "GMT+1", "dd.MM.yyyy' 'HH:mm:ss"); var email = Session.getActiveUser().getEmail(); var collect = as.getRange('G2').getValue(); var dataE = as.getRange('F10:L19').getValues(); var dataB = as.getRange('F23:L32').getValues(); var outdataE = []; var outdataB = []; for (var i = 0; i < 10; i++) { if (dataE[i][6] === true && dataE[i][0] > 0) { outdataE.push([dataE[i][0],dataE[i][1],dataE[i][2],dataE[i][3],dataE[i][4]]); } if (dataB[i][6] === true && dataB[i][0] > 0) { outdataB.push([dataB[i][0],dataB[i][1],dataB[i][2],dataB[i][3],dataB[i][4]]); } } if (outdataE.length > 0) { var statusE = "B"; for (var i = 0; i < outdataE.length; i++) { var lastRow = targetSheet1.getLastRow(); var rowi = rangefind.createTextFinder(outdataE[i][0]).matchEntireCell(true).findNext().getRow(); targetSheet1.getRange(lastRow + 1, 1, 1, 9).setValues([[date, email, collect, statusE, outdataE[i][0], outdataE[i][1], outdataE[i][2], outdataE[i][3], outdataE[i][4]]]); targetSheet2.getRange(rowi, 2, 1, 4).setValues([[date, email, collect, statusE]]); } } if (outdataB.length > 0) { var statusB = "A"; for (var i = 0; i < outdataB.length; i++) { var lastRow = targetSheet1.getLastRow(); var rowi = rangefind.createTextFinder(outdataB[i][0]).matchEntireCell(true).findNext().getRow(); targetSheet1.getRange(lastRow + 1, 1, 1, 9).setValues([[date, email, collect, statusB, outdataB[i][0], outdataB[i][1], outdataB[i][2], outdataB[i][3], outdataB[i][4]]]); targetSheet2.getRange(rowi, 2, 1, 4).setValues([[date, email, collect, statusB]]); } } rangecheck1.uncheck(); rangecheck2.uncheck(); SpreadsheetApp.getActive().toast("Data has been updated successfully", "OK", 6); } <!-- end snippet --> Do you think I can make it much more faster with another modification? Or is thats the limit...? **UPDATE2: Here are screenshots from my database for a better understanding.** [targetsheet1][1] [targetsheet2][2] 1. 
the data comes IN with a formula (a maximum of 10 rows at a time) 2. the script copies every row from the IN data to the last row in targetsheet1 3. the script uses the TextFinder to search targetsheet2 for the order from every row of the IN data. If there's a match, the targetsheet2 data in that line is overwritten with the new IN data. I hope you can understand a bit better what my script should do... **UPDATE3: Here is some sample data** [SampleData][3] [1]: https://i.stack.imgur.com/unQfc.png [2]: https://i.stack.imgur.com/5i8Fy.png [3]: https://docs.google.com/spreadsheets/d/1TIHEbuZe1d3BIq4xl7fpfJZbyCnEo5iCepv0zdABM9c/edit#gid=1803017680 I just can't get any further and need help. The data is larger in the original file; therefore, the script here only requires a fraction of the processing time. I use the TextFinder to look for the same number in sheet2 of my table as in main. That data record is then adjusted accordingly in sheet2. I hope I have described it in a somewhat understandable way.
I have created a web application that I want to deploy on Oracle WebLogic Server, but I get the error "Caused By: javax.faces.FacesException: Unable to find CDI BeanManager". How do I solve this error? Do I need to add anything to the files below? I have configured the data source in WebLogic Server. I understand I should ensure that the container is configured correctly to support CDI, which may involve checking the deployment descriptors (web.xml).

```
web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"
         version="4.0">
</web-app>

weblogic.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wls:weblogic-web-app [
  <!ELEMENT wls:weblogic-web-app (wls:context-root|wls:container-descriptor)*>
  <!ATTLIST wls:weblogic-web-app
    xmlns:wls CDATA #REQUIRED
    xmlns:xsi CDATA #REQUIRED
    xsi:schemaLocation CDATA #REQUIRED>
  <!ELEMENT wls:context-root (#PCDATA)>
  <!ELEMENT wls:container-descriptor (wls:prefer-application-packages)*>
  <!ELEMENT wls:prefer-application-packages (wls:package-name)*>
  <!ELEMENT wls:package-name (#PCDATA)>
]>
<wls:weblogic-web-app
    xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee https://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
  <wls:container-descriptor>
    <wls:prefer-application-packages>
      <wls:package-name>org.slf4j</wls:package-name>
    </wls:prefer-application-packages>
  </wls:container-descriptor>
</wls:weblogic-web-app>

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.9</version>
    <relativePath/> <!-- lookup parent from repository -->
  </parent>
  <groupId>com.example</groupId>
  <artifactId>ejb_exporter</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>
  <name>ejb_exporter</name>
  <description>Demo project for Spring Boot</description>
  <properties>
    <java.version>1.8</java.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
      <groupId>javax.enterprise</groupId>
      <artifactId>cdi-api</artifactId>
      <version>1.2</version>
    </dependency>
    <dependency>
      <groupId>com.oracle.database.jdbc</groupId>
      <artifactId>ojdbc8</artifactId>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.18.28</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-devtools</artifactId>
      <scope>runtime</scope>
      <optional>true</optional>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>
```
weblogic.application.ModuleException: javax.faces.FacesException: Unable to find CDI BeanManager
|java|oracle|spring-boot|maven|weblogic|
null
|spring|oracle-database|jpa|oracle19c|
I've been using the React Native Video package with FlashList, and I've encountered an issue where all the videos seem to squeeze and resize themselves or flicker when they first render while scrolling. After a few seconds, they render fine as expected. I've been facing this problem for a while. Has anyone else experienced this issue? I would really appreciate any help or advice. Thanks!
Videos resize and flicker while scrolling (react-native-video + FlashList)
|reactjs|react-native|video|react-native-video|
I can run my Flutter project from Android Studio with the `--dart-define-from-file=` option and it works. But when I try to run the same project's Android module (to debug native code I open the android folder with Android Studio), it doesn't see my config file. I know I have to specify it somewhere, e.g. by editing the run/debug configurations, but I don't know where to do it. Can you please provide the exact command or a solution?

My Flutter command to run the code is:

    flutter run --dart-define-from-file=base_config.json

What should I do? I tried the compile options in Gradle-Android Compiler and entered this command:

    -Pdart-define-from-file=base_config.json

but it doesn't work.
How to run a Flutter project's native Android module (android → app) in debug with the `--dart-define-from-file=config.json` option?
|android|flutter|android-studio|flutter-run|dart-define|
null
Hi all, I have a project that will host a Grafana app (the whole app, not only dashboards/snapshots) inside my website in an iframe. I'm trying to set up SSO so my users wouldn't have to log in to my site and then again into their Grafana account inside my website.

Grafana version and OS:

- Grafana 10.4.1 Enterprise on Windows 10

I followed these instructions:

- https://community.grafana.com/t/automatic-login-to-grafana-from-web-application/16801
- https://community.grafana.com/t/grafana-auto-login-from-angular-button-click/71813

I expected Grafana to auto-login the user and open their home route/dashboards. This is my custom.ini file:

```
[server]
protocol = http
http_addr = 127.0.0.1
http_port = 8080
domain = 127.0.0.1
enforce_domain = false

[security]
allow_embedding = true

[auth]
login_cookie_name = grafana_session
disable_login = false
login_maximum_inactive_lifetime_duration =
login_maximum_lifetime_duration =
token_rotation_interval_minutes = 10
disable_login_form = false
api_key_max_seconds_to_live = -1

[auth.anonymous]
enabled = true
org_name = Main Org.
org_role = Viewer

[auth.basic]
enabled = false

[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
sync_ttl = 60
whitelist =
headers =
enable_login_token = false
```
Curiously, MySQL Workbench allows a Foreign Key Comment to be entered on a foreign key, but it does not generate that comment when synchronizing to a database. Worse, entering a Foreign Key Comment causes the table to keep showing up as a change on every synchronization, which is very annoying!
I have linked the list adapter and the other list-item components properly, but the binding is not picking up the `android:id` from `activity_main_cake_items.xml` and gives this error:

> Cannot resolve symbol 'listviewCakes'

These are my files:

> **MainCakeItems.java**

```
package com.example.dessertshop;

import androidx.appcompat.app.AppCompatActivity;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Adapter;
import android.widget.AdapterView;

import com.example.dessertshop.databinding.ActivityMainBinding;

import java.util.ArrayList;

public class MainCakeItems extends AppCompatActivity {

    ActivityMainBinding binding;
    ListAdapterCakes listAdapterCakes;
    ArrayList<ListDataCakes> dataCakesArrayList = new ArrayList<>();
    ListDataCakes listDataCakes;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        binding = ActivityMainBinding.inflate(getLayoutInflater());
        // setContentView(R.layout.activity_main_cake_items);
        setContentView(binding.getRoot());

        int[] imageCakeList = {R.drawable.vanilla_cake, R.drawable.chocolate_cake, R.drawable.strawberry_cake};
        int[] idCakeList = {R.string.vanillaCakeId, R.string.chocolateCakeId, R.string.strawberryCakeId};
        int[] titleCakeList = {R.string.vanillaCakeTitle, R.string.chocolateCakeTitle, R.string.strawberryCakeTitle};
        int[] detailsCakeList = {R.string.vanillaCakeDetails, R.string.chocolateCakeDetails, R.string.strawberryCakeDetails};
        String[] titleList = {"Vanilla Cake", "Chocolate Cake", "Strawberry Cake"};
        String[] idList = {"01", "02", "03"};

        for (int i = 0; i < imageCakeList.length; i++) {
            listDataCakes = new ListDataCakes(titleList[i], idList[i], imageCakeList[i], idCakeList[i], titleCakeList[i], detailsCakeList[i]);
            dataCakesArrayList.add(listDataCakes);
        }

        listAdapterCakes = new ListAdapterCakes(MainCakeItems.this, dataCakesArrayList);
        binding.listviewCakes.setAdapter(listAdapterCakes); // ERROR
        binding.listviewCakes.setClickable(true); // ERROR
        binding.listviewCakes.setOnItemClickListener(new AdapterView.OnItemClickListener() { // ERROR
            @Override
            public void onItemClick(AdapterView<?> adapterView, View view, int i, long l) {
                Intent intent = new Intent(MainCakeItems.this, DetailsCakes.class);
                intent.putExtra("image", imageCakeList[i]);
                intent.putExtra("id", idCakeList[i]);
                intent.putExtra("title", titleCakeList[i]);
                intent.putExtra("details", detailsCakeList[i]);
                intent.putExtra("title", titleList[i]);
                intent.putExtra("id", idList[i]);
                startActivity(intent);
            }
        });
    }
}
```

> **activity_main_cake_items.xml**

```
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@drawable/background_image"
    tools:context=".MainCakeItems">

    <ListView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/listviewCakes"
        android:scrollbars="vertical"
        android:layout_marginStart="10dp"
        android:layout_marginEnd="10dp"
        android:layout_marginTop="12dp"
        tools:listitem="@layout/list_item_cakes"
        android:divider="@android:color/transparent"
        android:dividerHeight="10.0sp">
        <!-- Here the binding should pick up the id above -->
    </ListView>

</androidx.constraintlayout.widget.ConstraintLayout>
```

> **ListAdapterCakes.java**

```
package com.example.dessertshop;

import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.ImageView;
import android.widget.TextView;

import androidx.annotation.NonNull;
import androidx.annotation.Nullable;

import java.util.ArrayList;

public class ListAdapterCakes extends ArrayAdapter<ListDataCakes> {

    public ListAdapterCakes(@NonNull Context context, ArrayList<ListDataCakes> dataCakesArrayList) {
        super(context, R.layout.list_item_cakes, dataCakesArrayList);
    }

    @NonNull
    @Override
    public View getView(int position, @Nullable View view, @NonNull ViewGroup parent) {
        ListDataCakes listDataCakes = getItem(position);

        if (view == null) {
            view = LayoutInflater.from(getContext()).inflate(R.layout.list_item_cakes, parent, false);
        }

        ImageView listImageCakes = view.findViewById(R.id.listImageCakes);
        TextView listTitleCakes = view.findViewById(R.id.listTitleCakes);
        // TextView listDetails = view.findViewById(R.id.listCakeDetails);
        TextView listCakesId = view.findViewById(R.id.listCakesId);

        listImageCakes.setImageResource(listDataCakes.image);
        listTitleCakes.setText(listDataCakes.title);
        // listDetails.setText(listDataCakes.details);
        listCakesId.setText(listDataCakes.id);

        return view;
    }
}
```
Listview - Getting error while linking the items correctly in Java
|java|android|android-studio|android-listview|mobile-development|
{"Voters":[{"Id":6752050,"DisplayName":"273K"},{"Id":8877,"DisplayName":"Ðаn"},{"Id":6870253,"DisplayName":"chtz"}],"SiteSpecificCloseReasonIds":[13]}
I am using Ionic 7 Angular standalone, but ion-icons are not showing at all on the web. I have also tried adding these scripts to index.html:

    <script type="module" src="https://cdn.jsdelivr.net/npm/ionicons@latest/dist/ionicons/ionicons.esm.js"></script>
    <script nomodule src="https://cdn.jsdelivr.net/npm/ionicons@latest/dist/ionicons/ionicons.js"></script>

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/ubHh0.jpg

Here is my code:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-html -->

    <ion-row>
      <ion-col size="2" class="ion-text-center">
        <h6 class="m-0 f-md">4:00</h6>
        <h4 class="m-0 f-md bold">PM</h4>
      </ion-col>
      <ion-col size="6">
        <div>
          <ion-text class="f-md bold">Launch with Juile</ion-text>
        </div>
        <div>
          <ion-text class="f-md">Family</ion-text>
        </div>
      </ion-col>
      <ion-col size="2">
        <div>
          <ion-icon name="star" color="danger"></ion-icon>
        </div>
      </ion-col>
    </ion-row>

<!-- end snippet -->

These are my package.json dependencies:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-html -->

    "dependencies": {
      "@angular/animations": "^17.0.2",
      "@angular/common": "^17.0.2",
      "@angular/compiler": "^17.0.2",
      "@angular/core": "^17.0.2",
      "@angular/forms": "^17.0.2",
      "@angular/platform-browser": "^17.0.2",
      "@angular/platform-browser-dynamic": "^17.0.2",
      "@angular/router": "^17.0.2",
      "@capacitor/android": "5.7.4",
      "@capacitor/app": "5.0.7",
      "@capacitor/core": "5.7.4",
      "@capacitor/haptics": "5.0.7",
      "@capacitor/keyboard": "5.0.8",
      "@capacitor/status-bar": "5.0.7",
      "@ionic/angular": "^7.5.0",
      "ionicons": "^7.1.0",
      "rxjs": "~7.8.0",
      "swiper": "^11.1.0",
      "tslib": "^2.3.0",
      "uuid": "^9.0.1",
      "zone.js": "~0.14.2"
    },

<!-- end snippet -->
Ionic Angular Standalone ion-icon are not showing at all
|angular|ionic-framework|
I'm using the following IIS rewrite rule to block as many bots as possible:

    <rule name="BotBlock" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{HTTP_USER_AGENT}" pattern="^$|bot|crawl|spider" />
      </conditions>
      <action type="CustomResponse" statusCode="403" statusReason="Forbidden" statusDescription="Forbidden" />
    </rule>

This rule blocks all requests with an empty User-Agent string or a User-Agent string that contains `bot`, `crawl` or `spider`. This works great, but it also blocks `googlebot`, which I do not want. So how do I exclude the `googlebot` string from the above pattern so Googlebot can still hit the site?

I've tried:

- `^$|!googlebot|bot|crawl|spider`
- `^$|(?!googlebot)|bot|crawl|spider`
- `^(?!googlebot)$|bot|crawl|spider`
- `^$|(!googlebot)|bot|crawl|spider`

But they either block all User-Agents or still do not allow googlebot. Who has a solution and knows a bit about regex?

**So thanks to The fourth bird the solution becomes:**

    <add input="{HTTP_USER_AGENT}" pattern="^$|\b(?!googlebot\b)\w*(?:bot|crawl|spider)\w*" />
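For anyone who wants to sanity-check the final pattern outside IIS, here is a small JavaScript sketch. IIS rewrite patterns are .NET regexes and match case-insensitively by default, so the `i` flag below is an assumption mirroring that; the word-boundary lookahead itself behaves the same way in both engines.

```javascript
// The accepted pattern: block an empty User-Agent, or any UA word that
// contains bot/crawl/spider, unless that word is exactly "googlebot".
const pattern = /^$|\b(?!googlebot\b)\w*(?:bot|crawl|spider)\w*/i;
const blocked = (ua) => pattern.test(ua);

console.log(blocked(""));               // true  (empty UA)
console.log(blocked("bingbot"));        // true
console.log(blocked("WebCrawler/2.0")); // true
console.log(blocked("googlebot/2.1"));  // false (allowed through)
console.log(blocked("Mozilla/5.0"));    // false
```

The `\b(?!googlebot\b)` part is what makes it work: at every word boundary the lookahead rejects a match if the whole word there is exactly `googlebot`, while `\w*(?:bot|crawl|spider)\w*` still catches the keyword anywhere inside any other word.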
Zion Bokobza's answer helped me, but that `Main1Component.lastComp = this;` ought to be in `ngOnInit`, not in `ngOnDestroy`, or else `lastComp` will remain null until the component is destroyed:

```
ngOnInit(): void {
  if (!Main1Component.lastComp) {
    this.name = 'Zion';
    Main1Component.lastComp = this;
  }
}
```
You need to convert to a continuous scale to use minor ticks, since there are no minor breaks on a discrete axis:

``` r
dt %>%
  ggplot(aes(var1, as.numeric(factor(ca)), fill = var2)) +
  geom_col(width = 0.8, orientation = 'y') +
  stat_summary(orientation = 'y', fun = sum, geom = "point",
               colour = "grey40", fill = "grey40",
               aes(shape = 'average'), size = 2) +
  geom_vline(xintercept = 0, colour = "grey30", linetype = "dotted") +
  scale_y_continuous('ca', labels = levels(factor(dt$ca)),
                     breaks = seq_along(levels(factor(dt$ca)))) +
  scale_shape_manual(NULL, values = 20) +
  guides(y = guide_axis(minor.ticks = TRUE),
         fill = guide_legend(order = 1),
         shape = guide_legend(order = 2)) +
  theme(axis.minor.ticks.length.y = unit(3, 'mm'),
        axis.ticks.length.y = unit(0, 'mm'),
        legend.key = element_blank())
```

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/4pMxZ.jpg
null
null
Trying to understand the `useCallback()` hook in React. In a blog post someone said that it solves the following problem:

> Before diving into `useCallback()` use, let's distinguish the problem `useCallback()` solves — the functions equality check.
>
> Functions in JavaScript are first-class citizens, meaning that a function is a regular object. The function object can be returned by other functions, be compared, etc.: anything you can do with an object.
>
> Let's write a function `factory()` that returns functions that sum numbers:

```javascript
function factory() {
  return (a, b) => a + b;
}

const sumFunc1 = factory();
const sumFunc2 = factory();

console.log(sumFunc1(1, 2)); // => 3
console.log(sumFunc2(1, 2)); // => 3

console.log(sumFunc1 === sumFunc2); // => false
console.log(sumFunc1 === sumFunc1); // => true
```

> `sumFunc1` and `sumFunc2` are functions that sum two numbers. They've been created by the `factory()` function.
>
> The functions `sumFunc1` and `sumFunc2` share the same source code, but they are different function objects. Comparing them, `sumFunc1 === sumFunc2` evaluates to `false`.
>
> That's just how JavaScript objects work. An object (including a function object) equals only itself.

So my questions:

1. Why does `sumFunc1 === sumFunc2` return `false`?
2. How does the `useCallback()` hook solve that?
3. How is the `useCallback()` hook actually used for memoization?
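To check my understanding of the caching idea, here is a plain-JavaScript sketch (no React; the helper names are my own invention, not React's real internals) of what I assume `useCallback()` does: keep returning the previous function object for as long as the dependency values are unchanged:

```javascript
// Toy model of useCallback for a single call site; a deliberate
// simplification, not React's actual implementation.
function makeUseCallback() {
  let prevFn = null;
  let prevDeps = null;
  return function useCallbackSim(fn, deps) {
    const same = prevDeps !== null &&
      deps.length === prevDeps.length &&
      deps.every((d, i) => Object.is(d, prevDeps[i]));
    if (!same) {     // deps changed (or first call): remember the new function
      prevFn = fn;
      prevDeps = deps;
    }
    return prevFn;   // deps unchanged: same object as the last "render"
  };
}

const useCallbackSim = makeUseCallback();

// "Render" twice with the same deps: the same function object comes back,
// so a strict equality check (or React.memo's prop comparison) passes.
const a = useCallbackSim(() => 1 + 2, [1]);
const b = useCallbackSim(() => 1 + 2, [1]);
console.log(a === b); // true

// Change a dependency: a fresh function object is returned.
const c = useCallbackSim(() => 1 + 2, [2]);
console.log(a === c); // false
```

If that model is right, the stable reference is exactly what makes the equality check pass between renders, even though each render creates a brand-new inline function.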
useCallback hook and functions equality check problem
|javascript|reactjs|react-hooks|hook|memoization|
I have a procedure that sets the validation list for cells in a range. I finally got it to work so that the correct range is passed as the validation list. But when I check the validation settings, the range is different, and I cannot understand what happened and why.

Below is the message box telling me the range for the validation list:

[![correct validation list range][1]][1]

But below is the validation list range that has actually been set:

[![actual validation list range][2]][2]

I just do not understand why this range is changed to A2:A7 instead of A1:A6. This is the code for the procedure:

    Sub validatie()
        Dim ws As Worksheet, ws1 As Worksheet
        Set ws = ThisWorkbook.Worksheets("Hoofdbestand")
        Set ws1 = ThisWorkbook.Worksheets("Verwijzingen")
        aantalrijen2 = ws1.Range("A1", ws1.Range("A1").End(xlDown)).Cells.Count
        With ws
            aantalrijen = ws.Range("A1", ws.Range("A1").End(xlDown)).Cells.Count
            With .Range("B2:B" & aantalrijen).Validation
                .Delete
                .Add Type:=xlValidateList, AlertStyle:=xlValidAlertStop, _
                     Formula1:="=" & ws1.Name & "!" & "A1:A" & aantalrijen2
                MsgBox "=" & ws1.Name & "!" & "A1:A" & aantalrijen2
            End With
        End With
    End Sub

[1]: https://i.stack.imgur.com/ZIcCm.png
[2]: https://i.stack.imgur.com/N051M.png
There is no way that EF Core is bypassing your triggers. There must be a bug in the trigger logic.

Your trigger as it stands may or may not be correct, but it uses a huge amount of Bad Stuff:

* Cursors, instead of a joined update.
* You would need to join `inserted` and `deleted` using a full join, although in this case, because you want a quantity difference, it could make more sense to use `UNION ALL` and `GROUP BY`.
* Use of three-part column names.
* Lack of table aliases to make it more readable.
* Checking `TRIGGER_NESTLEVEL` without passing an object ID.
* One would hope that `(WarehouseId, ComponentId, ColorId)` is a primary or unique key in both tables, otherwise you may end up updating multiple rows.

Your trigger should look like this:

```tsql
CREATE OR ALTER TRIGGER dbo.tr_update_stocks
ON dbo.WarehouseComponentLoadRows
AFTER INSERT, UPDATE, DELETE
AS

IF @@ROWCOUNT = 0 OR TRIGGER_NESTLEVEL(@@PROCID) > 1
    RETURN;

SET NOCOUNT ON;

UPDATE wc
SET StockQuantity += i.DiffQuantity,
    AvailableQuantity += i.DiffQuantity
FROM dbo.WarehouseComponents wc
JOIN (
    SELECT
      i.WarehouseId,
      i.ComponentId,
      i.ColorId,
      SUM(i.Quantity) AS DiffQuantity
    FROM (
        SELECT i.WarehouseId, i.ComponentId, i.ColorId, i.Quantity
        FROM inserted i
        UNION ALL
        SELECT d.WarehouseId, d.ComponentId, d.ColorId, -d.Quantity
        FROM deleted d
    ) i
    GROUP BY
      i.WarehouseId,
      i.ComponentId,
      i.ColorId
) i ON i.WarehouseId = wc.WarehouseId
   AND i.ComponentId = wc.ComponentId
   AND i.ColorId = wc.ColorId;
```

____

**To be honest,** if all you want is the total child quantity then **you should probably just use a view** instead of triggers.

```tsql
CREATE VIEW dbo.TotalWarehouseComponentLoadRows
WITH SCHEMABINDING
AS

SELECT
  wcl.WarehouseId,
  wcl.ComponentId,
  wcl.ColorId,
  SUM(wcl.Quantity) AS Quantity,
  COUNT_BIG(*) AS Count
FROM dbo.WarehouseComponentLoadRows wcl
GROUP BY
  wcl.WarehouseId,
  wcl.ComponentId,
  wcl.ColorId;
```

For extra performance, you could make that into an indexed view.
```tsql
CREATE UNIQUE CLUSTERED INDEX IX
    ON dbo.TotalWarehouseComponentLoadRows (WarehouseId, ComponentId, ColorId);
```

(The unique clustered index must be created on the view itself, not on the base table, for it to become an indexed view; that is also why the view needs `WITH SCHEMABINDING` and `COUNT_BIG(*)`.)
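The `UNION ALL` subquery in the trigger computes a per-key net change: inserted quantities count positive, deleted quantities negative, so an UPDATE (which appears in both pseudo-tables) nets to its difference. The same bookkeeping in miniature (plain JavaScript with toy data, purely to illustrate the arithmetic):

```javascript
// Net quantity change per key: +qty for rows in "inserted",
// -qty for rows in "deleted" (an UPDATE contributes a row to both).
const inserted = [{ key: "A", qty: 5 }, { key: "B", qty: 2 }];
const deleted  = [{ key: "A", qty: 3 }];

const diff = {};
for (const r of inserted) diff[r.key] = (diff[r.key] ?? 0) + r.qty;
for (const r of deleted)  diff[r.key] = (diff[r.key] ?? 0) - r.qty;

console.log(diff); // { A: 2, B: 2 }
```

Key "A" was updated from quantity 3 to 5, so its stock moves by +2; key "B" is a pure insert, so it moves by its full quantity. That is exactly what the single set-based `UPDATE` applies, with no cursor in sight.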
A typical trigger would look like this:

```tsql
CREATE TRIGGER Change_TOMOGRAFIA
ON YourTable
AFTER INSERT, UPDATE
AS

SET NOCOUNT ON;

IF TRIGGER_NESTLEVEL(@@PROCID) > 1
    RETURN;  -- prevent recursion

IF NOT UPDATE(no_stepdescription) OR NOT EXISTS (SELECT 1 FROM inserted)
    RETURN;  -- early bail-out for 0 rows

UPDATE t
SET no_stepdescription = STUFF(t.no_stepdescription, 1, LEN('TOMOGRAFIA'), 'T.C')
FROM YourTable t
JOIN inserted i ON i.your_primary_key = t.your_primary_key
WHERE i.no_stepdescription LIKE 'TOMOGRAFIA %';
```

Note the use of a join back to the original table using the primary key. The `inserted` table may have zero or multiple rows, and you cannot modify it directly.
Actually, you can achieve the same result with some basic math:

```python
df.withColumn(
    "rand", F.rand() * (F.col("max") - F.col("min")) + F.col("min")
)
```

The new column will be a float, but you can either truncate or round it, depending on your use case.
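The expression works because `rand()` is uniform on [0, 1): multiplying by `(max - min)` and then adding `min` maps that interval onto [min, max) for each row. The arithmetic itself, outside Spark (plain JavaScript here just to keep it runnable):

```javascript
// Map a uniform u in [0, 1) onto [min, max), mirroring the Spark
// expression F.rand() * (max - min) + min from the snippet above.
const scale = (u, min, max) => u * (max - min) + min;

console.log(scale(0,     10, 20)); // 10 (lower bound is included)
console.log(scale(0.5,   10, 20)); // 15
console.log(scale(0.999, 10, 20) < 20); // true (upper bound is excluded)
```

Note that, like `rand()` itself, the result never reaches `max` exactly; if you need an inclusive integer range, round after scaling with `max - min + 1`.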
Hi, as the title suggests, I am trying to write/edit XML against an XSD using xmlschema. I am using this XSD (https://github.com/NREL/bcl-gem/blob/develop/schemas/v3/measure_v3.xsd) and this XML file (https://drive.google.com/file/d/1WKJVBjn6IjmO-EZX9yGC8AaTCynUugFC/view?usp=sharing):

```
schema = xmlschema.XMLSchema(xsd_path)
print(schema.is_valid(xml_path))
d = xmlschema.to_json(xml_path, schema=schema)
d = schema.to_dict(xml_path)
print(d)
json_data = json.dumps(d)
xml = xmlschema.from_json(json_data, schema=schema, preserve_root=True)
```

However, I keep getting this error:

```
File "/home/lib/python3.12/site-packages/xmlschema/validators/schemas.py", line 2245, in encode
    for result in self.iter_encode(obj, path, validation, *args, **kwargs):
File "/home/lib/python3.12/site-packages/xmlschema/validators/schemas.py", line 2229, in iter_encode
    raise XMLSchemaEncodeError(self, obj, self.elements, reason, namespaces=namespaces)
xmlschema.validators.exceptions.XMLSchemaEncodeError: failed validating <class 'dict'> instance with XMLSchema10(name='measure_v3.xsd', namespace=''):

Reason: unable to select an element for decoding data, provide a valid 'path' argument.
```

The issue is very similar to https://github.com/sissaschool/xmlschema/issues/241 and https://stackoverflow.com/questions/67027430/fail-to-use-xmlschema-from-json. In those posts it was suggested that "it's a matter of namespaces; despite there being no prefix, the data are still bound to the namespace of the schema."

One can get the namespace from the XSD with the command below. However, the XSD provided to me does not have a target namespace:

```
>>> CAPSchema.target_namespace
'urn:oasis:names:tc:emergency:cap:1.2'
```

I cannot figure out how to translate the solutions given to my case. I am not very familiar with XML and XSD; I hope I am not missing something very obvious here. Thanks!
I use SharedPreferences in my project to store user information, but the app crashes with an error and does not load at all, even though as far as I can see there is no problem with my code. How can I fix it?

This is the code of the class where I store the information:

    public class UserManager {

        private SharedPreferences sharedPreferences;

        public UserManager(Context context) {
            sharedPreferences = context.getSharedPreferences("user_information", Context.MODE_PRIVATE);
        }

        public void saveUserInformation(String fullName, String email, String gender) {
            @SuppressLint("CommitPrefEdits")
            SharedPreferences.Editor editor = sharedPreferences.edit();
            editor.putString("full_name", fullName);
            editor.putString("email", email);
            editor.putString("gender", gender);
            editor.apply();
        }

        public String getFullName() {
            return sharedPreferences.getString("full_name", "");
        }

        public String getEmail() {
            return sharedPreferences.getString("email", "");
        }

        public String getGender() {
            return sharedPreferences.getString("gender", "");
        }
    }

And this is the code of the main class where I read the information back after saving it:

    public class MainActivity extends AppCompatActivity {

        private UserManager userManager;
        private String gender = "";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            userManager = new UserManager(this);

            TextInputEditText fullNameEt = findViewById(R.id.et_main_fullName);
            fullNameEt.setText(userManager.getFullName());

            TextInputEditText emailEt = findViewById(R.id.et_main_email);
            emailEt.setText(userManager.getEmail());

            RadioGroup genderRadioGroup = findViewById(R.id.radioGroup_main_gender);
            genderRadioGroup.setOnCheckedChangeListener(new RadioGroup.OnCheckedChangeListener() {
                @Override
                public void onCheckedChanged(RadioGroup group, int checkedId) {
                    if (checkedId == R.id.btn_main_male) {
                        gender = "male";
                    } else {
                        gender = "female";
                    }
                }
            });

            gender = userManager.getGender();
            if (gender.equalsIgnoreCase("male")) {
                genderRadioGroup.check(R.id.btn_main_male);
            } else if (gender.equalsIgnoreCase("female")) {
                genderRadioGroup.check(R.id.btn_main_female);
            }

            View saveBtn = findViewById(R.id.btn_main_save);
            saveBtn.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    userManager.saveUserInformation(fullNameEt.getText().toString(), emailEt.getText().toString(), gender);
                }
            });
        }
    }
How can I use the SharedPreferences class in Java and Android Studio?
|java|android|kotlin|android-studio|android-developer-api|
null
If you're writing something that does "the same thing, with just one thing changing at each step", that's a loop. If you foresee a situation that does warrant `if` statements, then you generally want to resolve them such that you handle "the largest thing first" (to ensure there's no fall-through) or by using `if-else` statements (also so there's no fall-through). You might consider a switch, but the switch statement is a hold-over from programming languages that didn't have dictionaries/key-value objects to perform O(1) lookups with, which JS, Python, etc. all do. So in JS you're almost _always_ better off using a mapping object with your case values as property keys, turning an O(n) code path through a switch into an O(1) immediate lookup.

However, in this case what we're really doing is simple text matching, so you can use the best tool in the toolset for that: you can trivially get both the `#` sequence and the "remaining text" with a regex, and then generate the replacement HTML [using the captured data](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace#replacement):

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    function markdownToHTML(doc) {
      return convertMultiLineMD(doc.split(`\n`)).join(`\n`);
    }

    function convertMultiLineMD(lines) {
      // convert tables, lists, etc, while also making sure
      // to perform inline markup conversion for any content
      // that doesn't span multiple lines. For the purpose of
      // this answer, we're going to ignore multi-line entirely:
      return convertInlineMD(lines);
    }

    function convertInlineMD(lines) {
      return lines.map((line) => {
        // convert headings
        line = line.replace(
          // two capture groups, one for the markup, and one for the heading,
          // with a third optional group so we don't capture EOL whitespace.
          /^(#+)\s+(.+?)(\s+)?$/,
          // and we extract the first group's length immediately
          (_, { length: h }, text) => `<h${h}>${text}</h${h}>`
        );
        // then wrap bare text in <p>, convert bold, italic, etc. etc.
        return line;
      });
    }

    // And a simple test based on what you indicated:
    const docs = [`## he#llo\nthere\n# yooo `, `# he#llo\nthere\n## yooo`];
    docs.forEach((doc, i) => console.log(`[doc ${i + 1}]\n`, markdownToHTML(doc)));

<!-- end snippet -->

Note, though, that even this is still a naive approach to writing a transpiler, with rather poor runtime performance compared to writing [a stack parser](https://spec.commonmark.org/0.31.2/#appendix-a-parsing-strategy) or [a DFA](https://en.wikipedia.org/wiki/Deterministic_finite_automaton) based on the markdown grammar (the "markup language specification" grammar, i.e. the rules that say which tokens can follow which other tokens), where you run through your document by tracking what kind of token we're dealing with, and converting on the fly as we pass token terminations. (This is, in fact, how regular expressions work: they generate a DFA from the [regular grammar](https://en.wikipedia.org/wiki/Regular_grammar) pattern you specify, then run the input through that DFA, achieving near-perfect runtime performance.)
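To make the lookup-object point above concrete: instead of a switch over token types, a plain object gives you O(1) dispatch, and adding a case is just adding a property. (The token names here are made up for the sketch, not part of any real markdown library.)

```javascript
// A dispatch object instead of a switch: each token type maps straight
// to its converter function.
const converters = {
  heading: (text, level) => `<h${level}>${text}</h${level}>`,
  paragraph: (text) => `<p>${text}</p>`,
};

function convert(type, text, level) {
  const handler = converters[type];
  // Fall back to the raw text for unknown token types.
  return handler ? handler(text, level) : text;
}

console.log(convert("heading", "hello", 2)); // <h2>hello</h2>
console.log(convert("paragraph", "there")); // <p>there</p>
```

With a switch, every unmatched case costs a comparison; with the object, the engine resolves the property in one step regardless of how many token types you register.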
In short, the answer is here: https://github.com/firebase/flutterfire/issues/4300#issuecomment-916883580. I forked the package, added it to my project's packages folder, and made that condition always true.
|astrojs|
I have a `VGG19` trained on 224x224x3 images for binary classification using `flow_from_dataframe`, and a `Places365` TensorFlow-version network; the two are combined to create a new model. When I have to evaluate this new model, I think I need an `ImageDataGenerator` with `flow_from_dataframe` to access the images and their labels in a `.csv` file (400 images in total), but then I get the following error:

```
ValueError: Exception encountered when calling layer 'vgg19' (type Functional).

Input 0 of layer "block1_conv1" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, None)
```

The `VGG19` is receiving the same image format as when it was trained, so I don't really understand why this is happening.

The class for the `Places365` model:

```
from __future__ import division, print_function

import os
import pickle
import warnings

import numpy as np
from keras import backend as K
from keras.layers import Input
from keras.layers import Activation, Dense, Flatten
from keras.layers import MaxPooling2D
from keras.models import Model
from keras.layers import Conv2D
from keras.regularizers import l2
from keras.layers import Dropout
from keras.layers import GlobalAveragePooling2D
from keras.layers import GlobalMaxPooling2D
from keras.utils import get_source_inputs
from keras.utils import get_file
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input

WEIGHTS_PATH = 'https://github.com/GKalliatakis/Keras-VGG16-places365/releases/download/v1.0/vgg16-places365_weights_tf_dim_ordering_tf_kernels.h5'
WEIGHTS_PATH_NO_TOP = 'https://github.com/GKalliatakis/Keras-VGG16-places365/releases/download/v1.0/vgg16-places365_weights_tf_dim_ordering_tf_kernels_notop.h5'


def VGG16_Places365(weights='places', input_shape=None, pooling=None, classes=365):
    img_input = Input(shape=input_shape)

    # Block 1
    x = Conv2D(filters=64, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block1_conv1_365')(img_input)
    x = Conv2D(filters=64, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block1_conv2_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name="block1_pool_365", padding='valid')(x)

    # Block 2
    x = Conv2D(filters=128, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block2_conv1_365')(x)
    x = Conv2D(filters=128, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block2_conv2_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name="block2_pool_365", padding='valid')(x)

    # Block 3
    x = Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block3_conv1_365')(x)
    x = Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block3_conv2_365')(x)
    x = Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block3_conv3_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name="block3_pool_365", padding='valid')(x)

    # Block 4
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block4_conv1_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block4_conv2_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block4_conv3_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name="block4_pool_365", padding='valid')(x)

    # Block 5
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block5_conv1_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block5_conv2_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block5_conv3_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name="block5_pool_365", padding='valid')(x)

    inputs = img_input

    # Create model.
    model = Model(inputs, x, name='vgg16-places365')

    # load weights
    weights_path = get_file('vgg16-places365_weights_tf_dim_ordering_tf_kernels_notop.h5',
                            WEIGHTS_PATH_NO_TOP,
                            cache_subdir='models')
    model.load_weights(weights_path)

    return model
```

The pre-trained model loading and the other methods:

```
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam
from keras.metrics import Precision, Recall
from tensorflow.keras import losses

model_vgg16_places365 = VGG16_Places365(weights='places', input_shape=(224, 224, 3))


def valid_generator(target_image_size, valid_dataset, valid_images_location):
    valid_data_gen = ImageDataGenerator(rescale=1./255)
    batch_size = 32

    valid_generator_1 = valid_data_gen.flow_from_dataframe(
        target_size=target_image_size,
        dataframe=valid_dataset,
        directory=valid_images_location,
        x_col="id",
        y_col="T1",
        batch_size=batch_size
    )

    valid_generator_2 = valid_data_gen.flow_from_dataframe(
        target_size=target_image_size,
        dataframe=valid_dataset,
        directory=valid_images_location,
        x_col="id",
        y_col="T1",
        batch_size=batch_size
    )

    custom_generator = zip(valid_generator_1, valid_generator_2)
    return custom_generator
```

```
vgg19_model_location = 'vgg19_trained.keras'
vgg19_model = load_model(vgg19_model_location)

combined_model = Model(inputs=[model_vgg16_places365.input, vgg19_model.input], outputs=vgg19_model.output)

combined_model.compile(loss=losses.BinaryCrossentropy(),
                       optimizer=Adam(learning_rate=0.0001),
                       metrics=['accuracy', Precision(), Recall()])

valid_dataset = pd.read_csv('valset.csv')
valid_images_location = 'val-images/'
target_image_size = (224, 224)

evaluation = combined_model.evaluate(valid_generator(target_image_size, valid_dataset, valid_images_location))
```
|yaml|config|blogs|astrojs|decap-cms|
|javascript|astrojs|
Based on the resolution of [issue 809][0], I resolved the issue with:

```
pip install 'gdown>=5.1.0'
```

[0]: https://github.com/mlcommons/GaNDLF/pull/809
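As a quick sanity check after installing, you can verify from Python that the installed `gdown` satisfies the pinned minimum version. This is a sketch of my own (the helper name is not part of the fix), using only the standard library:

```python
from importlib.metadata import PackageNotFoundError, version


def gdown_satisfies(minimum=(5, 1, 0)):
    """Return True if gdown is installed and its version is >= minimum."""
    try:
        installed = version("gdown")
    except PackageNotFoundError:
        return False  # gdown is not installed at all
    # Compare the leading numeric components of the version string
    parts = tuple(int(p) for p in installed.split(".")[:3] if p.isdigit())
    return parts >= minimum
```

If this returns `False`, re-run the `pip install` command above inside the same environment.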
|astrojs|
|javascript|pagespeed|astrojs|image-optimization|
I'm trying to use the relu function on a matrix in MATLAB, and am calling it as `A = relu(A)`, but I keep getting the error

> Incorrect number or types of inputs or outputs for function relu.

I tried it on a scalar as well, and get the same error. What could be causing this?
MATLAB relu "Incorrect number or types of inputs or outputs for function relu"
|matlab|
> I'm allowed to use both a stencil buffer and a depth buffer for the same render pass.

You can.

> How to differentiate between them.

They are different attachment points in the render pass, one called depth and one called stencil, so what problem are you having differentiating them?

Fragments must survive both the depth test and the stencil test (and any other test you have enabled) to get rendered. Depth testing always completes first, because stencil testing requires the depth test result.

BUT a "depth bounds test" is something else entirely: it isn't a "depth test", and it doesn't interact with stencil testing at all. It's an additional test, on top of the depth test and the stencil test.

> However, I'm not totally convinced that I even need a depth buffer if I'm only using a depth bounds test and no other depth operations.

Depth bounds is a check against the depth currently in the depth buffer (*not* the depth of the incoming primitive). If you think that you don't need a depth buffer, I don't think you understand what depth bounds testing does. Without a depth buffer it can't do anything useful ...

> how can I ensure that it's interacting appropriately with the stencil buffer

See above. The depth of the current fragment is irrelevant to depth bounds testing, so I don't think the feature does what you think it does.
I am trying to migrate a web service from WCF to CoreWCF. I want to add the help page, but it doesn't look the same as the WCF one.

I want it to look like this:

[enter image description here](https://i.stack.imgur.com/FbeIL.png)

But it looks like this now:

[enter image description here](https://i.stack.imgur.com/1vnWL.png)

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IService, Service>();
    services.AddServiceModelServices();
    services.AddServiceModelMetadata();
    services.AddSingleton<IServiceBehavior, UseRequestHeadersForMetadataAddressBehavior>();
    SetCacheConnectionString();
    SetAdministratorUser();
}

public void Configure(IApplicationBuilder app)
{
    app.UseServiceModel(serviceBuilder =>
    {
        serviceBuilder.AddService<Service>(o =>
        {
            o.DebugBehavior.HttpHelpPageEnabled = true;
            o.DebugBehavior.HttpsHelpPageEnabled = true;
            o.DebugBehavior.IncludeExceptionDetailInFaults = false;
        });
        serviceBuilder.AddServiceEndpoint<Service, IService>(
            new BasicHttpBinding(CoreWCF.Channels.BasicHttpSecurityMode.None), "/Custom");
    });

    var sMB = app.ApplicationServices.GetRequiredService<ServiceMetadataBehavior>();
    sMB.HttpGetEnabled = true;
    sMB.HttpsGetEnabled = true;
}

[ServiceContract]
public interface IService
{
    /// <summary>
    /// Creates trips using the provided information.
    /// </summary>
    /// <param name="importRequest"></param>
    /// <returns></returns>
    [OperationContract(AsyncPattern = false)]
    [WebInvoke(Method = "POST", ResponseFormat = WebMessageFormat.Json,
        BodyStyle = WebMessageBodyStyle.Bare, UriTemplate = "addcustom")]
    ImportResponse AddCustom(Stream stream);
}
```

Here you can see the current result:

[enter image description here](https://i.stack.imgur.com/yGLR8.png)
Excel cell validation set by VBA sets incorrect data range
|excel|vba|validation|range|
# Step 1: Create a Python 3 Virtual Environment

- Conda absent:

```bash
virtualenv -p python3 <env name>
```

- Conda present, with sufficient disk space:

```bash
conda create -n <env name> python==<version> -y
```

- Conda present, with optimized disk space (the environment lives at the given path instead of in conda's default location):

```bash
conda create --prefix <./path/to/env_name> python==<version> -y
```

# Step 2: Activate the Virtual Environment

- Conda absent, on Linux:

```bash
source <env name>/bin/activate
```

- Conda absent, on Windows:

```bash
<env name>\Scripts\activate
```

- Conda present:

```bash
conda activate <env name or ./path/to/env_name>
```

# Step 3: Install the IPython Kernel Package

```bash
pip install ipykernel
pip install notebook
```

# Step 4: Register the Kernel with Jupyter

```bash
python -m ipykernel install --user --name=<env name>
```

# Rename a Jupyter notebook kernel display name

```bash
jupyter kernelspec list
```

The above command will give you a list of the installed kernels, something like:

```bash
Available kernels:
  python2          /Library/Python/2.7/site-packages/ipykernel/resources
  redisworkshop    /Users/tague/Library/Jupyter/kernels/RedisWorkshop
  bash             /Users/tague/Library/Jupyter/kernels/bash
```

The display name for a kernel is found in the **kernel.json** file in the corresponding directory for the kernel. Edit the `display_name` property in the **kernel.json** file and it will change the display name the next time you start Jupyter.
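If you prefer to script that last edit, here is a minimal Python sketch. The helper function and the path in the usage comment are my own illustrations; substitute the kernel directory printed by `jupyter kernelspec list`:

```python
import json
from pathlib import Path


def rename_kernel(kernel_dir, new_display_name):
    """Rewrite display_name in <kernel_dir>/kernel.json and return the updated spec."""
    kernel_json = Path(kernel_dir) / "kernel.json"
    spec = json.loads(kernel_json.read_text())
    spec["display_name"] = new_display_name  # the name shown in Jupyter's kernel menu
    kernel_json.write_text(json.dumps(spec, indent=1))
    return spec


# Usage (path is hypothetical; take it from `jupyter kernelspec list`):
# rename_kernel("/Users/tague/Library/Jupyter/kernels/bash", "Bash (renamed)")
```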
1. Here is an instance [using UniRx][1]:

```cs
using System;
using UniRx; // or R3
using UnityEngine;
using UnityEngine.UI;

public class ExampleWithUniRx : MonoBehaviour
{
    IDisposable stream = null;

    public void Play(Text TextUI, string max, string min)
    {
        stream?.Dispose();
        TimeSpan delay = TimeSpan.FromSeconds(3);
        var stringsArray = new string[3]
        {
            "Welcome to Number Wizard!",
            "The highest number you can pick is " + max,
            "The lowest number you can pick is " + min
        };

        // This stream emits values (0, 1, ...) every *delay*, starting from TimeSpan.Zero.
        // If you want the in-app time scale to influence the timer,
        // use Scheduler.MainThread instead.
        stream = Observable.Timer(TimeSpan.Zero, delay, Scheduler.MainThreadIgnoreTimeScale)
            .TakeWhile(x => x < stringsArray.Length)
            .Select(x => stringsArray[x])
            .Subscribe(text => TextUI.text = text);

        // Never forget to dispose the stream.
        // You can dispose in OnDestroy, for instance.
        // In a MonoBehaviour you can also call .AddTo(this) after Subscribe;
        // the stream will then be tied to the lifetime of the current gameObject.
    }

    protected virtual void OnDestroy()
    {
        stream?.Dispose();
    }
}
```

You can [use R3][2] instead of UniRx as well; it's a rework of the good old UniRx by the same author, but there are a few extra steps to install it.

2. Another approach is to [use the DOTween library][3]. It includes a `DOText` method, and DOTween also allows you to build pipelines.

[![enter image description here][4]][4]

These two approaches are more elegant and much easier to extend than approaches with coroutines and tasks, IMHO.

[1]: https://github.com/neuecc/UniRx/releases/tag/7.1.0
[2]: https://github.com/Cysharp/R3
[3]: https://assetstore.unity.com/packages/tools/animation/dotween-hotween-v2-27676
[4]: https://i.stack.imgur.com/544kZ.png
Writing XML against an XSD using `xmlschema_from_json` gives the error: "Reason: unable to select an element for decoding data, provide a valid 'path' argument"