Ah - the answer is that the model won't get called unless the request contains a `Content-Type` header :/

```
env) jhw@Justins-Air 3aa9d4f85d9709fc7cb44e886ba0808f % curl -i -H "Content-Type: application/json" -X POST https://xxx.yyy.zzz/hello-post -d "{\"message_\": \"Hello World\!\"}"
HTTP/2 400
content-type: application/json
content-length: 35
date: Fri, 15 Mar 2024 15:11:16 GMT
x-amzn-requestid: 8418b102-b1f5-432c-9e42-308947c28bce
access-control-allow-origin: *
access-control-allow-headers: *
x-amzn-errortype: BadRequestException
x-amz-apigw-id: UrS7THMvDoEEhzg=
x-cache: Error from cloudfront
via: 1.1 47c1b2a882ab8226b0b44cb0c042b982.cloudfront.net (CloudFront)
x-amz-cf-pop: LHR50-P8
x-amz-cf-id: i9TQrVxbc7fRLoC93F3o5Y_HRupAAux7f7B_TjySyddpBvpw_n3TNA==

{"message": "Invalid request body"}
```
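For completeness, the same requirement can be illustrated from Python with the standard library alone. This is a minimal sketch (the URL is the placeholder from the question, not a real endpoint); it only shows the header being attached to the request object, without actually sending it:

```python
import json
import urllib.request

# Build the POST request with an explicit Content-Type header, since the
# endpoint rejects bodies that arrive without one.
payload = json.dumps({"message_": "Hello World!"}).encode()
req = urllib.request.Request(
    "https://xxx.yyy.zzz/hello-post",   # placeholder URL from the question
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib normalizes header keys to 'Content-type' internally.
print(req.get_header("Content-type"))  # application/json
```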
I have two different ranges in Excel and I want to save the ranges in one image. I use Python. I tried to use `Union` but it throws the exception "unknown.Union". Thank you for your help.

```python
excel = win32com.client.Dispatch('Excel.Application')
excel.visible = False
wb = excel.Workbooks.Open(self.source_path)
ws = wb.Worksheets[0]

# This is what I tried
# ws.Union(ws.Range("B843:CZ847"), ws.Range("B4:CZ5")).Copy()

img = ImageGrab.grabclipboard()
imgFile = os.path.join(self.excel_target, self.excel_name)
img.save(imgFile)
```
I want to sort the values of this dictionary from low to high: For example, input: ``` average = {'ali': 7.83, 'mahdi': 13.4, 'hadi': 16.2, 'hasan': 3.57} ``` I want the output to be like this: ``` {'hasan': 3.57, 'ali': 7.83, 'mahdi': 13.4, 'hadi': 16.2} ```
How to sort a dictionary from low to high in python?
|python-3.x|dictionary|sorting|
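A minimal sketch of one standard approach: since Python 3.7 dicts preserve insertion order, so rebuilding the dict from its items sorted by value gives the desired ordering.

```python
# Sort the (key, value) pairs by value, then rebuild a dict;
# insertion order is preserved, so iteration follows the sorted order.
average = {'ali': 7.83, 'mahdi': 13.4, 'hadi': 16.2, 'hasan': 3.57}
sorted_average = dict(sorted(average.items(), key=lambda item: item[1]))
print(sorted_average)
# {'hasan': 3.57, 'ali': 7.83, 'mahdi': 13.4, 'hadi': 16.2}
```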
I am running statefulset: `prometheus-stack/prometheus-ps-prometheus` with 3 replicas (`prometheus-ps-prometheus-0`, `prometheus-ps-prometheus-1`, `prometheus-ps-prometheus-2`)

```bash
$ kubectl get statefulsets -n prometheus-stack prometheus-ps-prometheus
NAME
prometheus-ps-prometheus

$ kubectl get pods -n prometheus-stack -o wide
NAME                         STATUS    IP
prometheus-ps-prometheus-0   Running   10.17.2.249
prometheus-ps-prometheus-1   Running   10.17.3.241
prometheus-ps-prometheus-2   Running   10.17.1.6
```

I have a **headless** service addressing the statefulset:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ptn
  namespace: prometheus-stack
spec:
  clusterIP: None
  ports: [...]
  selector:
    app.kubernetes.io/name: prometheus
    prometheus: ps-prometheus
```

I am trying to access the statefulset pods using the service DNS. When I query the service DNS I get the correct IPs of the pods, but when I try to resolve the per-pod DNS names I get an error. What am I doing wrong? How can I access the pods using DNS?

```bash
$ dig ptn.prometheus-stack.svc.cluster.local

; <<>> DiG 9.18.13 <<>> ptn.prometheus-stack.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43817
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: ae0111659717eacd (echoed)
;; QUESTION SECTION:
;ptn.prometheus-stack.svc.cluster.local. IN A

;; ANSWER SECTION:
ptn.prometheus-stack.svc.cluster.local. 30 IN A 10.17.1.6
ptn.prometheus-stack.svc.cluster.local. 30 IN A 10.17.3.241
ptn.prometheus-stack.svc.cluster.local. 30 IN A 10.17.2.249

;; Query time: 4 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Wed Jan 31 19:28:51 UTC 2024
;; MSG SIZE rcvd: 241
```

```bash
$ dig prometheus-ps-prometheus-0.ptn.prometheus-stack.svc.cluster.local

; <<>> DiG 9.18.13 <<>> prometheus-ps-prometheus-0.ptn.prometheus-stack.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 20283
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 7dfbe83a77ee64ef (echoed)
;; QUESTION SECTION:
;prometheus-ps-prometheus-0.ptn.prometheus-stack.svc.cluster.local. IN A

;; AUTHORITY SECTION:
cluster.local. 10 IN SOA ns.dns.cluster.local. hostmaster.cluster.local. 1706727875 7200 1800 86400 30

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Wed Jan 31 19:29:31 UTC 2024
;; MSG SIZE rcvd: 199
```
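One common cause of NXDOMAIN for the per-pod records (an assumption here, since the StatefulSet manifest isn't shown) is that the StatefulSet's `spec.serviceName` does not match the headless Service's name: Kubernetes only creates `<pod>.<service>.<ns>.svc` records for the Service named in `serviceName` (its "governing service"). For Prometheus Operator deployments that service is typically the operator-created `prometheus-operated`, not a hand-made one. A sketch of the relevant field:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus-ps-prometheus
  namespace: prometheus-stack
spec:
  serviceName: ptn   # must equal the headless Service's metadata.name
  replicas: 3
  # selector/template omitted
```

If `serviceName` points at a different service, `dig prometheus-ps-prometheus-0.<that-service>.prometheus-stack.svc.cluster.local` should resolve while the `ptn` variant does not.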
I have a JSON file with data, and I am looking to see if a participant has completed a lesson by checking whether an object called targetTitle contains "Exit" where the pTitle object contains any one of the following: "hard", "easy", "medium", "CD", "CH", "WU" for a given lesson. Currently, the code cannot differentiate between instances where "Exit" is present at least once for **every** pTitle variant for a participant in a given lesson, and instances where it is present at least once in **any** of them. Here is example data:

```
{
  "_id": "018e3sdafdsg810c478",
  "Score": 0,
  "Participant": "mailto:ExampleEmail@gmail.com",
  "Time": "2024-03-14T21:01:53.511Z",
  "Answer Details": {
    "pID": "ac82f2b7-842b-4f34-96e7-177324359390",
    "pTitle": "Part 1 Lesson6-Main0-10easy",
    "iID": "dsfds1201-7a94-41as-bsad-a6fdsf9f428",
    "targetID": "Rad_id_145527",
    "targetTitle": "2 Exit",
    "targetNote": "Choice #1",
    "sID": 0,
    "sub": false,
    "tI": 7443,
    "sessionID": "ltrq5nzr2gs723h3g0ld",
    "sessionTime": 7439,
    "r": "FA"
  }
}
```

Here is my current python script:

```
import csv
import json
import re

def check_participant(entry):
    participant = entry.get('Participant', 'Participant information missing')
    lessons_completed = {}
    # Check if 'Answer Details' key is present
    if 'Answer Details' in entry:
        answer_details = entry['Answer Details']
        # Get 'pTitle' and 'targetTitle' if present
        p_title = answer_details.get('pTitle', 'pTitle information missing')
        target_title = answer_details.get('targetTitle', 'targetTitle information missing')
        # Extract lesson number using regex
        lesson_number_match = re.search(r'(Lesson|L)(\d+)', p_title)
        if lesson_number_match:
            lesson_number = f"Lesson {lesson_number_match.group(2)}"
            # Check conditions
            completed = False
            if any(keyword in p_title for keyword in ['WU', 'CH', 'CD', 'easy', 'medium', 'hard']) and 'Exit' in target_title:
                completed = True
            lessons_completed[lesson_number] = 'Yes' if completed else 'No'
    return participant, lessons_completed

# Read data from JSON file
with open('example test.json', 'r') as file:
    data = json.load(file)

# Process data to create rows for each participant
participant_data = {}
for entry in data:
    participant, lessons_completed = check_participant(entry)
    if participant not in participant_data:
        participant_data[participant] = lessons_completed
    else:
        participant_data[participant].update(lessons_completed)

# Specify the CSV file path
csv_file_path = "Lesson Completed.csv"

# Write data to the CSV file
with open(csv_file_path, mode='w', newline='') as file:
    writer = csv.writer(file)
    # Write header
    header = ['Participant'] + list(next(iter(participant_data.values())).keys())
    writer.writerow(header)
    # Write each row of data for each participant
    for participant, lessons in participant_data.items():
        row = [participant]
        for lesson, completed in lessons.items():
            row.append(completed)
        writer.writerow(row)

print(f"CSV file '{csv_file_path}' has been created successfully.")
```
Identifying whether one condition holds, given another condition, for a given participant in a given lesson in a dataset
|python|json|extract|
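The "all variants vs. any variant" distinction can be captured by tracking, per participant and lesson, *which* pTitle variants have seen an "Exit", and then requiring the full set rather than any single match. A hedged sketch of that aggregation idea (the helper names are hypothetical, and the demo records are made up, not real data):

```python
import re
from collections import defaultdict

REQUIRED = {'hard', 'easy', 'medium', 'CD', 'CH', 'WU'}

def variants_seen(entries):
    # (participant, lesson) -> set of variants that have had an "Exit"
    seen = defaultdict(set)
    for entry in entries:
        details = entry.get('Answer Details', {})
        p_title = details.get('pTitle', '')
        m = re.search(r'(?:Lesson|L)(\d+)', p_title)
        if not m or 'Exit' not in details.get('targetTitle', ''):
            continue
        lesson = f"Lesson {m.group(1)}"
        for kw in REQUIRED:
            if kw in p_title:
                seen[(entry['Participant'], lesson)].add(kw)
    return seen

def completed_all(seen, participant, lesson):
    # Completed only if EVERY required variant has had an Exit
    return seen.get((participant, lesson), set()) >= REQUIRED

# tiny demo with hypothetical records: participant 'p' exits every variant
demo = [{'Participant': 'p',
         'Answer Details': {'pTitle': f'Part 1 Lesson6-Main0-10{kw}',
                            'targetTitle': '2 Exit'}}
        for kw in REQUIRED]
seen = variants_seen(demo)
print(completed_all(seen, 'p', 'Lesson 6'))  # True
```

Swapping `>=` (superset) for a non-empty check gives the "at least one variant" semantics, so both notions stay explicit.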
I am using different numerical methods to understand the results yielded from different types of integrators at different time steps. I am comparing the performance of each integration method by calculating the Mean Absolute Error of the predicted energy against the analytical solution:

$$ MAE = \frac{1}{n} \sum_{i=1}^{n}\left| y_{\text{analytical},i} - y_{\text{numerical},i}\right| $$

Then for different time steps I calculate the resulting MAE and plot the results in a log-log plot, as shown below.

[log (MAE) vs. log(Time_step)](https://i.stack.imgur.com/ozMkL.png)

The relation between MAE and time step matches my expectations (the Verlet method scales quadratically and the Euler-Cromer method scales linearly), but I notice that the Verlet method has a turning point at about 10^(-4) s. This seems slightly too large; I was expecting a turning point at time steps closer to 10^(-8) s, as I am using numpy's float64, which gives about 15 to 17 decimal digits of precision.

I went on to plot the maximum and minimum errors obtained for each time step (excluding iteration 0, as those are the initial conditions, which are identical for the numerical and analytical methods), and these are the results:

[log (Max Err) vs. log(Time_step)](https://i.stack.imgur.com/Thyjd.png)
[log (Min Err) vs. log(Time_step)](https://i.stack.imgur.com/dXxpA.png)

When plotting the maximum error I obtain a minimum similar in value to the previous plot, but plotting the minimum obtained error (which always occurred in the first few iterations after the initial conditions) I find the errors flatten out at 10^(-4) s and approach about 10^(-15) J in the energy. Given this flattening of the minimum errors, it makes sense that going below 10^(-4) s does not increase the precision of the Verlet method, but I can't explain why the maximum errors grow after this point.

An explanation that comes to mind is the round-off error caused by float64, which should appear when values reach about 10^(-15) to 10^(-17). I have manually checked the position, velocity and acceleration that result from running the Verlet method, but their lowest values are of order 10^(-9), very far from 10^(-15).

(1) Is it possible that I am introducing a round-off error when I calculate the residual between the analytical and the Verlet results?

(2) Are there other, more appropriate ways of calculating the error? (I thought MAE was a good fit because the Verlet method oscillates about the true system values.)

(3) Are there tweaks that could expose possible flaws in my analysis? I have looked at my code extensively and cannot find any bugs; furthermore, the Verlet method I coded does have an error that scales quadratically with the time step, which makes me think the code itself is fine. (Maybe a possible attempt would be to use float128 throughout all calculations and see whether the above plots differ?)

Thanks in advance for any help with the above questions.
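On question (1), a quick way to sanity-check the round-off hypothesis: float64 has a *relative* precision of about 2.2e-16, so the *absolute* error floor of a residual scales with the magnitude of the quantities being subtracted, not with a fixed threshold. A generic illustration (not tied to any particular integrator):

```python
import sys

# Relative precision of float64 (machine epsilon).
print(sys.float_info.epsilon)   # ~2.220446049250313e-16

# Subtracting two nearly equal float64 values silently loses increments
# smaller than about half a ULP of the larger value: for energies of
# order 1 J the residual floor is therefore around 1e-16 .. 1e-15 J.
print((1.0 + 1e-17) - 1.0)      # 0.0 -- the 1e-17 increment is gone
```

So the relevant magnitude is that of the *energy* entering the residual, not the positions or velocities themselves, which is consistent with minimum errors flattening near 10^(-15) J.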
You can also initialise it to an empty value: an empty string in the case of a string, or `false` in the case of a boolean.
I have a table called "Contact" with the ID and Status of the contact, and another table called "Product" with the contact ID, product status and ProductID. I'm trying to find contacts that don't have any active product. How do I do that? I have used the query below:

```sql
select c.ID
from Contact c
where c.id not in (select p.contactid from product p where p.product_status = 'Active')
```

but I'm getting both ID 1234 and 1223, where in theory I should only get 1223. How do I tweak my script to return only contacts that don't have any active product?

Contact table
[![Contact Table][1]][1]
Product Table
[![Product table][2]][2]

[1]: https://i.stack.imgur.com/ZQ46m.png
[2]: https://i.stack.imgur.com/nsRCE.png
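Worth checking the exact `product_status` values (case and whitespace), and note that `NOT IN` behaves surprisingly when the subquery returns any `NULL` `contactid`. The more robust pattern for "no active product" is `NOT EXISTS`. A self-contained illustration with hypothetical data, using sqlite3 so it can be run anywhere:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE contact (id INTEGER, status TEXT)")
cur.execute("CREATE TABLE product (contactid INTEGER, product_status TEXT, productid INTEGER)")
# 1234 has one active product; 1223 has none.
cur.executemany("INSERT INTO contact VALUES (?, ?)",
                [(1234, 'Current'), (1223, 'Current')])
cur.executemany("INSERT INTO product VALUES (?, ?, ?)",
                [(1234, 'Active', 1), (1234, 'Inactive', 2), (1223, 'Inactive', 3)])

rows = cur.execute("""
    SELECT c.id FROM contact c
    WHERE NOT EXISTS (
        SELECT 1 FROM product p
        WHERE p.contactid = c.id AND p.product_status = 'Active'
    )
""").fetchall()
print(rows)  # [(1223,)] -- only the contact with no active product
```

If the `NOT IN` version returns both IDs against your real data, the likeliest culprit is that 1234's rows don't literally match `'Active'` (different case, trailing spaces, or a different status value).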
Kubernetes StatefulSet's Stable Network ID
|kubernetes|dns|coredns|
My query:

```sql
select custname,
       case when date < '11/26/2023' then -1 else datepart(wk, date) end 'week#',
       sum(amount) sales,
       count(salesid) orders
from SalesTable
inner join CustomerTable c on salestable.CustID = c.CustID
where date < '1/27/2024' and c.CustID = 10285 or c.CustID = -2
group by c.custid, custname, [address],
         case when date < '11/26/2023' then -1 else datepart(wk, date) end,
         case when date < '11/26/2023' then '11/25/2023' else DATEADD(dd, 7 - (DATEPART(dw, date)), date) end
order by 1, 2
```

gets all customers' sales (sum of amount, week number, number of orders), one row per week, like:

| custname | week# | sales     | orders |
|----------|-------|-----------|--------|
| CustAAA  | -1    | 974697.41 | 62013  |
| CustAAA  | 1     | 10.01     | 5      |
| CustAAA  | 2     | 10        | 2      |
| CustAAA  | 2     | 372.95    | 11     |
| CustAAA  | 3     | 70.86     | 13     |
| CustAAA  | 3     | 0         | 3      |
| CustAAA  | 4     | 8.08      | 2      |
| CustAAA  | 5     | 20        | 6      |
| CustAAA  | 48    | 0         | 38     |
| CustAAA  | 49    | 84.27     | 2      |
| CustXYZ  | -1    | 12.12     | 1      |
| CustXYZ  | 1     | 22.59     | 1      |
| CustXYZ  | 4     | 117.9     | 1      |
| CustXYZ  | 48    | 19.3      | 1      |

[enter image description here](https://i.stack.imgur.com/3qC7j.png)

How do I PIVOT this to one row per customer, with each 'week' number becoming a pair of columns -> amount and -> orders, then the next week number, and so on, like this example: [enter image description here](https://i.stack.imgur.com/Z8sHj.png)
"root element" is not really a thing here. The browser will see the component deeply embedded in the layout and page. You can make your own root:

```html
<div class="root">
  <div class="foo">
    <div>Hello</div>
    <div>World</div>
  </div>
  <span class="foo2">
    Second root element
  </span>
</div>
```

and then apply a style to the 'root level' elements with

```css
.root > * {
  margin: 1em;
  ...
}
```

The [child combinator (>)][1] means the style only applies to the direct children of a `.root` element. With the descendant combinator (just a space) the style applies to all descendants.

[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/Child_combinator
While trying without Service. Everything works on the emulator, but not on Xiaomi. It may be possible to update using FLAG_UPDATE_CURRENT, and then there will be no need to cancel when starting a new intent. AlarmManagement ``` class AlarmManagement private constructor(/*private val context: Context*/) { companion object { private var instance: AlarmManagement? = null @JvmStatic fun getInstance(): AlarmManagement { if (instance == null) { instance = AlarmManagement() } return instance!! } } val alarmManager = MyApplication.getAppContext().getSystemService(Context.ALARM_SERVICE) as AlarmManager val database = Database.getInstance(MyApplication.getAppContext()) val exactAlarmSettingStrategy: ExactAlarmSettingStrategy = SetAlarmClock() fun setOrUpdateAlarm(clock: Clock) { fun clockValuesAreCorrectForAlarmSetting(): Boolean { if (!clock.isActive) return false if (clock.id == null) { Camp.log("error alarm", "A clock with id = null was passed for setting the alarm") return false } return true } if (!clockValuesAreCorrectForAlarmSetting()) return val requestCode = clock.id!!.toInt() cancelAlarmIntent(requestCode) val alarmIntent = Intent(MyApplication.getAppContext(), AlarmReceiver::class.java).apply { action = "ALARM" putExtra("clockId", clock.id) } if (!exactAlarmIsAllowed()) return val pendingIntent = PendingIntent.getBroadcast( MyApplication.getAppContext(), requestCode, alarmIntent, PendingIntent.FLAG_IMMUTABLE ) val calendar = java.util.Calendar.getInstance().apply { set(Calendar.HOUR_OF_DAY, clock.triggeringHour) set(Calendar.MINUTE, clock.triggeringMinute) } if (calendar.timeInMillis <= System.currentTimeMillis()) { calendar.add(Calendar.DAY_OF_YEAR, 1) } exactAlarmSettingStrategy.setExactAlarm(alarmManager, pendingIntent, calendar) Camp.log("alarm", "AlarmClock was set for $clock") } fun exactAlarmIsAllowed(): Boolean { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) { return if (alarmManager.canScheduleExactAlarms()) { true } else { Camp.log( "alarm info", 
"Check result: permission for scheduling exact alarms is missing" ) false } } else { Camp.log( "alarm info", "Check result: permission for scheduling exact alarms is present" ) return true } } fun activateInactiveAlarmIntentsWhoseClocksAreActive() { val clocks = database.getAllClocks() clocks.forEach { setOrUpdateAlarm(it) } } fun cancelAlarmIntent(intentRequestCode: Int) { val intent = Intent(MyApplication.getAppContext(), AlarmReceiver::class.java) val pendingIntent = PendingIntent.getService(MyApplication.getAppContext(), intentRequestCode, intent, PendingIntent.FLAG_NO_CREATE) if (pendingIntent != null) { alarmManager.cancel(pendingIntent) } } } ``` Receiver ``` package com.camporation.passwordalarmclock import android.content.BroadcastReceiver import android.content.Context import android.content.Intent import java.util.Calendar class AlarmReceiver : BroadcastReceiver() { lateinit var triggeringClock: Clock val intentProcessingCompletionMessage = "Intent processing is complete. " override fun onReceive(context: Context, intent: Intent) { Camp.log("AlarmReceiver received intent") if (intent.action == "ALARM") { Camp.log("alarm","AlarmReceiver received alarm intent") val calendar = Calendar.getInstance() val today = calendar.get(Calendar.DAY_OF_WEEK) val database = Database.getInstance(MyApplication.getAppContext()) fun intentProcessing() { fun gettingClock(): Clock? { val intentClockId = intent.getSerializableExtra("clockId") as Long? if (intentClockId == null) { Camp.log("error alarm", "Intent clock id == null") return null } val dbClock = database.getClockById(intentClockId) if (dbClock == null) { Camp.log("error alarm", "Clock with the same id as intentClock was not found") return null } return dbClock } val clock = gettingClock() if (clock == null) { Camp.log( "error alarm", "An error occurred while obtaining the alarm object. 
$intentProcessingCompletionMessage" ) return } if(!clock.isActive){ Camp.log( "error alarm", "For an unforeseen reason, an inactive alarm went off. The intent of this alarm is canceled and will not trigger anymore. $intentProcessingCompletionMessage" ) return } fun todayIsRightDayForAlarm(): Boolean { if (clock.alarmRepeatingMode == AlarmRepeatingMode.ONETIME || clock.alarmRepeatingMode == AlarmRepeatingMode.EVERYDAY) { return true } if (clock.alarmRepeatingMode == AlarmRepeatingMode.SELECTDAYS) { return (clock.triggeringWeekDays?.and((1 shl (today - 1)))) != 0 } Camp.log( "error alarm", "Checking the day of the week for triggering this type of alarm was not provided" ) return false } if (!todayIsRightDayForAlarm()) { Camp.log( "Alarm info", "The current day of the week does not match the triggering days of the accepted alarm. $intentProcessingCompletionMessage" ) return } fun updateClockInDatabase(){ when (clock.alarmRepeatingMode) { AlarmRepeatingMode.ONETIME -> { clock.isActive = false val idOrResult = database.insertOrUpdateClock_IdOrResult(clock) Camp.log("One-time alarm modified: isActive = false. It is added to the database with id/result: $idOrResult") } AlarmRepeatingMode.EVERYDAY, AlarmRepeatingMode.SELECTDAYS -> { // currently not required for these types } } } updateClockInDatabase() MainActivity.currentActivity?.onDatabaseUpdatingInReceiver() AlarmManagement.getInstance().activateInactiveAlarmIntentsWhoseClocksAreActive() Camp.log("alarm allAlarms receiver","The Receiver activated all inactive intents with active alarms") fun activateAlarm() { Camp.log("Alarm info", "Triggering the alarm. Starting the LockScreenActivity. 
$intentProcessingCompletionMessage") MyApplication.getInstance().startLockScreenActivity(clock) } activateAlarm() return } intentProcessing() } if (intent.action == "android.intent.action.BOOT_COMPLETED") { AlarmManagement.getInstance().activateInactiveAlarmIntentsWhoseClocksAreActive() } } } ``` ExactAlarmSetting ``` interface ExactAlarmSettingStrategy { fun setExactAlarm(alarmManager: AlarmManager, pendingIntent: PendingIntent, calendar: Calendar) } class SetAlarmClock : ExactAlarmSettingStrategy { override fun setExactAlarm( alarmManager: AlarmManager, pendingIntent: PendingIntent, calendar: Calendar ) { val info = AlarmManager.AlarmClockInfo(calendar.timeInMillis, pendingIntent) alarmManager.setAlarmClock(info, pendingIntent) } } class SetExactAndAllowWhileIdle : ExactAlarmSettingStrategy { override fun setExactAlarm( alarmManager: AlarmManager, pendingIntent: PendingIntent, calendar: Calendar ) { alarmManager.setExactAndAllowWhileIdle( AlarmManager.RTC_WAKEUP, calendar.timeInMillis, pendingIntent ) } } ``` Manifest xml ``` <!-- For using ExactAlarms and SetAlarmClock --> <!-- https://developer.android.com/develop/background-work/services/alarms/schedule#exact-permission-declare --> <!-- Requires either SCHEDULE_EXACT_ALARM or USE_EXACT_ALARM --> <!-- Check permission existence using canScheduleExactAlarms() --> <!-- Provided manually for Android 12 and above --> <!-- May be revoked by the user --> <uses-permission android:name="android.permission.SCHEDULE_EXACT_ALARM"/> <!-- Provided automatically, available from Android 13 and above --> <!-- Cannot be revoked by the user --> <!-- Cannot use (not critical), as working with earlier Android versions --> <!-- <uses-permission android:name="android.permission.USE_EXACT_ALARM"/> --> <uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS"/> <receiver android:name=".AlarmReceiver" android:exported="false"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED"/> 
<category android:name="android.intent.category.DEFAULT"/> </intent-filter> </receiver> ```
I have made two Pinia stores in my project and the stores work. Previously, when the first Pinia store was created, everything worked well. You can see that WebStorm doesn't see the STEP property. I don't know where the problem is: in Pinia and Vue, or in WebStorm. I'd be glad to hear any suggestions for solving this.

[![enter image description here][1]][1]

Below you can see my first store; it works well, but I don't see the property in my IDE either.

[![enter image description here][2]][2]

I have tried updating WebStorm, but that didn't fix it. Maybe someone knows what happened? In these examples I'm trying to get access to the `step` property and `users`.

[1]: https://i.stack.imgur.com/ucIEy.jpg
[2]: https://i.stack.imgur.com/Oxvhy.jpg
I can't access the data inside one file/store
- `For nRow = 2 To nLastRow`: the cells (from row 2 to row 7) are blank in Col A. There is a `blank` item in the `Dict` object. `objSheet.Name = varColumnValue` raises a runtime error if `varColumnValue = ""`.
- `nRow = 2` is used in your code. I guess there is a header line. Change the start number of the `For` loop to 9.

```vb
For nRow = 9 To nLastRow
    strColumnValue = objWorksheet.Range("A" & nRow).Value
    If objDictionary.Exists(strColumnValue) = False Then
        objDictionary.Add strColumnValue, 1
    End If
Next
```

---

Update:

```vb
Option Explicit

Sub SplitSheetIntoMultipleWorkbooksBasedOnColumn()
    Dim objWorksheet As Worksheet
    Dim nLastRow As Long, nRow As Long
    Dim nColCnt As Long, rowRng As Range
    Dim strColValue As String, savePath As String
    Dim objDic As Object, i As Long
    Dim varColValues As Variant
    Dim varColValue As Variant
    Dim objExcelWorkbook As Workbook
    Dim objSheet As Worksheet
    Const H_ROW = 8 ' header row#
    Set objWorksheet = ActiveSheet
    With objWorksheet
        nLastRow = .Range("A" & .Rows.Count).End(xlUp).Row
        nColCnt = .Cells(.Columns.Count, 1).End(xlToLeft).Column
        Set objDic = CreateObject("Scripting.Dictionary")
        ' Loop through data
        For nRow = H_ROW + 1 To nLastRow
            strColValue = Trim(.Range("B" & nRow).Value)
            If Len(strColValue) > 0 Then
                ' Store data range in Dict
                Set rowRng = .Cells(nRow, 1).Resize(1, nColCnt)
                If objDic.Exists(strColValue) Then
                    Set objDic(strColValue) = Union(rowRng, objDic(strColValue))
                Else
                    Set objDic(strColValue) = rowRng
                End If
            End If
        Next
    End With
    varColValues = objDic.Keys
    For i = LBound(varColValues) To UBound(varColValues)
        varColValue = varColValues(i)
        Set objExcelWorkbook = Workbooks.Add
        Set objSheet = objExcelWorkbook.Sheets(1)
        objSheet.Name = objWorksheet.Name
        ' Copy header and above rows
        objWorksheet.Rows("1:" & H_ROW).Copy objSheet.Range("A1")
        ' Copy data rows
        objDic(varColValue).Copy objSheet.Cells(H_ROW + 1, 1)
        ' Save the new workbook in a specific location
        savePath = "H:\01 - Merit\2024 Merit\Merit Spreadsheets\AP Copies\" ' Specify the save path here
        objExcelWorkbook.SaveAs savePath & varColValue & ".xlsx"
        objExcelWorkbook.Close
    Next
End Sub
```
I want to see my GUI immediately and initialize the application on an asynchronous thread. For this I am using `QFuture`. My code:

```cpp
// main.cpp
int main(int argc, char *argv[])
{
    // create config and credentials
    QApplication app(argc, argv);
    LoginForm loginForm(config, credentials);
    int result = app.exec();
    return result;
}
```

Then in the login form I am using `QFuture`:

```cpp
void initLauncher(const Aws::Client::ClientConfiguration &config,
                  const Aws::Auth::AWSCredentials &credentials)
{
    // slow function
}

LoginForm::LoginForm(const Aws::Client::ClientConfiguration &config,
                     const Aws::Auth::AWSCredentials &credentials,
                     QWidget *parent)
    : QWidget(parent), awsConfig(config), awsCredentials(credentials)
{
    ...
    QFuture<void> future = QtConcurrent::run([this, &config, &credentials]() {
        initLauncher(config, credentials);
    });
    QFutureWatcher<void> *watcher = new QFutureWatcher<void>(this);
    connect(watcher, SIGNAL(finished()), this, SLOT(initializationFinished()));
    // delete the watcher when finished too
    connect(watcher, SIGNAL(finished()), watcher, SLOT(deleteLater()));
    watcher->setFuture(future);
    // creating buttons
    show();
```

Where `LoginForm` is a `QWidget` subclass.

**Problem** -- I see the GUI only after the `initLauncher()` function is done. How can I show the GUI before `initLauncher()` finishes?
I would like to re-map the "remove current line" shortcut (Ctrl+L) in Notepad++ to be the same as Visual Studio (Ctrl+X), but I can't find an entry for it in the Shortcut mapper. Is this possible? Reason: I want my right hand free for the numeric keypad, and Ctrl+L is too long a stretch for my left hand. I can't even find a menu item corresponding to "remove current line", which might explain why it's not in the mapper. Seems a bit of an oversight, as this is a very common function. I am using v8.6.2 (I also checked earlier versions and it seems to be missing there too).
Re-map the "remove current line" shortcut in Notepad++
|notepad++|
|android|google-play|
SwiftUI is smart enough to know which parts of the view need to be redrawn when observed data changes. So assigning an updated version of a `Post` object to the same index in the `posts` array will only cause the views that depend on the changed data to be rerendered.

```swift
class PostManager: ObservableObject {
    @Published var posts = [Post]()
    // ...
    func update(with newPost: Post) {
        if let postIndex = posts.firstIndex(where: { $0.id == newPost.id }) {
            posts[postIndex] = newPost
        }
    }
}
```

If `newPost` contains only an updated message, views that display that message will rerender. If `newPost` is an exact copy of the original post, nothing will be updated.

With this model, actually retrieving updated `Post` data can be done anywhere, as long as you pass the data back into the `update(with:)` method when it's received. It's hard to say how exactly without knowing more about your project, but at least the API calls are now separated from your manager. For example, using a dependency injection pattern would look something like this:

```swift
public protocol API: Sendable {
    // ...
    func updatePost(withId: Post.ID, data: Post) -> Post
}

class PostManager: ObservableObject {
    func update(with newPost: Post, on api: some API) {
        let updatedPost = api.updatePost(withId: newPost.id, data: newPost)
        if let postIndex = posts.firstIndex(where: { $0.id == newPost.id }) {
            posts[postIndex] = updatedPost
        }
    }
}
```

There is a bit of redundancy here because, as it stands, the data sent over the API for updates is of the same type as the data used in your views (both are just the `Post` struct). But as the scope of your project increases, you may want to separate those (by having, for example, a `Post.Model` struct that is `Codable`, freeing the `Post` struct of that restriction).
I want to run a scheduled job against one of my APIs. All of my APIs are authenticated, meaning they expect a Bearer token in the Authorization header, which is dynamically generated by Firebase Auth every time we hit the API from the app. Suppose I want to run `app.get("/hi", fn(req, res))`, say once a day. How do I do that? Currently I have tried Google Cloud Scheduler and Google cron jobs, but the problem with both is that they cannot carry dynamic headers with them. Is there a way I can generate a dynamic Bearer auth token and send it as a header in the cron job request? Or can a service account be useful here? Or is there an alternative other than cron jobs and Google Cloud Scheduler?
Run scheduler on authenticated api
|node.js|google-cloud-platform|firebase-authentication|cron|google-cloud-scheduler|
```matlab
% Initialize daytime and nighttime matrices with the correct size
daytime_matrix = zeros(size(ST_data, 1), size(ST_data, 2), num_hours);
nighttime_matrix = zeros(size(ST_data, 1), size(ST_data, 2), num_hours);

% Loop through each day
for day = 1:num_days
    % Calculate indices for daytime and nighttime hours
    daytime_indices = mod(daytime_start-1, num_hours_per_day) + 1;     % Wrap around to beginning of day if needed
    nighttime_indices = mod(nighttime_start-1, num_hours_per_day) + 1; % Wrap around to beginning of day if needed

    % Update daytime and nighttime matrices
    daytime_matrix(:,:,daytime_indices) = ST_data(:,:,daytime_indices);
    nighttime_matrix(:,:,nighttime_indices) = ST_data(:,:,nighttime_indices);

    % Update start timesteps for the next day
    daytime_start = daytime_start + num_hours_per_day;
    nighttime_start = nighttime_start + num_hours_per_day;
end

% Remove trailing zeros from matrices to ensure correct size
daytime_matrix = daytime_matrix(:,:,1:num_hours);
nighttime_matrix = nighttime_matrix(:,:,1:num_hours);
```
What indices are you willing to calculate? If you want to find the Mean Phylogenetic Distance (MPD) or Mean Nearest Taxon Distance (MNTD), you can use the package `picante`:

```r
ses.mpd(comm, phy.dist, null.model = "richness",
        abundance.weighted = FALSE, runs = 999)
```

The `runs` argument is the number of random communities generated. Hope this is useful for you.
|java|spring-mvc|session|tomcat|redis|
I have to write a Vue3 component for a website which requires authentication via Active Directory. The server is Apache, and the authentication is done with GSSAPI. The endpoint is a Laravel 10 API which works perfectly on my local machine (without the authentication). In Vue I try to send data to the API using fetch(). This fails because of the missing authentication; I always get a 401. Here's the code snippet for fetch:

```
fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json", "Accept": "application/json" },
  body: JSON.stringify({
    pageid: pageid,
    template: sectionTemplate.value,
    order: order,
    user: user,
  })
})
```

I used Insomnia to test it with and without authentication. When I do it with the Vue component, the entries in the log file look the same, except they also contain the referrer page and the browser's user agent. Here's an example of what the access.log looks like:

```
12.34.56.78 - - [26/Mar/2024:13:52:35 +0100] "POST /api/section HTTP/1.1" 401 671 "-" "-"
```

With authentication in Insomnia enabled I get:

```
12.34.56.78 - AD-Username [26/Mar/2024:13:53:31 +0100] "POST /api/section HTTP/1.1" 200 320 "-" "-"
```

How can I solve the problem? What do I need to send with my request in Vue to tell the webserver that I'm already authenticated?
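One thing worth checking (an assumption, since the deployment details aren't shown): `fetch()` only sends ambient credentials (cookies, HTTP auth such as Negotiate) on cross-origin requests when you opt in via the `credentials` option; the default is `"same-origin"`. A hedged sketch, building the options object separately so the credentials mode is explicit. Whether Apache/GSSAPI then accepts the request also depends on the browser's Negotiate policy and on CORS configuration:

```javascript
// Sketch: explicitly opt in to sending credentials with the request.
// The payload fields mirror the question; values here are placeholders.
const options = {
  method: "POST",
  credentials: "include",   // send cookies/HTTP auth even cross-origin
  headers: { "Content-Type": "application/json", "Accept": "application/json" },
  body: JSON.stringify({ pageid: 1, template: "t", order: 1, user: "u" }),
};
console.log(options.credentials);
// in the component: fetch(url, options)
```

If the API lives on a different origin than the Vue app, the server must also answer with `Access-Control-Allow-Credentials: true` and an explicit (non-wildcard) `Access-Control-Allow-Origin` for credentialed requests to succeed.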
|oracle-database|database-administration|impdp|
I was using the PyTorch function `torch.nn.parallel.DistributedDataParallel` to run batches on multiple GPUs. However, I found the output after `torch.distributed.all_gather(gather_outputs, out)` is in the wrong order. For example, the data order is 0, 1, 2, 3, but the output order is 0, 1, 3, 2. Here is my code:

```
model = torch.nn.parallel.DistributedDataParallel(model, broadcast_buffers=False)
outputs = []
for inp, label in test_loader:
    inp = inp.cuda()
    label = label.cuda()
    out = model(inp)
    gather_outputs = [torch.zeros_like(out) for _ in range(torch.distributed.get_world_size())]
    torch.distributed.all_gather(gather_outputs, out)  # not ordered
    outputs.extend(gather_outputs)
```
pytorch all_gather gives wrong output order
|pytorch|distributed|
I am using SystemC to model a packet-based network. There are two parallel threads, each processing packets of random size. After each packet, I'd like to check whether the other thread happens to be in the same state (at a packet boundary) to avoid a conflict. Any idea how to detect that the two events coincide?
systemc: how to detect two events happen at the same time
|events|systemc|
I have a dataframe, df1, which is data manipulated from an imported Excel file, and I need to export it back out with the specific formatting the original file had. It has many columns, but essentially:

- if the 'Subproject' column contains NA1, that whole row should be yellow (#FFFF00);
- if the 'Subproject' column has a code starting with SE followed by any number (e.g. SE235 or SE062), the whole row should be red (#FF0000);
- if the 'Sample_Thaws' column contains any number other than 0 (i.e. 1 to 10), and the code in the 'Subproject' column starts with RE and not SE, the whole row should be blue (#0099FF).

I can export df1 without the formatting, but don't know how to add it; I've been trying to use the openxlsx library. Alternatively, is there a way when reading in the Excel file to read the row colours and produce an annotation column of them, so that after export I could use Excel's conditional formatting to put the colours back in?

The code below is what I've been trying, but when I use saveWorkbook(), no file is created and I'm unsure why. I also didn't add an argument for the fact that rows in blue need the Subproject code to start with RE. Sorry this is a bit long!
```
# Read in the original data
df1 <- read.xlsx("1946_P2_master.xlsx")
# (I then did the data manipulation and assigned the dataframe back to df1)

# Create a workbook
wb <- createWorkbook()

# We need colours such that SE in the Subproject column gives red, NA1 in the
# same column gives yellow, and any number but 0 in Sample_Thaws gives blue
yellow_rows <- which(df1$Subproject == "NA1")
red_rows <- which(grepl("^SE\\d+", df1$Subproject))
blue_rows <- which(df1$Sample_Thaws != 0)

# Add a worksheet
addWorksheet(wb, "Sheet1")

# Write data to the worksheet
writeData(wb, "Sheet1", df1)

# Create styles for yellow, red, and blue
yellow_style <- createStyle(fgFill = "#FFFF00")
red_style <- createStyle(fgFill = "#FF0000")
blue_style <- createStyle(fgFill = "#0099FF")

# Apply styles to the respective rows
styles_and_rows <- list(
  list(style = yellow_style, rows = yellow_rows),
  list(style = red_style, rows = red_rows),
  list(style = blue_style, rows = blue_rows)
)

# Loop through the list of styles and rows
for (style_row_pair in styles_and_rows) {
  style <- style_row_pair$style
  rows <- style_row_pair$rows
  # Check if rows are not empty; apply style to each row
  if (length(rows) > 0) {
    for (row in rows) {
      addStyle(wb, sheet = "Sheet1", style = style, rows = row + 1, cols = 1:ncol(df1))
    }
  }
}

# Write the dataframe with applied styles to an Excel file
saveWorkbook(wb, "formatted_data.xlsx")
```

dput(head(df1)) returns the following (example data):

```
structure(list(Sample_ID = c("3330_534-20210403 RE277.4 ", "3330_534-20210403 RE278.2 1 of 15", "3330_534-20210403 RE278.2 2 of 15", "3330_534-20210403 RE278.2 3 of 15", "3330_534-20210403 RE278.2 4 of 15"), Sample_Project_Code = c("3330_", "3330_", "3330_", "3330_", "3330_"), Sample_Original_ID = c("534-20210403", "534-20210403", "534-20210403", "534-20210403", "534-20210403" ), Sample_Part = c(NA, "1 of 15", "2 of 15", "3 of 15", "4 of 15" ), Original_Batch_Code = c("RE277.4",
"RE278.2", "RE278.2", "RE278.2", "RE278.2"), Subproject = c("RE277.4", "RE278.2", "RE278.2", "RE278.2", "RE278.2"), LastAction = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), Date_Of_Import = structure(c(18720, 18720, 18720, 18720, 18720), class = "Date"), ItemType = c("A", "B", "B", "B", "B"), Sample_Container_Type = c("card", "2ml tube ", "2ml tube ", "2ml tube ", "2ml tube "), Sample_Thaws = c(0, 0, 0, 0, 0), Age = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), Subject_Sex = c("M", "M", "M", "M", "M"), named_person = c("Kay", "Kay", "Kay", "Kay", "Kay"), Researcher = c("Bee", "Bee", "Bee", "Bee", "Bee"), Technician = c("Jay", "Jay", "Jay", "Jay", "Jay"), Identifier = c("ACR", "ACR", "ACR", "ACR", "ACR"), PPL_Sender = c("Bee", "Bee", "Bee", "Bee", "Bee" ), Extraction_Date = structure(c(18720, 18720, 18720, 18720, 18720), class = "Date"), Sample_PLPI = c(340, 65, 65, 65, 65), Date_Sent = structure(c(18720, 18720, 18720, 18720, 18720 ), class = "Date"), Date_Received = structure(c(18720, 18720, 18720, 18720, 18720), class = "Date"), D = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), Ethics_Code = c("33/333/3", "33/333/4", "33/333/5", "33/333/6", "33/333/7"), Sample_Volume = c(NA, 500, 500, 500, 500), Method = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), Comments = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), Row = c(20, 34, 34, 35, 35), Col = c("A", "K", "L", "A", "B"), Level5Name = c("Book 18", "Shelf 14", "Shelf 14", "Shelf 14", "Shelf 14"), Level4Name = c("Compartment B", "Compartment E", "Compartment E", "Compartment E", "Compartment E" ), Level3Name = c("Freezer 8", "Freezer 7", "Freezer 7", "Freezer 7", "Freezer 7"), Level2Name = c("Ground Floor", "Ground Floor", "Ground Floor", "Ground Floor", "Ground Floor" ), Level1Name = c("AH", "AH", "AH", "AH", "AH")), row.names = c(NA, 5L), class = "data.frame")```
Indeed, the solution with `print_r` and its second argument set to `true` is the simplest. But I would do:

```
$ret = htmlentities(print_r($some_array, true));
$ret = str_replace(array("\n"), array('<br>'), $ret);
printf("<br>Result is: <br>%s<br>", $ret);
```

But that is up to you all.
You can use a flag, but a rookie mistake is to exit only the inner loop; remember you need a double `break`. Another practice that is not recommended, but sometimes necessary in very performance-sensitive code, is `goto`. My favourite is a function with a `return`:

```
#include <iostream>

// Don't clone the variables; use references
int calc(int &row, int &cols, int &max) {
    int sum = 0;
    for (int i = 0; i < row; i++) {
        for (int j = 0; j < cols; j++) {
            sum += i + j;
            if (sum > max) {
                return sum;
            }
        }
    }
    return sum;
}

int main() {
    int rows = 1000;
    int cols = 1000;
    int max = 1000;
    int result = calc(rows, cols, max);
    std::cout << "Result: " << result;
    return 0;
}
```

An alternative version, a little more modular:

```
#include <iostream>

// Don't clone the variables; use references
void calc(int &row, int &cols, int &max, int &result) {
    int sum = 0;
    for (int i = 0; i < row; i++) {
        for (int j = 0; j < cols; j++) {
            sum += i + j;
            if (sum > max) {
                result = sum;
                return;
            }
        }
    }
    result = sum;
}

int main() {
    int rows = 1000;
    int cols = 1000;
    int max = 1000;
    int result = 0;
    calc(rows, cols, max, result);
    std::cout << "Result: " << result;
    return 0;
}
```
I have a custom function that splits my data into training and testing sets based on various criteria and rules. I'd like to use this function in a tidymodels workflow together with `fit_resamples`. However, even when I make my list look like a list made with `vfold_cv`, it does not seem to work. The example code I am using:

```
data(ames, package = "modeldata")

split_data <- function(df, n) {
  set.seed(123) # for reproducibility
  df$id <- seq.int(nrow(df))
  list_of_splits <- list()
  for(i in 1:n) {
    train_index <- sample(df$id, size=ceiling(nrow(df)*.8))
    train_set <- df[train_index,]
    test_set <- df[-train_index,]
    list_of_splits[[i]] <- list(train_set = train_set, test_set = test_set)
  }
  return(list_of_splits)
}

splits <- split_data(ames, 5)

resamples <- map(splits, ~rsample::make_splits(
  x = .$train_set |> select(colnames(.$test_set)),
  assessment = .$test_set
))
names(resamples) <- paste0("Fold", seq_along(resamples))
resamples <- tibble::tibble(splits = resamples, id = names(resamples))

lm_model <- linear_reg() %>% set_engine("lm")

lm_wflow <- workflow() %>%
  add_model(lm_model) %>%
  add_formula(Sale_Price ~ Longitude + Latitude)

res <- lm_wflow %>% fit_resamples(resamples = resamples)
```

The error returned after running that last line is:

```
Error in `check_rset()`:
! The `resamples` argument should be an 'rset' object, such as the type produced by `vfold_cv()` or other 'rsample' functions.
```

If I try to force the class to be "rset" with `class(resamples) <- "rset"`, the list no longer looks correct and I get the same error. What is the correct method of using a custom crossfold data set?

Note, an additional question: in the example code above, the test and training set sizes are consistent across folds. In my actual data, they will vary slightly. Does this matter at all?
## Solution based on answer below: ``` data(ames, package = "modeldata") split_data <- function(df, n) { set.seed(123) # for reproducibility df$id <- seq.int(nrow(df)) list_of_splits <- list() for(i in 1:n) { train_index <- sample(df$id, size=ceiling(nrow(df)*.8)) train_set <- df[train_index,] test_set <- df[-train_index,] list_of_splits[[i]] <- list(train_set = train_set, test_set = test_set) } return(list_of_splits) } splits <- split_data(ames, 5) resamples <- map(splits, ~list( analysis = .$train_set |> select(colnames(.$test_set)) |> pull(id), assessment = .$test_set$id )) splits <- lapply(resamples, make_splits, data = ames) final_split <- manual_rset(splits, paste("Split", seq(1:5))) lm_model <- linear_reg() %>% set_engine("lm") lm_wflow <- workflow() %>% add_model(lm_model) %>% add_formula(Sale_Price ~ Longitude + Latitude) res <- lm_wflow %>% fit_resamples(resamples = final_split) collect_metrics(res) ```
To add quotation marks to the start and end of every line with regular expressions in Xcode, replace `^.*$` (i.e., `^` is the start of the line, `.*` is zero or more of any character, and `$` is the end of the line) with `"$0"` (i.e., a quotation mark, followed by capture group zero, followed by a final quotation mark): [![enter image description here][1]][1] You can also use multi-cursors. You can, for example, hold down the <kbd>⌥</kbd> key and click-drag with your mouse, and then hit <kbd>⌘</kbd>-<kbd>◀︎</kbd> to go to the start of the line, <kbd>"</kbd> for the opening quotation mark, <kbd>⌘</kbd>-<kbd>▶︎</kbd> to go to the end of the line, and <kbd>"</kbd> for the closing quotation mark. [![enter image description here][2]][2] And while you can hold <kbd>⌥</kbd> and and click-drag to make a bunch of multi-cursors, you can toggle individual ones on and off with <kbd>⇧</kbd>-<kbd>⌃</kbd>-clicks. [1]: https://i.stack.imgur.com/avDga.gif [2]: https://i.stack.imgur.com/uhL0h.gif
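For comparison, the same whole-line substitution can be sketched outside Xcode with any regex engine; here is a Python illustration, where `\g<0>` plays the role of Xcode's capture group zero (`$0`):

```python
import re

text = "alpha\nbeta\ngamma"

# ^.*$ with MULTILINE matches each whole line; \g<0> is the entire match,
# the equivalent of Xcode's $0.
quoted = re.sub(r"^.*$", r'"\g<0>"', text, flags=re.MULTILINE)
print(quoted)
```

Each line comes back wrapped in quotation marks, exactly as in the Xcode find-and-replace above.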
Vue3: API-Calls to a Laravel API on Apache server with Active Directory authentication
|laravel|vuejs3|
You probably already found a solution to name the .ics file. I didn't find a specific answer for .ics files, so here is an answer for anyone still looking for a solution to this question (it at least works on Chrome and Firefox; I haven't tested it on other browsers).

I found the solution in this post: [here][1]

In the selected answer Jeremy explains that you should use an invisible link/anchor. That way you can name the .ics file. In my case (I used AJAX to create an .ics-formatted string) it meant this:

```
var uri = 'data:text/calendar;charset=utf-8,' + encodeURIComponent([.ics formatted string]);
var downloadLink = document.createElement('a');
downloadLink.href = uri;
downloadLink.download = 'yourfilename.ics'; // <=== Set your file name here
downloadLink.style = 'display: none;';
document.body.appendChild(downloadLink);
downloadLink.click();
document.body.removeChild(downloadLink);
```

Hopefully it helps!

[1]: https://stackoverflow.com/questions/7034754/how-to-set-a-file-name-using-window-open
I am using different numerical methods to understand the results yielded by different types of integrators at different time steps. I am comparing the performance of each integration method by calculating the mean absolute error (MAE) of the predicted energy against the analytical solution:

$$ MAE = \frac{1}{n} \sum_{i=1}^{n}\left | y_{analytical} - y_{numerical}\right| $$

Then for different time steps I calculate the resulting MAE and plot the results in a log-log plot as shown below.

[log(MAE) vs. log(time step)](https://i.stack.imgur.com/ozMkL.png)

The relation between MAE and time step matches my expectations (the Verlet method scales quadratically and the Euler-Cromer method scales linearly), but I notice that the Verlet method has a turning point at about 10^(-4) s. This seems slightly too large; I was expecting a turning point closer to 10^(-8) s, since I am using numpy's float64 and therefore have about 15 to 17 significant decimal digits of precision.

I went on to plot the maximum and minimum errors obtained for each time step (excluding iteration 0, as those are the initial conditions, which are the same for both the numerical and analytical methods), and these are the results:

[log(min err) vs. log(time step)](https://i.stack.imgur.com/Thyjd.png)

[log(max err) vs. log(time step)](https://i.stack.imgur.com/dXxpA.png)

Again, when plotting the maximum error I obtain a minimum of similar value compared to the previous plot, but when plotting the minimum obtained error (which always occurred in the first few iterations after the initial conditions) I find that the errors flatten out at 10^(-4) s and approach errors of about 10^(-15) J in the energy.

Because of this flattening of the minimum errors, it makes sense that going below 10^(-4) s does not increase the precision of the Verlet method, but I can't explain why the maximum errors grow after this point.

An explanation that comes to mind is the round-off error caused by float64, which should appear when values reach about 10^(-15) to 10^(-17). I have manually checked the position, velocity and acceleration values that result from running the Verlet method, but their lowest values are of order 10^(-9), very far from 10^(-15).

(1) Is it possible that I am introducing a round-off error when I calculate the residual error between the analytical results and the Verlet method's?

(2) Are there other, more appropriate ways of calculating the error? (I thought MAE was a good fit because the Verlet method oscillates about the true system values.)

(3) Are there tweaks that could expose possible flaws in my analysis? I have looked at my code extensively and am not able to find any bugs; furthermore, the Verlet method I coded does have an error that scales quadratically with the time step, which makes me think the code itself is fine. (Maybe a possible attempt would be to use float128 throughout all calculations and then see if the above plots differ?)

Thanks in advance for any help with the above questions.
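One way to sanity-check the float64 explanation without the full simulation is to look at the subtraction y_analytical - y_numerical directly: float64 resolves relative differences only down to machine epsilon (about 2.2e-16), so the smallest representable residual scales with the magnitude of the energies being compared, not with 10^(-16) in absolute terms. A minimal illustration of the cancellation effect (not your simulation, just the arithmetic):

```python
import sys

eps = sys.float_info.epsilon        # ~2.22e-16, the spacing of floats near 1.0
print(eps)

# Subtracting two nearly equal float64 energies loses the low-order digits:
E_analytical = 1.0
E_numerical = 1.0 + 1e-17           # a "true" residual below eps is invisible
residual = abs(E_analytical - E_numerical)
print(residual)                     # 0.0 -- the difference is not representable

# A residual just above eps survives, but with essentially no accurate digits:
E_numerical = 1.0 + 3e-16
print(abs(E_analytical - E_numerical))
```

The floor of the representable residual is therefore roughly `eps` times the magnitude of the energy, which is consistent with a minimum error near 10^(-15) J for energies of order one, and it is unrelated to the sizes of the positions or velocities themselves.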
When I publish my application, there is no problem on some phones, but compatibility problems occur on others. I have shared the manifest and Gradle parts below; I don't understand what I did wrong. There is no problem in the Play developer device catalogue.

![Error Image](https://i.stack.imgur.com/FRPK8.png)

Manifest.xml

```
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools" >

    <uses-permission android:name="android.permission.VIBRATE" />

    <application
        android:allowBackup="true"
        android:configChanges="locale|orientation"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme.NoActionBar"
        android:localeConfig="@xml/locales_config"
        tools:targetApi="tiramisu">
        <activity
            android:name=".MainActivity"
            android:configChanges="locale|orientation"
            android:exported="true"
            android:screenOrientation="fullSensor" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <service
            android:name="androidx.appcompat.app.AppLocalesMetadataHolderService"
            android:enabled="false"
            android:exported="false">
            <meta-data
                android:name="autoStoreLocales"
                android:value="true" />
        </service>

        <meta-data
            android:name="com.google.android.gms.ads.APPLICATION_ID"
            android:value="ca-app-pub-9937060478156830~6086699458" />
        <meta-data
            android:name="com.google.android.gms.games.APP_ID"
            android:value="@string/app_id"/>
    </application>

</manifest>
```

build.gradle

```
plugins {
    id 'com.android.application'
    id 'com.google.gms.google-services'
}

android {
    namespace 'com.word.lingo'
    compileSdk 34

    defaultConfig {
        applicationId "com.word.lingo"
        minSdk 21
        //noinspection EditedTargetSdkVersion
        targetSdk 34
        versionCode 21
        versionName "1.0.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        resourceConfigurations += ["en", "tr",
"fr", "de","es","it"] } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } compileOptions { sourceCompatibility JavaVersion.VERSION_1_8 targetCompatibility JavaVersion.VERSION_1_8 } buildFeatures{ viewBinding true } androidResources { // generateLocaleConfig true } bundle { language { enableSplit = false } } } dependencies { implementation 'androidx.appcompat:appcompat:1.6.1' implementation 'com.google.android.material:material:1.11.0' implementation 'androidx.constraintlayout:constraintlayout:2.1.4' testImplementation 'junit:junit:4.13.2' androidTestImplementation 'androidx.test.ext:junit:1.1.5' androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1' implementation 'nl.dionsegijn:konfetti-compose:2.0.4' implementation 'nl.dionsegijn:konfetti-xml:2.0.4' implementation 'com.github.mmoamenn:LuckyWheel_Android:0.3.0' implementation 'com.google.android.gms:play-services-ads:23.0.0' implementation "com.android.billingclient:billing:6.2.0" implementation 'com.google.firebase:firebase-messaging:23.4.1' implementation 'com.anjlab.android.iab.v3:library:2.0.3' implementation 'com.google.android.play:integrity:1.3.0' implementation "com.google.android.gms:play-services-games-v2:19.0.0" implementation platform('com.google.firebase:firebase-bom:32.7.4') implementation 'com.google.firebase:firebase-analytics' } ``` What did you try and what were you expecting?
The aggregate shouldn't care about the table structure. That (and the requirement that the aggregate is a unit of consistency) ultimately means that the third approach is the way. > The StudentRepository needs access to the internal list of CourseParticipation objects, which would otherwise not be exposed The list of course participations is part of the `Student`. Saving the student implies saving the participations. > The save method can become very complex if the aggregate maps to many database tables > Avoiding unnecessary database calls (i.e., saving unchanged entries) can be hard Them's the breaks, unfortunately. It is basically the case that DDD aggregates assume an object store (or an event store, if event sourcing) in the sense that that abstraction matches the minimum functionality required for saving/retrieving an aggregate. A relational schema, especially one which aspires to higher normal forms, is going to introduce overhead/complexity.
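For illustration, the third approach can be sketched as follows (Python for brevity; `Student`, `CourseParticipation`, and `StudentRepository` are the names from the question, everything else is hypothetical). Note that the repository reads the aggregate's internal participation list, which is exactly the trade-off being accepted:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CourseParticipation:
    course_id: str

@dataclass
class Student:
    student_id: str
    _participations: List[CourseParticipation] = field(default_factory=list)

    def enroll(self, course_id):
        # Mutations go through the aggregate root, never the raw list.
        self._participations.append(CourseParticipation(course_id))

class StudentRepository:
    def __init__(self):
        self._tables = {}  # stand-in for the student + participation tables

    def save(self, student):
        # Saving the root persists everything inside the aggregate;
        # callers never hand the repository the participation list directly.
        self._tables[student.student_id] = [
            (student.student_id, p.course_id) for p in student._participations
        ]

repo = StudentRepository()
s = Student("s1")
s.enroll("math101")
repo.save(s)
print(repo._tables)  # {'s1': [('s1', 'math101')]}
```

In a real relational mapping, `save` would write both the student row and its participation rows in one transaction, keeping the aggregate a unit of consistency.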
You can loop over each value of the enum and then join the elements using the desired separator character. Something like this:

```
var enumList = Enum.GetValues(typeof(MyEnum)).OfType<MyEnum>().Select(x => nameof(MyEnum) + "=" + x.ToString()).ToList();
string enumListStr = string.Join("&", enumList);
```

Hope it helps.
That's not how you iterate over a variable. You should do:

```
{% for wc in response %}
    {{ wc }}
{% endfor %}
```

With `{{ response.wc }}`, Django will try three things: a dictionary lookup `response["wc"]`, an attribute or method lookup `response.wc` (calling it if it is callable), and a list-index lookup `response[wc]` where `wc` is a numeric index. Depending on your backend code, that could change the `response` object.
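That lookup order can be illustrated with a rough sketch (heavily simplified; not the engine's actual code, which lives in `django.template.base.Variable`):

```python
# A rough sketch of how Django's template engine resolves `response.wc`.
def resolve(obj, bit):
    try:
        return obj[bit]                      # 1. dictionary lookup
    except (TypeError, KeyError, IndexError):
        pass
    try:
        attr = getattr(obj, bit)             # 2. attribute / method lookup
        return attr() if callable(attr) else attr
    except AttributeError:
        pass
    try:
        return obj[int(bit)]                 # 3. numeric list-index lookup
    except (ValueError, TypeError, IndexError, KeyError):
        raise KeyError("Failed lookup for key [%s]" % bit)

print(resolve({"wc": 3}, "wc"))   # dictionary lookup hits: 3
print(resolve(["a", "b"], "1"))   # numeric index lookup hits: b
```

So if `response` is a list of values, `{{ response.wc }}` finds nothing by any of the three routes, while the `{% for %}` loop iterates over it as intended.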
I am new to Avalonia and also have never used WPF before, so I also don't know how it works there. I would like to display and edit a DataGrid in Avalonia. Displaying items in the DataGrid works, but it is not editable, meaning that I do not even get the possibility of changing values in the GUI (I cannot, for example, change the state of a checkbox). If I change the DataGrid to an ItemsControl, it becomes editable. What do I need to change to make it editable? This is my code: View.xaml: <StackPanel Orientation="Horizontal"> <!-- not working, is not editable <DataGrid ItemsSource="{Binding SpectrometerList, Mode=TwoWay}" GridLinesVisibility="All" AutoGenerateColumns="True" BorderThickness="1" BorderBrush="Gray" IsReadOnly="False"> </DataGrid> --> <!-- Is editable --> <ItemsControl ItemsSource="{Binding SpectrometerList}"> <ItemsControl.ItemTemplate> <DataTemplate> <CheckBox Margin="4" IsChecked="{Binding X}" Content="{Binding SerialNumber}"/> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> --> <StackPanel Orientation="Vertical"> <Button>1</Button> <Button>2</Button> </StackPanel> </StackPanel> ViewModel.cs: public ObservableCollection<Spectrometer> SpectrometerList { get; set; } public SpectrometerViewModel() { SpectrometerList = new ObservableCollection<Spectrometer>(Spectrometer.GetSpectrometers()); } Model.cs: public class Spectrometer { public byte ID { get; set; } public string SerialNumber { get; set; } = string.Empty; public byte Reactor { get; set; } public bool X { get; set; } public static IEnumerable<Spectrometer> GetSpectrometers() { var spec1 = new Spectrometer { ID = 0, SerialNumber = "Test 1", Reactor = 1, X = true }; return new[] { spec1, new Spectrometer { ID = 1, SerialNumber = "Test 2", Reactor = 2, X = false}, new Spectrometer { ID = 2, SerialNumber = "Test 3", Reactor = 3, X = true} }; } }
I'm making a web-application where I need a owl-carousel to be implemented. My component "TopEventSection.jsx" file: ``` import React, { useEffect } from "react"; import img from "../images/date.svg"; import 'jquery'; import $ from 'jquery' import 'owl.carousel' export default function TopEventSection() { useEffect(() => { $('.owl-carousel').owlCarousel({ loop: true, margin: 10, nav: true, responsive: { 0: { items: 1 }, 600: { items: 3 }, 1000: { items: 5 } } }); }, []); return ( <div className="top-event-section"> <div className="container"> <h2><span className="header-blue">ТОП</span> події:</h2> <div className="owl-carousel"> <div className="item"><img src={img} alt="img1" /></div> <div className="item"><img src={img} alt="img2" /></div> <div className="item"><img src={img} alt="img3" /></div> </div> </div> </div> ) } ``` I've also checked the package.json for these dependecies. They are installed. I've also imported jquery and owl-carousel into index.html: ``` <script src="https://code.jquery.com/jquery-3.7.1.js" integrity="sha256-eKhayi8LEQwp4NKxN+CfCh+3qOVUtJn3QNZ0TciWLP4=" crossorigin="anonymous"></script> <script src="owlcarousel/owl.carousel.min.js"></script> ``` Nothing of these have worked...
jquery__WEBPACK_IMPORTED_MODULE_2___default(...)(...).owlCarousel is not a function (React)
|javascript|jquery|reactjs|web|owl-carousel|
If you are using NextJS, it is best to use the **cookies-next** library https://www.npmjs.com/package/cookies-next Try this: setCookie('cookie_name', cookie_value, { httpOnly: true }) For fetch request, useEffect(() => { const setCookie = async () => { try { const response = await fetch('/api/set-cookie', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ cookieName: 'myCookie', cookieValue: 'cookieValue' }) }); if (response.ok) { console.log('Cookie set successfully'); } else { console.error('Failed to set cookie'); } } catch (error) { console.error('Error setting cookie:', error); } }; setCookie(); }, []);
I'm trying to deploy a lambda function in account A with a zip file from S3 bucket in account B. The problem is that the S3 public access is disabled for account B. Is it possible to get this done without using an S3 bucket policy. I tried the below cross account IAM setup and it does not work. Account B IAM role: S3 access inline policy: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1608150488840", "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::the-code-bucket/*" } ] } ``` Trust policy: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "<Account-A-Id>" }, "Action": "sts:AssumeRole" } ] } ``` A pre-signed S3 URL of the source code zip file also gave access denied error in lambda.
In my project I am using tkextrafont:

```
Font(file="resources/fonts/cs_regular.ttf")
```

In the IDE my program works successfully, but when I start the compiled program I see this problem:

```
WinError[2] Cannot find the specified path 'C:\\Users\\User\\AppData\\Local\\Temp\\_MEI137482\\tkextrafont'
```

I tried using `os.getcwd()`, but this method doesn't work, as in: https://stackoverflow.com/questions/76620807/how-to-use-custom-font-with-auto-py-to-exe-and-tkextrafont

https://i.stack.imgur.com/jl58q.png

My code:

```
import customtkinter
from tkextrafont import Font

app = customtkinter.CTk()
app.title("my app")
app.geometry("400x150")

Font(file=fetch_resource('font.ttf'))

logo_label = customtkinter.CTkLabel(self.sidebar_frame, text="Text", font=('Font name', 30))
logo_label.grid(row=0, column=0, padx=20, pady=(20, 10))

app.mainloop()
```

FileNotFoundError: [WinError 2]: 'C:\\Users\\test\\AppData\\Local\\Temp\\_MEI21562\\tkextrafont'
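For reference, the usual pattern for locating bundled files in a PyInstaller one-file build is to check `sys._MEIPASS`, which exists only inside the frozen app. A sketch of what a `fetch_resource` helper like the one in the snippet above might look like (the helper name comes from the code; the implementation is an assumption):

```python
import os
import sys

def fetch_resource(relative_path):
    """Resolve a data file both in development and inside a PyInstaller
    one-file bundle, where data files are unpacked under sys._MEIPASS."""
    base_path = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base_path, relative_path)

print(fetch_resource(os.path.join("resources", "fonts", "cs_regular.ttf")))
```

This only helps if the font, and tkextrafont's own data files, are actually bundled (for example via `--add-data` or `--collect-data tkextrafont`); the missing `_MEIxxxxx\tkextrafont` folder in the traceback suggests the package's Tcl files were not collected into the build.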
This code analyzes textual ice cream reviews using OpenAI's GPT-3.5-turbo model and saves the analysis results in JSON format. Here are the steps it follows:

1. The `analyze_text` function accepts the text of the reviews and the starting identifier (`start_id`) for numbering them. It builds and sends a request to the GPT-3.5-turbo model containing instructions for analyzing the reviews and generating a JSON array of objects, where each object reflects a separate review, indicating the topics mentioned and their tone (positive, negative, neutral). It returns the model's result as a string, assumed to be JSON-formatted text.

2. The `is_valid_json` function checks whether the parsing result is a valid JSON array of objects, and makes sure that each object contains at least one mentioned topic with the key `"mentioned": true`.

3. The `process_reviews_and_save_to_json` function reads text reviews from a file at the specified path, processes them in batches (by default, 10 reviews at a time) using `analyze_text`, checks and validates the JSON received from the model against the criteria above, and saves all successfully parsed and validated results to the output JSON file specified in the function parameters.

The problem is that I send the same original prompt every time. Is it possible to send it once, have the chat remember it, and then just send batches of 10 lines for analysis without sending the instructions every time?

```
import openai
import json

openai.api_key = ""

def analyze_text(text, start_id):
    prompt = (f"""
Analyze the provided ice cream reviews and generate a JSON array of objects, where each object corresponds to a single review. Here is the rule for the JSON structure: If a topic within a review is mentioned, include it with 'mentioned': true and provide the sentiment. If a topic within a review is not mentioned, that topic should not be included in the JSON object at all.
Start with the following reviews and construct the JSON array accordingly. For each review, include the following topics only if they are mentioned: {{ "id": {start_id}, // Use start_id for numeration.\n // Include this field only if the taste or composition is mentioned in the review\n "taste_and_composition": {{"mentioned": true, "sentiment": "<positive/negative/neutral>"}}, // Set 'mentioned' to true if the review mentions the taste or composition of the ice cream, including sweetness, creaminess, presence of mix-ins, flavor of coatings, and waffle taste\n // Include this field only if the quality and freshness are mentioned in the review\n "quality_and_freshness": {{"mentioned": true, "sentiment": "<positive/negative/neutral>"}}, // Set 'mentioned' to true if the review mentions the quality and freshness of the ice cream, including its texture and consistency\n // Include this field only if the packaging is mentioned in the review\n "packaging": {{"mentioned": true, "sentiment": "<positive/negative/neutral>"}}, // Set 'mentioned' to true if the review mentions the packaging of the ice cream, including the size of the portion and the packaging itself\n // Include this field only if the price and value are mentioned in the review\n "price_and_value": {{"mentioned": true, "sentiment": "<positive/negative/neutral>"}}, // Set 'mentioned' to true if the review mentions the price of the ice cream and its value for the money\n // Include this field only if the delivery is mentioned in the review\n "delivery": {{"mentioned": true, "sentiment": "<positive/negative/neutral>"}} // Set 'mentioned' to true if the review comments on the delivery of the ice cream, including the condition of the ice cream upon delivery\n }} Remove any topic that is not actually mentioned from the final JSON object for that review. The final JSON should only contain topics with 'mentioned': true.\n And every review should have al least one topic 100% percent. 
Reviews: {text}
""")

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ],
        temperature=0.7,
        max_tokens=2300,
        top_p=1.0,
        frequency_penalty=0.0,
        presence_penalty=0.0
    )
    return completion.choices[0].message['content'].strip()

def is_valid_json(json_array):
    if not isinstance(json_array, list):
        print("Error: expected a list of JSON objects.")
        return False
    for obj in json_array:
        if not isinstance(obj, dict):
            print("Error: every element in the list must be a dictionary.")
            return False
        # Extract the topics, which are dictionaries
        topics = [value for value in obj.values() if isinstance(value, dict)]
        # Check that every topic has mentioned: true
        if not all(topic.get("mentioned") for topic in topics):
            return False
        # Check that at least one topic is present
        if not topics:
            return False
    return True

def process_reviews_and_save_to_json(file_path, output_json_path='analyzed_reviews.json', max_attempts=10):
    results = []
    current_id = 1
    total_lines_processed = 0

    with open(file_path, 'r', encoding='utf-8') as f:
        while True:
            lines = []
            try:
                for _ in range(10):  # Change this value to suit your needs
                    line = next(f).strip()
                    if line:
                        lines.append(line)
            except StopIteration:
                break  # End of file reached

            if not lines:
                break  # No lines left to process; exit the loop

            attempts = 0
            while attempts < max_attempts:
                text_part = " ".join(lines)
                analyzed_text = analyze_text(text_part, current_id)

                # Check for empty or malformed output before decoding
                if analyzed_text.strip():  # Make sure the string is not empty
                    try:
                        analyzed_json = json.loads(analyzed_text)
                        if is_valid_json(analyzed_json):
                            results.extend(analyzed_json)
                            total_lines_processed += len(lines)
                            current_id += len(lines)
                            print(f"Processed {total_lines_processed} lines.")
                            break
                        else:
                            print("Attempt failed validation, retrying...")
                            attempts += 1
                    except json.JSONDecodeError as e:
                        print(f"JSON decode error: {e}, retrying...")
                        attempts += 1
                else:
                    print("Received an empty response from the model, retrying...")
                    attempts += 1

    with open(output_json_path, 'w', encoding='utf-8') as f:
        json.dump(results, f, ensure_ascii=False, indent=4)
    print(f"Processing results saved to file: {output_json_path}")

text_file_path = ''
output_json_path = ''

# Call the function to process the reviews and save them to JSON
process_reviews_and_save_to_json(text_file_path, output_json_path=output_json_path)
```
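The Chat Completions endpoint is stateless, so the model cannot "remember" the instructions between requests; every call must carry them (and is billed for those tokens). What you can avoid is rebuilding the prompt: put the instructions in a single `system` message constructed once and reuse it across batches, so only the short per-batch `user` part varies. A minimal sketch (the prompt content is abbreviated, the helper name is hypothetical):

```python
# Built once at module level; the full instruction block goes here verbatim.
SYSTEM_PROMPT = "Analyze the provided ice cream reviews and generate a JSON array ..."

def build_messages(batch_text, start_id):
    # The system message is reused as-is for every batch of reviews;
    # only the user message changes between calls.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"start_id={start_id}\nReviews: {batch_text}"},
    ]

messages = build_messages("Great taste, but it melted on delivery.", start_id=11)
print([m["role"] for m in messages])  # ['system', 'user']
```

If prompt size becomes the real concern, shortening the instruction block or caching results is the lever; re-sending the system message itself cannot be skipped with this API.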
I am new to HTML and CSS. I wrote some code for a webpage and it looked good in desktop view, but I noticed that in mobile view the right half of the page is whitespace. To fix this, I tried setting the parent elements' width to `100%` and the child elements' width to `auto`, as you can see from the code below. I would really appreciate links to videos that will help me understand the concepts needed to fix this!!

HTML CODE:

```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link href="./resources/css/index.css" rel="stylesheet">
    <title>Welcome Page</title>
</head>
<body>
    <header>
        <img class="logo" src="logo.svg" alt="logo">
        <nav>
            <ul class="nav">
                <li><a href="#">Home</a></li>
                <li><a href="#">Services</a></li>
                <li><a href="#">Project</a></li>
            </ul>
        </nav>
        <input class="box" type="text" placeholder="search">
        <a href="#"><button>Search</button></a>
    </header>
    <main>
        <h1>Hello World</h1>
        <p>Our app will help road design engineers to manage traffic.
           This app will use AI technology to plan roads and traffic lights,
           so you won't have to wait in traffic.
        </p>
    </main>
</body>
</html>
```

CSS CODE:

```
* {
    box-sizing: border-box;
    margin: 0;
    padding: 0;
}

li, a, button, input {
    font-family: "Helvetica", sans-serif;
    text-decoration: none;
    font-size: 18px;
    color: azure;
    font-weight: 500;
}

header {
    display: flex;
    justify-content: space-between;
    align-items: center;
    padding: 20px 7%;
    background-color: cadetblue;
    width: 100%;
}

.logo {
    cursor: pointer;
    width: 70px;
    height: 70px;
    margin-right: 100px;
}

.nav {
    list-style: none;
}

.nav li {
    display: inline-block;
    padding: 0 20px;
}

.nav li a {
    transition: all 0.35s ease 0s;
}

.nav li a:hover {
    color: greenyellow;
}

button {
    background-color: greenyellow;
    color: azure;
    transition: all 0.3s ease 0s;
    padding: 10px 25px;
    border: none;
    border-radius: 50px;
    cursor: pointer;
}

button:hover {
    background-color: darkseagreen;
    opacity: 0.8;
}

.box {
    margin-left: 100px;
    margin-right: 20px;
    height: 40px;
    width: auto;
    padding: 10px 20px;
    background-color: azure;
    border-radius: 30px;
    box-shadow: 0 10px 25px rgba(112, 128, 144, 0.3);
    color: black;
}

.box input {
    width: 0;
    outline: none;
    border: none;
    font-weight: 500;
    transition: all 0.3s ease 0s;
    background: transparent;
}

h1 {
    color: greenyellow;
    font-family: Helvetica, Arial, sans-serif;
    font-size: 150px;
    text-align: center;
    margin: 150px auto;
    width: auto;
}

p {
    font-family: Helvetica, Arial, sans-serif;
    font-size: 40px;
    text-align: center;
    margin-bottom: 100px;
}

main {
    background-color: blanchedalmond;
    padding: 100px;
    width: 100%;
}
```
I figured out that the solution is to create a processing job — but that is a whole other thing by itself :(
The `torchmetrics.classification.BinaryAccuracy` [documentation](https://lightning.ai/docs/torchmetrics/stable/classification/accuracy.html#torchmetrics.classification.BinaryAccuracy) states that:

> If preds is a floating point tensor with values outside [0,1] range we consider the input to be logits and will auto apply sigmoid per element.

However, logit values range from -inf to +inf. There is thus a non-zero probability that a set of computed values all lie within the [0,1] range and are therefore interpreted as likelihoods. Am I correct? In that case, when working with logits, is it safer to manually call the sigmoid function before calling the BinaryAccuracy computation?

Here is a small Python example illustrating the issue.

```python
import torch.nn.functional
from torchmetrics.classification import BinaryAccuracy
from torch import Tensor

# BinaryAccuracy documentation:
# https://lightning.ai/docs/torchmetrics/stable/classification/accuracy.html#torchmetrics.classification.BinaryAccuracy


def main():
    accuracy = BinaryAccuracy()

    logit_example_01 = Tensor([0.01, 0.99])
    print(torch.nn.functional.sigmoid(logit_example_01))  # tensor([0.5025, 0.7291])

    logit_example_02 = Tensor([0.01, 1.01])
    print(torch.nn.functional.sigmoid(logit_example_02))  # tensor([0.5025, 0.7330])

    assert accuracy(logit_example_01, Tensor([1, 1])) == 0.5  # logits erroneously interpreted as likelihoods?
    assert accuracy(logit_example_02, Tensor([1, 1])) == 1.0
    assert accuracy(torch.nn.functional.sigmoid(logit_example_01), Tensor([1, 1])) == 1.  # expected value
```
Can torchmetrics BinaryAccuracy incorrectly interpret logits as likelihoods?
|python|pytorch-lightning|torchmetrics|
What is the best / correct way to create a `url` to pass to `sqlalchemy.create_engine`?

https://docs.sqlalchemy.org/en/20/core/engines.html#sqlalchemy.create_engine

My connection string looks similar to this:

`con_str = "Driver={ODBC Driver 17 for SQL Server};Server=tcp:somedb.database.windows.net,1433;Database=somedbname;Uid=someuser;Pwd=some++pass=;Encrypt=yes;TrustServerCertificate=no"`

If I do (https://stackoverflow.com/questions/15750711/connecting-to-sql-server-2012-using-sqlalchemy-and-pyodbc):

```
import urllib
import sqlalchemy as sa

connection_url = sa.engine.URL.create(
    "mssql+pyodbc",
    query={"odbc_connect": urllib.parse.quote_plus(con_str)},
)
print(connection_url.render_as_string(hide_password=False))
```

I get this output:

```
mssql+pyodbc://?odbc_connect=Driver%3D%7BODBC+Driver+17+for+SQL+Server%7D%3BServer%3Dtcp%3Asomedb.database.windows.net%2C1433%3BDatabase%3Dsomedbname%3BUid%3Dsomeuser%3BPwd%3Dsome%2B%2Bpass%3D%3BEncrypt%3Dyes%3BTrustServerCertificate%3Dno
```

But if I do (https://stackoverflow.com/questions/66371841/how-do-i-use-sqlalchemy-create-engine-with-password-that-includes-an):

```
connection_url = sa.engine.URL.create(
    drivername="mssql+pyodbc",
    username="someuser",
    password="some++pass=",
    host="tcp:somedb.database.windows.net",
    port=1433,
    database="somedbname",
    query={'driver': 'ODBC Driver 17 for SQL Server', 'encrypt': 'yes', 'trustservercertificate': 'no'},
)
print(connection_url.render_as_string(hide_password=False))
```

I get a different output:

```
mssql+pyodbc://someuser:some++pass%3D@[tcp:somedb.database.windows.net]:1433/somedbname?driver=ODBC+Driver+17+for+SQL+Server&encrypt=yes&trustservercertificate=no
```

Both of them work for general reads, but for more obscure uses **they produce different results**. For example, for a particular piece of code the former option works while the latter option throws `('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Implicit conversion from data type nvarchar(max) to binary is not allowed. Use the CONVERT function to run this query. (257) (SQLExecDirectW)')`.

I assume the former is correct since the majority of Stack Overflow answers provide it as an example. I am interested in why the different parameters produce such different results, and where I can read about it on https://docs.sqlalchemy.org/?
Deploy AWS Lambda with cross-account S3 code source without a bucket policy
|amazon-web-services|amazon-s3|aws-lambda|
I am using different numerical methods to understand the results yielded by different types of integrators at different time steps. I compare the performance of each integration method by calculating the mean absolute error (MAE) of the predicted energy against the analytical solution:

$$ MAE = \frac{1}{n} \sum_{i=1}^{n}\left |y_{analytical,i}-y_{numerical,i}\right| $$

Then for different time steps I calculate the resulting MAE and plot the results in a log-log plot as shown below.

[log (MAE) vs. log(Time_step)](https://i.stack.imgur.com/ozMkL.png)

The relation between MAE and time step matches my expectations (the Verlet method scales quadratically and the Euler-Cromer method scales linearly), but I notice that the Verlet method has a turning point at about 10^(-4) s. This seems slightly too large: I was expecting a turning point at time steps closer to 10^(-8) s, since I am using numpy's float64 and therefore have about 15 to 17 decimal digits of precision.

I went on to plot the maximum and minimum errors obtained for each time step (excluding iteration 0, as those are the initial conditions, which are the same for both numerical and analytical methods), and these are the results:

[log (Min Err) vs. log(Time_step)](https://i.stack.imgur.com/Thyjd.png)

[log (Max Err) vs. log(Time_step)](https://i.stack.imgur.com/dXxpA.png)

Again, when plotting the maximum error I obtain a minimum of similar value compared to the previous plot, but when plotting the minimum obtained error (which always occurred in the first few iterations after the initial conditions) I find that the errors flatten out at 10^(-4) s and approach errors of about 10^(-15) J in the energy. Because of this flattening of the minimum errors, it makes sense that going below 10^(-4) s does not increase the precision of the Verlet method, but I can't explain why the maximum errors grow after this point.

An explanation that comes to mind is the round-off error caused by float64, which should appear when values reach about 10^(-15) to 10^(-17). I have manually checked the position, velocity and acceleration that result from running the Verlet method, but their lowest values are of order 10^(-9), very far from 10^(-15).

(1) Is it possible that I am introducing a round-off error when I calculate the residual error between the analytical and Verlet results?

(2) Are there other, more appropriate ways of calculating the error? (I thought MAE was a good fit because the Verlet method oscillates about the true system values.)

(3) Are there tweaks that could expose possible flaws in my analysis? I have looked at my code extensively and I am not able to find any bugs; furthermore, the Verlet method I coded does have an error which scales quadratically with the time step, which makes me think the code itself is fine. (Maybe a possible attempt would be to use float128 throughout all calculations and then see if the above plots differ?)

Thanks in advance for any help with the above questions.
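Regarding question (1): if the analytical and numerical energies are of order 1 J, the residual is computed by subtracting two nearly equal float64 numbers, so any residual near 10^(-16) times the energy itself is pure round-off, regardless of how large the positions and velocities are. A minimal illustration (not taken from the question's code, just a sketch of the cancellation effect):

```python
import numpy as np

# Subtracting two nearly equal float64 energies loses the leading digits
# (catastrophic cancellation): a perturbation below the float64 resolution
# near 1.0 is simply unrepresentable in the difference.
E_analytical = np.float64(1.0)
E_numerical = np.float64(1.0) + np.float64(1e-16)  # below float64 resolution at 1.0

residual = abs(E_analytical - E_numerical)
print(residual)                   # 0.0 — the difference vanished entirely
print(np.finfo(np.float64).eps)   # ~2.22e-16, the relative resolution near 1.0
```

This is why residuals flatten out around 10^(-15) of the energy scale even when the trajectory variables themselves are of order 10^(-9): the floor is set by the *relative* precision of the quantities being subtracted, not by their absolute size.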
I updated the A-Frame version to 1.5.0, but I have an issue with env maps.

[https://glitch.com/~better-incongruous-trollius](https://glitch.com/~better-incongruous-trollius)

```
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene id="myscene" light="defaultLightsEnabled: false">
      <a-assets>
        <a-cubemap id="worldenvmap">
          <img crossorigin="anonymous" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/envYup00.jpg?v=1711444964960">
          <img crossorigin="anonymous" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/envYup01.jpg?v=1711444965270">
          <img crossorigin="anonymous" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/envYup02.jpg?v=1711444965584">
          <img crossorigin="anonymous" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/envYup03.jpg?v=1711444966011">
          <img crossorigin="anonymous" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/envYup04.jpg?v=1711444966332">
          <img crossorigin="anonymous" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/envYup05.jpg?v=1711444966628">
        </a-cubemap>
        <a-asset-item id="sphere" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/model.obj?v=1711447695860"></a-asset-item>
        <a-asset-item id="balls" src="https://cdn.glitch.global/93cb2d31-d256-4fcb-8b8e-db12192ccc88/MetalRoughSpheres-(1)%20(1).glb?v=1711444768035"></a-asset-item>
      </a-assets>
      <a-obj-model id="spherem" src="#sphere" position="16 -4 -30" scale=".1 .1 .1"></a-obj-model>
      <a-entity id="ballstest" gltf-model="#balls" position="0 1 -8" material="envMap:#worldenvmap"></a-entity>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
  <script>
    setenvmap('ballstest');
    setenvmap('spherem');

    function setenvmap(assetId) {
      console.log('### -> ', assetId);
      // setTimeout represents an app stage or user interaction
      setTimeout(() => {
        let envMapS = document.querySelector('#worldenvmap');
        if (envMapS) {
          const paths = [];
          envMapS.childNodes.forEach(c => {
            if (c.src) {
              paths.push(c.src);
            }
          });
          if (paths.length === 6) {
            const loader = new THREE.CubeTextureLoader();
            loader.setCrossOrigin('anonymous');
            // loader.setCrossOrigin('Access-Control-Allow-Origin');
            loader.setPath('');
            loader.format = THREE.RGBAFormat; // e.g. THREE.RGBFormat
            this.cubeTex = loader;
            this.texture = loader.load(paths);
          } else {
            console.error(`Invalid environment map on element '${el}'`);
          }
        }
        let thisel = document.getElementById(assetId);
        const mesh = thisel.getObject3D('mesh');
        const envMap = this.texture;
        envMap.encoding = THREE.sRGBEncoding;
        if (mesh) {
          mesh.traverse(function (node) {
            if (node.material && 'envMap' in node.material) {
              node.material.envMap = envMap;
              node.material.needsUpdate = true;
            }
          });
        }
      }, 4444);
    }
  </script>
</html>
```

In the code above, the scene has no lights and `defaultLightsEnabled: false` is set. All objects are black (unlit) before the envMap is applied. After applying the envMap, the objects act as if there were lights in the scene. This doesn't happen with A-Frame 1.3.0, 1.4.0 or 1.4.2. Am I missing something with A-Frame 1.5.0?
SQLAlchemy: correct way to create a URL for the engine
|python|sqlalchemy|
The response is not exactly in the format you want, but you can do it with only one `sort` and one `group` stage. Test it [here][1]:

```
db.collection.aggregate([
  { "$sort": { "InsertedAt": 1 } },
  {
    "$group": {
      "_id": null,
      "first": { "$first": "$$ROOT" },
      "last": { "$last": "$$ROOT" }
    }
  },
  { "$project": { _id: 0 } }
])
```

[1]: https://mongoplayground.net/p/MrHFclX3YiT
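To make explicit what the pipeline computes, here is the same sort-then-first/last semantics sketched in plain Python (the documents are hypothetical examples, not from the question):

```python
# Toy documents standing in for a MongoDB collection
docs = [
    {"_id": 1, "InsertedAt": "2024-03-02"},
    {"_id": 2, "InsertedAt": "2024-03-01"},
    {"_id": 3, "InsertedAt": "2024-03-05"},
]

# $sort by InsertedAt ascending, then $group with $first / $last:
# the group stage keeps the first and last document of the sorted stream.
ordered = sorted(docs, key=lambda d: d["InsertedAt"])
result = {"first": ordered[0], "last": ordered[-1]}

print(result["first"]["_id"], result["last"]["_id"])  # 2 3
```

Because `$group` with `_id: null` collapses everything into a single document, the oldest and newest records come back together in one result.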
Using the toy `DATA` below, I'm trying to have `my_function()` **white out (i.e., not show)** the bars in the `geom_bar` ONLY when the y-axis value for a bar is < .5 or > .95. Otherwise, simply fill the color based on the value of the `X` variable, that is, `fill = X`.

For my purposes, I'm using `after_stat(count) / sapply(PANEL, \(x) sum(count[PANEL == x]))` to compute the y-axis values internally.

Is there a way to achieve my goal (or somehow suppress those bars)?

*Here is what I tried without success:*

```
library(tidyverse)
library(rlang)

DATA <- structure(list(
  Year = c(2015, 2015, 2015, 2015, 2015, 2016, 2016, 2016, 2016, 2016),
  X = c("Dissatisfied", "Dissatisfied", "Dissatisfied", "Dissatisfied", "Dissatisfied",
        "Satisfied", "Dissatisfied", "Dissatisfied", "Satisfied", "Dissatisfied")),
  class = "data.frame", row.names = c(NA, -10L))

my_function <- function(data = DATA, x = X, cols = vars(Year), fill = X) {
  ggplot(data) +
    aes(x = !!ensym(x),
        y = after_stat(count) / sapply(PANEL, \(x) sum(count[PANEL == x])),
        fill = ifelse(
          round(after_stat(count) / sapply(PANEL, \(x) sum(count[PANEL == x])) * 100, 2) < 5 |
          round(after_stat(count) / sapply(PANEL, \(x) sum(count[PANEL == x])) * 100, 2) > 95,
          "white", "fill")) +  ## what to use for `fill=` here?
    geom_bar() +
    facet_grid(cols = cols) +
    theme(legend.title = element_blank())
}

###### EXAMPLE OF USE:
my_function()
```
CSRF token set to null after a PUT
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from smplx import SMPL
from einops import rearrange
from models.loss import Loss
from transformers import CLIPProcessor, CLIPModel
from utils.utils import get_keypoints
from models.module import MusicEncoderLayer, MotionDecoderLayer
import math


class GPT(nn.Module):
    def __init__(self, p=2,
                 input_size=438, embed_size=512, num_layers=6, heads=8,
                 forward_expansion=4, dropout=0.1, output_size=75):
        super(GPT, self).__init__()
        max_len, max_per = 450, 6
        self.motion_pos_emb_t = nn.Parameter(torch.zeros(max_len, embed_size))
        # self.motion_pos_emb_p = nn.Parameter(torch.zeros(max_per, embed_size))
        # self.music_pose_emb_t = nn.Parameter(torch.zeros(max_len, embed_size))
        self.music_emb = nn.Linear(input_size, embed_size)
        self.motion_emb = nn.Linear(output_size, embed_size)
        self.text_encoder = TextEncoder()
        self.music_encoder = MusicEncoder(embed_size, num_layers, heads, forward_expansion, dropout)
        self.motion_decoder = MotionDecoder(embed_size, num_layers, heads, forward_expansion, dropout, output_size)
        # self.mask = generate_square_subsequent_mask(max_len, 'cuda')
        # self.mask = self.mask.masked_fill(self.mask==0, float('-inf')).masked_fill(self.mask==1, float(0.0))
        self.loss = nn.MSELoss()
        # self.loss = Loss()

    def forward(self, text, music, motion):
        motion_src, motion_trg = motion[:, :, :-1, :], motion[:, :, 1:, :]
        b, p, t, _ = motion_src.shape
        text_encode = self.text_encoder(text)
        music_encode = self.music_encoder(self.music_emb(music[:, :-1, :]))\
            .reshape(b, 1, t, -1).repeat(1, p, 1, 1).reshape(b*p, t, -1)
        mask = torch.nn.Transformer().generate_square_subsequent_mask(t).transpose(0, 1).cuda()
        motion_emb = self.motion_emb(motion_src) + self.motion_pos_emb_t[:t, :].reshape(1, 1, t, -1).repeat(b, p, 1, 1)
        motion_pred = self.motion_decoder(motion_emb, music_encode, mask=mask).reshape(b, p, t, -1)
        loss = self.loss(motion_pred, motion_trg)
        return motion_pred, loss

    def inference(self, text, music, motion):
        self.eval()
        with torch.no_grad():
            music, motion = music[:, :-1, :], motion[:, :, :-1, :]
            b, p, t, c = motion.shape
            music_encode = self.music_encoder(self.music_emb(music))\
                .reshape(b, 1, t, -1).repeat(1, p, 1, 1).reshape(b*p, t, -1)
            preds = torch.zeros(b, p, t, c).cuda()
            preds[:, :, 0, :] = motion[:, :, 0, :]
            mask = torch.nn.Transformer().generate_square_subsequent_mask(t).transpose(0, 1).cuda()
            for i in range(1, t):
                motion_emb = self.motion_emb(preds) + self.motion_pos_emb_t[:t, :].reshape(1, 1, t, -1).repeat(b, p, 1, 1)
                current_pred = self.motion_decoder(motion_emb, music_encode, mask=mask).reshape(b, p, t, -1)
                preds[:, :, i, :] += current_pred[:, :, i-1, :]
            motion_pred = preds.reshape(b, p, t, -1)
            print(motion_pred[0, 0, :10, :6])
            import sys
            sys.exit()
            pred_keypoints = get_keypoints(motion_pred)
            return {'keypoints': pred_keypoints, 'smpl': motion_pred}


class MusicEncoder(nn.Module):
    def __init__(self, embed_size, num_layers, heads, forward_expansion, dropout):
        super(MusicEncoder, self).__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=embed_size, nhead=heads,
                                        dim_feedforward=embed_size*forward_expansion,
                                        dropout=dropout, batch_first=True) for _ in range(num_layers)]
        )

    def forward(self, x):
        b, t, _ = x.shape
        out = x
        for layer in self.layers:
            out = layer(out)
        return out


class MotionDecoder(nn.Module):
    def __init__(self, embed_size, num_layers, heads, forward_expansion, dropout, output_size):
        super(MotionDecoder, self).__init__()
        self.num_layers = num_layers
        self.fc_out = nn.Linear(embed_size, output_size)
        self.layers = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model=embed_size, nhead=heads,
                                        dim_feedforward=embed_size*forward_expansion,
                                        dropout=dropout, batch_first=True) for _ in range(num_layers)]
        )

    def forward(self, motion_src, music_text_encode, mask=None):
        b, p, t, _ = motion_src.shape
        out = motion_src.reshape(b*p, t, -1)
        for layer in self.layers:
            out = layer(out, music_text_encode, tgt_mask=mask)
        return self.fc_out(out)


class TextEncoder(nn.Module):
    def __init__(self):
        super(TextEncoder, self).__init__()
        self.text_clip = CLIPModel.from_pretrained("./Pretrained/CLIP/Model")
        self.text_processor = CLIPProcessor.from_pretrained("./Pretrained/CLIP/Processor")

    def forward(self, texts):
        texts_process = self.text_processor(text=texts, return_tensors="pt", padding=True, truncation=True)
        text_process = {name: tensor.to(self.text_clip.device) for name, tensor in texts_process.items()}
        text_output = self.text_clip.get_text_features(**text_process)
        return text_output
```

[![enter image description here](https://i.stack.imgur.com/wIXUO.png)](https://i.stack.imgur.com/wIXUO.png)

I am trying to build a Music2Dance model: the dance is SMPL data, the music is a 439-dimension feature, and I have aligned their FPS. The training loss decreases, but the inference result is completely wrong — every frame after the second one is identical. Above are the output log and my code; please help me find the mistakes. Thanks!
I am trying to make a WiFi broadcast application in Python. The idea is to place two network interface cards (NICs) in monitor mode and inject packets such that two devices can communicate. This has been done before, especially in the context of drone RC/telemetry/video links; some examples include [OpenHD][1], [EZ-WifiBroadcast][2] and [WFB-NG][3]. I specifically tested my hardware on [OpenHD][1] and was able to achieve a bitrate of ~4 MBit/s (2 x Raspberry Pi 3B+, 2 x TL-WN722N V2, 1 USB camera).

I put the two NICs into monitor mode, confirmed with `iwconfig`. They are also on the same frequency.

In my Python application, I noticed a very poor bitrate and heavy packet loss. I created a test script to demonstrate:

    import socket
    from time import perf_counter, sleep
    import binascii
    import sys
    import threading

    class PacketLoss:
        def __init__(self, transmitter_receiver, size, iface1, iface2):
            self.transmitter_receiver = transmitter_receiver
            self.iface1, self.iface2 = iface1, iface2
            self.send = True
            self.counting = False
            self.packet_recv_counter = 0
            self.packet_send_counter = 0
            self.t_target = 1
            self.t_true = None
            payload = (0).to_bytes(size, byteorder='little', signed=False)
            self.payload_length = len(payload)
            h = bytes((self.payload_length).to_bytes(2, byteorder='little', signed=False))
            radiotap_header = b'\x00\x00\x0c\x00\x04\x80\x00\x00' + bytes([6 * 2]) + b'\x00\x18\x00'
            frame_type = b'\xb4\x00\x00\x00'
            self.msg = radiotap_header + frame_type + transmitter_receiver + h + payload

        def inject_packets(self):
            rawSocket = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0004))
            rawSocket.bind((self.iface1, 0))
            t0 = perf_counter()
            while (self.send):
                rawSocket.send(self.msg)
                if self.counting:
                    self.packet_send_counter += 1
            self.t_send_true = perf_counter() - t0
            rawSocket.close()

        def sniff_packets(self):
            s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
            s.bind((self.iface2, 0))
            t0 = perf_counter()
            self.counting = True
            while perf_counter() - t0 < self.t_target:
                packet = s.recv(200)
                match = packet[22:34]
                if match == self.transmitter_receiver:
                    self.packet_recv_counter += 1
            self.t_true = perf_counter() - t0
            self.send = False
            self.counting = False
            s.close()

        def get_stats(self):
            dr_send = self.packet_send_counter * self.payload_length / self.t_send_true
            dr_recv = self.packet_recv_counter * self.payload_length / self.t_true
            packet_loss = 1 - self.packet_recv_counter / self.packet_send_counter
            return dr_send, dr_recv, packet_loss

        def print_statistics(self):
            print(f'In {self.t_true:.3f}s, sent {self.packet_send_counter} captured {self.packet_recv_counter} packets.')
            dr_send, dr_recv, packet_loss = self.get_stats()
            print(f'{dr_send=:.1f}B/s; {dr_recv=:.1f}B/s; Packet loss: {packet_loss*100:.1f}%')

I tested `PacketLoss` with the following:

    if __name__ == '__main__':
        # Get test parameters
        iface1, iface2 = sys.argv[1], sys.argv[2]  # eg 'wlan0', 'wlan1'
        size = int(sys.argv[3])  # eg 60

        # Recv/transmit mac addresses
        MAC_r = '35:eb:9e:3b:75:33'
        MAC_t = 'e5:26:be:89:65:27'
        receiver_transmitter = binascii.unhexlify(MAC_r.replace(':', '')) + binascii.unhexlify(MAC_t.replace(':', ''))

        # create testing object
        pl = PacketLoss(receiver_transmitter, size, iface1, iface2)

        # start injecting packets
        t = threading.Thread(target=pl.inject_packets)
        t.start()

        # wait a bit
        sleep(0.1)

        # start sniffing
        pl.sniff_packets()

        # print statistics
        pl.print_statistics()

I tested with packet sizes of 30, 60 and 120, getting the following results:

| Packet size (B) | Received data rate (kB/s) | Packet loss (%) |
| ----------- | -------------------- | --------------- |
| 30 | 32.3 | 47.3 |
| 60 | 51.3 | 57.2 |
| 120 | 63.9 | 73.0 |

Eventually I want to use my program to stream video (and RC/telemetry), and I would need at least a 125 kB/s received data rate and a much lower packet loss. Am I overlooking something that would increase the data rate and reduce the packet loss?
Thank you [1]: https://github.com/OpenHD/OpenHD [2]: https://github.com/rodizio1/EZ-WifiBroadcast [3]: https://github.com/svpcom/wfb-ng
Poor performance when sending/receiving in monitor mode
|python|wifi|raspberry-pi3|
Lock the row references using `$`: ``` Formula1:="=" & ws1.Name & "!" & "A$1:A$" & aantalrijen2 ``` Or just make the whole reference absolute: ``` Formula1:="=" & ws1.Name & "!" & "$A$1:$A$" & aantalrijen2 ```
```
returncode: 0
stdout:

> server@1.0.0 start
> node index.js

MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/
    at _handleConnectionErrors (/home/vaishna3/nodevenv/public_ftp/18/lib/node_modules/mongoose/lib/connection.js:875:11)
    at NativeConnection.openUri (/home/vaishna3/nodevenv/public_ftp/18/lib/node_modules/mongoose/lib/connection.js:826:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  reason: TopologyDescription {
    type: 'ReplicaSetNoPrimary',
    servers: Map(3) {
      'ac-ngyyajr-shard-00-00.d0i0ct2.mongodb.net:27017' => [ServerDescription],
      'ac-ngyyajr-shard-00-01.d0i0ct2.mongodb.net:27017' => [ServerDescription],
      'ac-ngyyajr-shard-00-02.d0i0ct2.mongodb.net:27017' => [ServerDescription]
    },
    stale: false,
    compatible: true,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    setName: 'atlas-t70b7g-shard-0',
    maxElectionId: null,
    maxSetVersion: null,
    commonWireVersion: 0,
    logicalSessionTimeoutMinutes: null
  },
  code: undefined
}
stderr:
```

Please help me resolve the 503 Service Unavailable error in my database connection.
I'm attempting to build Qt5 for use on a BeagleBone Black, in an Ubuntu 22.04 VirtualBox VM. I'm following this guide to simplify the process as much as possible: https://github.com/K3tan/BBB_QT5_guide?tab=readme-ov-file

I'm coming up against a brick wall, though, and don't know how to get around it. I used the configuration line

    ./configure -platform linux-g++ -release -device linux-beagleboard-g++ -sysroot /usr/local/linaro/sysroot -prefix ~/Qt5ForBBB -hostprefix ~/Qt5forBBB -device-option CROSS_COMPILE=/usr/local/linaro/linaro-gcc/bin/arm-linux-gnueabihf- -nomake tests -nomake examples -no-opengl -opensource -confirm-license -reduce-exports -make libs

which seemed to complete without any errors. When the configuration script finished, it said something along the lines of "run gmake to build". So I did. Unfortunately, I'm running into the following issue:

    In file included from /home/tim/qt-everywhere-src-5.15.2/qtlocation/src/location/declarativemaps/qdeclarativepolylinemapitem.cpp:38:0:
    /home/tim/qt-everywhere-src-5.15.2/qtlocation/src/location/declarativemaps/qdeclarativepolylinemapitem_p_p.h:381:17: error: ‘const char* MapPolylineShaderLineStrip::vertexShader() const’ marked ‘override’, but does not override
         const char *vertexShader() const override {

This is but one of many errors of the same type; all of them are "marked 'override', but does not override" errors.

I've also got this error:

    /home/tim/qt-everywhere-src-5.15.2/qtlocation/include/QtLocation/5.15.2/QtLocation/private/../../../../../src/location/declarativemaps/qdeclarativepolygonmapitem_p_p.h: In member function ‘virtual void MapPolygonShader::initialize()’:
    /home/tim/qt-everywhere-src-5.15.2/qtlocation/include/QtLocation/5.15.2/QtLocation/private/../../../../../src/location/declarativemaps/qdeclarativepolygonmapitem_p_p.h:186:23: error: ‘program’ was not declared in this scope
         m_matrix_id = program()->uniformLocation("qt_Matrix");

I would have thought this would be a straightforward boilerplate process, but that does not appear to be the case. Does anyone have any idea what I can do to resolve these errors?
Install the CUDA tools to resolve the missing library.
In RESTful design principles, the HTTP GET method is supposed to be safe and idempotent, meaning it should not modify the state of the server and should produce the same result regardless of how many times it's called. Performing resource deletion within an HTTP GET operation is a violation of these principles and can lead to several complications: **Violation of Idempotence:** By definition, GET requests should be idempotent, meaning multiple identical requests should have the same effect as a single request. Deleting a resource within a GET operation is not idempotent and goes against this principle. **Caching Issues:** GET requests are often cached by intermediaries (like proxies or CDNs) to improve performance. If a GET request results in resource deletion, caching mechanisms may become inconsistent, leading to potential issues with stale data. **Unintended Side Effects:** Users and developers expect that a GET request won't cause any changes on the server side. If a GET operation deletes a resource, it can lead to unexpected side effects, such as data loss or disruption of other functionalities that depend on the existence of the resource. **Security Concerns:** From a security perspective, allowing resource deletion via GET can expose your application to various vulnerabilities. For example, malicious actors might trick users into clicking on a link that performs a destructive action without their explicit consent. **Breaking Client Expectations:** Clients (applications or users) interacting with your API or website will likely assume that GET requests are read-only operations. If you break this expectation, it could lead to confusion and compatibility issues with existing clients. **Search Engine Crawlers:** Search engines and web crawlers often make GET requests to index content. If resource deletion occurs in response to these requests, it could lead to unpredictable behavior in terms of search engine indexing and ranking. 
**Non-Idempotent Operations:** Deleting a resource is inherently a non-idempotent operation. Introducing non-idempotent operations within the GET method goes against REST principles and can complicate the predictability and reliability of your API. **HTTP Method Semantics:** Using the DELETE method explicitly conveys the intention to delete a resource. Mixing deletion semantics with the GET method can lead to confusion and a lack of clarity in the API design.
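To make the contrast concrete, here is a minimal framework-free sketch (the handler and routes are hypothetical, purely for illustration) of routing deletion through an explicit DELETE method while keeping GET read-only:

```python
# Toy in-memory "database"
items = {1: "first", 2: "second"}

def handle(method: str, path: str):
    """Dispatch a request; GET is read-only, DELETE removes the resource."""
    item_id = int(path.rsplit("/", 1)[-1])
    if method == "GET":
        # Safe AND idempotent: no server state changes here, ever.
        return (200, items[item_id]) if item_id in items else (404, None)
    if method == "DELETE":
        # Idempotent but not "safe": repeating it yields the same end state.
        items.pop(item_id, None)
        return (204, None)
    return (405, None)  # method not allowed

print(handle("GET", "/items/1"))     # (200, 'first')
print(handle("DELETE", "/items/1"))  # (204, None) — resource removed
print(handle("DELETE", "/items/1"))  # (204, None) — same result again (idempotent)
print(handle("GET", "/items/1"))     # (404, None) — GET reflects, never causes, the change
```

The anti-pattern would be putting the `items.pop(...)` inside the GET branch: a crawler, link prefetcher, or cache revalidation could then silently destroy data, which is exactly the failure mode described above.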
A-Frame 1.5.0 envMap with no lights
|aframe|