The mistake is here:

```tsql
IF (UPDATE(ParentPermissionId))
```

The documentation for `UPDATE()` clearly says

> `UPDATE()` returns `TRUE` regardless of whether an `INSERT` or `UPDATE` attempt is successful.

In other words: there don't need to be any rows affected, and a trigger is always fired regardless of whether any rows are affected. You need to add this condition (and you can reverse it and immediately `RETURN` to avoid nesting the whole trigger body).

```tsql
IF NOT (UPDATE(ParentPermissionId) AND EXISTS (SELECT 1 FROM inserted))
    RETURN;
```

You should have this at the top of every trigger anyway, even non-recursive ones, to avoid running the code if there are no rows.

---

Having said that, there is a way of doing all this without recursive triggers: a recursive CTE. For example, you could use the following trigger instead, which recurses using a recursive CTE in a single call rather than recursive trigger calls. It will therefore allow recursion of more than 32 levels (but you should set `MAXRECURSION` in that case). I can't say whether or not this is more efficient than what you have, but it may be.
```tsql
CREATE OR ALTER TRIGGER [Permissions_OnParentPermissionId_Change]
ON [Permissions]
AFTER INSERT, UPDATE
AS

SET NOCOUNT ON;

IF TRIGGER_NESTLEVEL(@@PROCID) > 1          -- prevent any recursion
   OR NOT UPDATE(ParentPermissionId)        -- not updated
   OR NOT EXISTS (SELECT 1 FROM inserted)   -- no rows
    RETURN;                                 -- early bail-out

WITH cte AS (
    SELECT
      i.PermissionId,
      i.ParentPermissionId,
      Ancestry = CONCAT(ISNULL(parent.Ancestry, '~'), i.PermissionId, '~'),
      1 AS level
    FROM inserted i
    LEFT JOIN dbo.Permissions parent ON parent.PermissionId = i.ParentPermissionId

    UNION ALL

    SELECT
      child.PermissionId,
      child.ParentPermissionId,
      CONCAT(cte.Ancestry, child.PermissionId, '~'),
      cte.level + 1
    FROM cte
    JOIN dbo.Permissions child ON child.ParentPermissionId = cte.PermissionId
),
MaxLevel AS (
    SELECT *,
      rn = ROW_NUMBER() OVER (PARTITION BY cte.PermissionId ORDER BY cte.level DESC)
    FROM cte
)
UPDATE p
SET Ancestry = c.Ancestry
FROM dbo.Permissions p
JOIN MaxLevel c ON c.PermissionId = p.PermissionId
WHERE c.rn = 1;  -- take only the most recursed value for each ID
```

[**db<>fiddle**][1]

[1]: https://dbfiddle.uk/o81T98Lt
You could just check with the standard library's `os` module whether the file exists:

```python
import os

database_uri = "C:\\"
name_of_db = "Random.db"

db_exists = os.path.isfile(database_uri + name_of_db)
```
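A small, self-contained variant of the same check (the drive path and file name are just the placeholders from above): building the path with `os.path.join` instead of string concatenation avoids missing or doubled separators.

```python
import os.path

# Placeholder locations; adjust to your own database directory and file name.
database_dir = "C:\\"
name_of_db = "Random.db"

# os.path.join inserts the platform separator only when needed, so you
# don't have to remember whether database_dir already ends with one.
db_path = os.path.join(database_dir, name_of_db)
db_exists = os.path.isfile(db_path)
print(db_exists)
```

Note that `os.path.isfile` returns `False` both when the path does not exist and when it exists but is a directory, which is usually what you want for a database file check.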
If you use `read_csv` from the `readr` package, you can use the `col_select` argument to choose which columns to read.
I'm new to Azure and am trying to upload files (tens of thousands of them) to Azure Blob Storage using their Python SDK. All the examples that I came across on the web open a file before uploading it. Why is this necessary? I am concerned that this will slow down the upload. Boto3 for AWS S3 doesn't do this. Can you please explain the reason behind this?
Why do we need to open a file to upload it to Azure Blob storage?
|azure|azure-blob-storage|azure-python-sdk|
We have a somewhat large VectorLayer comprising mostly LineStrings and Points. After refreshing the browser and opening all our features, everything seems to move fine and dandy at first. However, after modifying a few features, a lot of the things we do are suddenly very slow, most noticeably setting styles and dragging a feature.

I've looked into the Chrome Performance DevTools and found something interesting. When a modification is done, an event called `handleFeatureChange` is fired. At the start of the "session", that is, after refreshing the browser, this event calls the functions `removeFeature` and `addFeature` perhaps one or two times. However, after modifying a few features here and there, those two functions are called *loads* of times, even if the LineString only contains 3-4 points. This seems to be consistent behaviour across every slowdown we have, but I have no clue as to why the functions are called more often. Notably, panning and zooming are completely unaffected, so I do not think it is a result of using VectorLayer over VectorImageLayer or something similar.

I've been looking into issues and the source code for the modification event, but I haven't found anything that explains why a single `handleFeatureChange` should suddenly call `removeFeature` and `addFeature` more times on the same feature when other features are modified. I'll be happy to provide more information, but I'm not entirely sure which parts of our code are relevant to this problem. Thanks!
Modifying features in an OpenLayers VectorLayer very slow after use
|openlayers|
Is there an option like Databricks cluster pools in Synapse Spark pools as well, wherein I can have idle nodes ready to execute code as soon as a notebook is invoked?

[![enter image description here][1]][1]

Thanks
Ravi

[1]: https://i.stack.imgur.com/zMhbR.png
Synapse spark pool : have a pool of idle nodes to execute the code after invocation
|pyspark|azure-synapse|azure-synapse-analytics|
Given the prerequisites of your original problem description, including the following status quo that we take as given (thanks for being honest about sharing it):

> Since setting up a CI/CD environment is (of course depending on the whole tech stack) quite a lot of work, someone like a DevOps is needed to be the "Godfather" of the system, but this doesn't mean that he is the one using this system; he is the one who takes care of it.

and encountering the following problem:

> * The Developer quite often has the attitude "I'm done as soon as I have pushed the code".
> * The DevOps takes care of the CI/CD pipeline, extends it, fixes it, ... you name it. But doesn't care about what the developed application does.

Then this could be a sign that there are potential benefits in gaining a better understanding of the CI/CD systems. Luckily, you already have a person who is responsible for keeping the concrete CI/CD systems operating, and you have developers pushing code. Good preconditions, but it looks like no one is responsible for the deployments. Name IT and remove the impediment. Happy deployments.

---

Ah, and your concrete questions:

> Who triggers (clicks on the button) the release to the Quality Environment and who triggers the release to the Productive Environment?

If you don't know, then who should be able to say? As I wrote above, it looks like no one is responsible for that and you let it happen by fortune/accident/sheer luck or a throw of the dice. We don't know.

> and why?

Well, this depends on the project; some projects are under the requirement to actually deploy software. This is then the reason, and it is normally known upfront. The short answer is: to install it. If you can't imagine who, task IT with software installation; that's what they should be comfortable with.
css a element selector overriding bottom element selectors
|html|css|css-specificity|
I/A (inactive/active) is not a good description of the "next-of-kin" data; you should use something more accurate like "IsBloodRelated" (true/false).

Database record: `Id, Desc, IsBloodRelated`

Database table entries:

    1,  father, true
    2,  mother, true
    3,  brother, true
    4,  sister, true
    5,  grandfather, true
    6,  grandmother, true
    7,  uncle, true
    8,  aunt, true
    9,  nephew, true
    10, niece, true
    11, co-worker, false
    12, friend, false
    13, legacy-next-of-kin-invalid, false

When the form is displayed and the person's next of kin is NOT blood related (co-worker or friend), the dropdown element could display "please update next of kin" or "legacy next of kin invalid", or the current next-of-kin value with "(invalid)" appended, like "co-worker (invalid)".

When the form is submitted, you compare the selected option to the database list:

* If IsBloodRelated equals true, the next-of-kin entry is valid.
* If IsBloodRelated equals false, the next-of-kin field is invalid and a new selection should be made from the dropdown before allowing the record to be saved/updated.

To disable the dropdown while keeping the entries, set its `disabled="disabled"` attribute.

UPDATE

I've pasted the sample page below. All the .NET code is in the page, not code-behind (a personal preference of mine), but it shouldn't be difficult to figure out which bits you need. Also, I've hard-coded the patients list and the next-of-kin list rather than making a table, but you should use your existing table and populate your next-of-kin list as before.
```
<%@ Page Language="VB" %>

<!DOCTYPE html>

<script runat="server">
    '
    '-> NextOfKin structure
    Private Structure NextOfKin
        Public Index As Byte
        Public Description As String
        Public IsBloodRelated As Boolean
    End Structure
    '-> NextOfKin List
    Dim pgKinDefns As System.Collections.Generic.List(Of NextOfKin)
    '
    '-> Patients structure
    Private Structure Patient
        Public Index As Byte
        Public Name As String
        Public NxtOfKin As Byte
    End Structure
    '-> Patients List
    Dim pgPatients As System.Collections.Generic.List(Of Patient)
    '
    '-> Page vars
    Dim pgTmpId As Byte

    '-> Page load event
    Protected Sub Page_Load(sender As Object, e As EventArgs)
        '
        '-> Initialise
        pgKinDefns = New System.Collections.Generic.List(Of NextOfKin)
        pgPatients = New System.Collections.Generic.List(Of Patient)
        '
        '-> Build list of kin definitions
        AddKinDefintion(0, "father", True)
        AddKinDefintion(1, "mother", True)
        AddKinDefintion(2, "brother", True)
        AddKinDefintion(3, "sister", True)
        AddKinDefintion(4, "grandfather", True)
        AddKinDefintion(5, "grandmother", True)
        AddKinDefintion(6, "uncle", True)
        AddKinDefintion(7, "aunt", True)
        AddKinDefintion(8, "nephew", True)
        AddKinDefintion(9, "niece", True)
        AddKinDefintion(10, "cousin", True)
        AddKinDefintion(11, "legacy-NoK co-worker", False)
        AddKinDefintion(12, "legacy-NoK friend", False)
        AddKinDefintion(13, "legacy-NoK neighbour", False)
        '
        '-> Build list of patients
        AddPatientDefintion(0, "alpha", 0)
        AddPatientDefintion(1, "bravo", 1)
        AddPatientDefintion(2, "charlie", 2)
        AddPatientDefintion(3, "delta", 3)
        AddPatientDefintion(4, "echo", 4)
        AddPatientDefintion(5, "foxtrot", 5)
        AddPatientDefintion(6, "golf", 6)
        AddPatientDefintion(7, "hotel", 7)
        AddPatientDefintion(8, "india", 8)
        AddPatientDefintion(9, "juliet", 9)
        AddPatientDefintion(10, "kilo", 10)
        AddPatientDefintion(11, "lima", 11)
        AddPatientDefintion(12, "mike", 12)
        AddPatientDefintion(13, "november", 13)
        '
        '-> Get current patient id
        If IsNothing(Session("CurPatient")) Then Session("CurPatient") = 0
        If Not IsNothing(Request("hdnPatientId")) Then Session("CurPatient") = CByte(Request("hdnPatientId"))
        pgTmpId = Session("CurPatient")
        If Request("cmdPrv") = "Previous" Then
            If pgTmpId > 0 Then pgTmpId = pgTmpId - 1
        End If
        If Request("cmdNxt") = "Next" Then
            If pgTmpId < 13 Then pgTmpId = pgTmpId + 1
        End If
        '
    End Sub
    '
    Private Sub AddKinDefintion(ByVal pId As Byte, ByVal pDescription As String, ByVal pIsBloodRelated As Boolean)
        '
        Dim Item As NextOfKin
        With Item
            .Index = pId
            .Description = Trim(pDescription)
            .IsBloodRelated = pIsBloodRelated
        End With
        pgKinDefns.Add(Item)
        '
    End Sub
    '
    Private Sub AddPatientDefintion(ByVal pId As Byte, ByVal pName As String, ByVal pKinId As Byte)
        '
        Dim Item As Patient
        With Item
            .Index = pId
            .Name = Trim(pName)
            .NxtOfKin = pKinId
        End With
        pgPatients.Add(Item)
        '
    End Sub
    '
    Private Function ScriptDropdown(ByVal pSelId As Byte) As String
        '
        Dim myThisId As Byte = 0
        Dim myBuf As String = ""
        '
        For myThisId = 0 To pgKinDefns.Count - 1
            myBuf = myBuf & "<option value=" & Chr(34) & pgKinDefns(myThisId).Index & Chr(34)
            If pgKinDefns(myThisId).Index = pgPatients(pgTmpId).NxtOfKin Then
                myBuf = myBuf & " selected=" & Chr(34) & "selected" & Chr(34)
            End If
            myBuf = myBuf & ">" & pgKinDefns(myThisId).Description & "</option>" & vbCrLf
        Next
        ScriptDropdown = myBuf
        '
    End Function
    '
    Private Function GetComboStatus() As String
        '
        GetComboStatus = ""
        If pgKinDefns(pgPatients(pgTmpId).NxtOfKin).IsBloodRelated Then
            GetComboStatus = "disabled=" & Chr(34) & "disabled" & Chr(34)
        End If
    End Function
    '
</script>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div style="margin-left: auto; margin-right: auto; width: 60%;">
            <div id="FormFields">
                Id:<br />
                <input type="text" id="txtId" name="txtId" size="10" value="<%=pgPatients(pgTmpId).Index%>" readonly="readonly" /><br /><br />
                Name:<br />
                <input type="text" id="txtName" name="txtName" size="30" value="<%=Trim(pgPatients(pgTmpId).Name)%>" readonly="readonly" /><br /><br />
                Next of Kin:<br />
                <select id="Combo1" name="Combo1" size="1" <%=GetComboStatus()%>>
                    <%=ScriptDropdown(pgTmpId)%>
                </select>
                <br /><br />
                <input type="hidden" id="hdnPatientId" name="hdnPatientId" value="<%=pgTmpId%>" />
            </div>
            <div id="PatientRecord">
                Id:<br />
                <span><%=pgPatients(pgTmpId).Index%></span><br /><br />
                Name:<br />
                <span><%=Trim(pgPatients(pgTmpId).Name)%></span><br /><br />
                Next of Kin:<br />
                <span><%=UCase(pgKinDefns(pgPatients(pgTmpId).NxtOfKin).Description)%></span><br />
                <%If pgKinDefns(pgPatients(pgTmpId).NxtOfKin).IsBloodRelated = False Then%>
                    <span style="color: red;">This record contains a legacy next of kin, please amend next of kin</span><br />
                <%End If%>
                <br />
            </div>
            <div id="Navigation" style="float: right;">
                <input type="submit" id="cmdPrv" name="cmdPrv" value="Previous" />
                <input type="submit" id="cmdNxt" name="cmdNxt" value="Next" />
            </div>
        </div>
    </form>
</body>
</html>
```
This might work:

```js
function orderDialogNodes(nodes) {
    return [...nodes].sort((a, b) => {
        if (a.previous_sibling === b.dialog_node) {
            return 1;
        }
        if (b.previous_sibling === a.dialog_node) {
            return -1;
        }
        return 0;
    });
}

const inputArray = [
    {
        type: "folder",
        dialog_node: "node_3_1702794877277",
        previous_sibling: "node_2_1702794723026",
    },
    {
        type: "folder",
        dialog_node: "node_2_1702794723026",
        previous_sibling: "node_9_1702956631016",
    },
    {
        type: "folder",
        dialog_node: "node_9_1702956631016",
        previous_sibling: "node_7_1702794902054",
    },
];

const orderedArray = orderDialogNodes(inputArray);
console.log(orderedArray);
```
I want to create a multidimensional array based on a string. The string has the value `$string = "1/2/3"` and I want to assign `$array[1][2][3] = something`; the indexes of the array are described inside `$string`.

The string does not always have the same depth. For example, it may be `$string = "1/2"`, or `$string = "1/2/3/4/5"`, or `$string = "1/2/3/5/7/8/9/9/6"`, so the number of keys in the multidimensional array is not fixed.
Convert a slash-delimited string into an associative multidimensional array
I'm trying to write tests for endpoints that rely on information being present in the DB at the time the test is run. So I try to get one of my "Apps":

```python
def get_apps_list():
    apps = App.query.limit(1)
    return apps

APPS = get_apps_list()
```

I then use `@pytest.mark.parametrize` to pass that "app" to the test and use its id to test the endpoint, like so:

```python
@pytest.mark.parametrize('app', APPS)
@patch('flask_jwt_extended.view_decorators.verify_jwt_in_request')
def test_get_app_by_id(session, mock_verify, client, app):
    response = client.get('/application/get_by_id/' + app.id)
    assert response.status_code == 200
    assert response.content_type == "application/json"
    assert b'Success: App Fetched successfully' in response.data
```

But I then get the error:

```
_____________________ ERROR collecting tests/test_apps.py _____________________
tests\test_apps.py:13: in <module>
    APPS = get_apps_list()
tests\test_apps.py:9: in get_apps_list
    apps = App.query.limit(1)
venv\Lib\site-packages\flask_sqlalchemy\model.py:23: in __get__
    cls, session=cls.__fsa__.session()  # type: ignore[arg-type]
venv\Lib\site-packages\sqlalchemy\orm\scoping.py:221: in __call__
    sess = self.registry()
venv\Lib\site-packages\sqlalchemy\util\_collections.py:638: in __call__
    key = self.scopefunc()
venv\Lib\site-packages\flask_sqlalchemy\session.py:111: in _app_ctx_id
    return id(app_ctx._get_current_object())  # type: ignore[attr-defined]
venv\Lib\site-packages\werkzeug\local.py:508: in _get_current_object
    raise RuntimeError(unbound_message) from None
E   RuntimeError: Working outside of application context.
```

EDIT: I have added a pytest fixture to bring the Flask app into context:

```python
@pytest.fixture
def app_context():
    # create an application context
    with app.app_context():
        # yield the app object
        yield app
```

but I still face the same error.
For a project I am trying to retrieve every commit, and for each updated file I want to store the entire file (without the commit syntax, just the vanilla file) and which lines were updated. I am using the GitLab API in Python. While I can get the updated lines, I struggle to retrieve the file's complete contents at the time of the commit.

Here is a snippet of how I try to retrieve the files; the issue really only lies in `__get_file_content`, all else works as I intend it to.

```python
def __get_file_content(self, project, commit_id, file_path):
    try:
        # Get the file content from a specific commit
        file_content = project.files.get(file_path=file_path, ref=commit_id)
        return file_content.decode()
    except Exception as e:
        print(f"Error fetching file content: {e}, {file_path}, {commit_id}, {project}")
        return None

def generate_commit_dict(self, commits, project):
    # Prepare commit diffs into dict to save time when iterating
    commit_diffs = {}
    for commit in commits:
        diffs = commit.diff(get_all=True)
        commit_dict = []
        commit_time = commit.created_at
        commit_id = commit.short_id
        # print(diffs)
        # logging.info(f'Transforming commit {commit_id}...')
        for diff in diffs:
            diff_file = diff['diff']
            diff_code = self.__get_file_content(project, commit_id, diff['new_path'])
            diff_updated_code = self.__get_commit_diff_contents(diff_file)
            commit_dict.append({
                'file': diff['new_path'],
                'type': self.get_file_extension(diff['new_path']),
                'change': diff_code,
                'updated_lines': diff_updated_code
            })
        commit_diffs[commit_id] = {
            "commit_time": commit_time,
            "commits": commit_dict
        }
    return commit_diffs

project = gl.projects.get(int(project_id))
commits = get_commits(gitlab_user, project)
user_commit_contents = generate_commit_dict(commits, project)
```
I'd use `request` rather than `page.goto`:

```js
import {test} from "@playwright/test"; // ^1.41.2

const html = `<!DOCTYPE html><html><body>
<a href="https://news.ycombinator.com">yc</a>
<a href="https://www.example.com">example</a>
<a href="https://www.stackoverflow.com">so</a>
<a href="https://www.badurlthatdoesntexist.com">bad url</a>
</body></html>`;

test("all links are valid", async ({page, request}) => {
  await page.setContent(html);
  const links = await page.locator("a")
    .evaluateAll(els => els.map(el => el.href));

  for (const link of links) {
    await request.get(link);
  }
});
```

(remove `www.badurlthatdoesntexist.com` to see the test pass)

To speed this up, you can use a task queue (either hand-rolled or a library). Or, less optimally but with simple code and fewer dependencies, you can iterate in chunks of size N and use `Promise.all` to parallelize each chunk:

```js
test("all links are valid", async ({page, request}) => {
  await page.setContent(html);
  const links = await page.locator("a")
    .evaluateAll(els => els.map(el => el.href));

  const chunk = 3;
  for (let i = 0; i < links.length; i += chunk) {
    await Promise.all(links.slice(i, i + chunk).map(e => request.get(e)));
  }
});
```
I have an error with the PHP function `chmod` (https://www.php.net/manual/en/function.chmod.php). Any solutions? I'm running Linux Debian 13.05 with root access.

```
PHP Warning: chmod(): No such file or directory in file: index.php on line: 8
```

After I added an empty file, I get:

```
PHP Warning: chmod(): Operation not permitted in file: index.php on line: 8
PHP Warning: fopen(./access.log): Failed to open stream: Permission denied in file: index.php on line: 9
```

```php
$root_path = './';
define('LOG_FILE', $root_path . 'access.log');

function add_log_entry($access = '') {
    if (LOG_FILE) {
        chmod(LOG_FILE, 0755);
        $fopen = fopen(LOG_FILE, 'ab');
    }
}
```
I cannot find a way to add a delay in this topic. The documentation is not helpful at all. I tried using [this doc](https://learn.microsoft.com/en-us/microsoft-copilot-studio/authoring-send-event-activities#sending-other-activity-types) and searched many other options, but I'm missing something. It's literally the same as their docs.

[enter image description here](https://i.stack.imgur.com/wO34M.png)
How to add a delay activity in Microsoft Copilot Studio
|microsoft-copilot|
I'll preface this by saying: **this is absolutely the wrong solution for what you are trying to do.** You should use proper source control and a backup system (backups include all object definitions).

The other problem with your use of a trigger and a rollback is that anyone doing DDL inside their own outer transaction is going to be in for a **nasty surprise** when you roll it back.

---

Your primary issue is that you are ending the transaction but not starting a new one. The system expects that there is still an active transaction at the end of the trigger, but you need to roll it back in order to get the definition. So instead, start a new transaction. Then you don't need to mess around with catching any exceptions.

Also:

* You can get the XML data in a single query. You should use `/text()` for better performance.
* Use the newer `sys.dm_os_file_exists` function if you need it.
* Ideally, don't write to the file system at all from T-SQL; instead just write to a table.
* You don't need `USE` in the dynamic SQL. You can instead put `CONCAT(@DatabaseName, '.sys.sp_executesql')` into a variable and do `EXEC @proc`.
* To redo the `DROP`, just pull out the original command.
* Use `sys.sql_modules` instead of `OBJECT_DEFINITION` for better reliability and locking.
* Object names should be of the `sysname` type (a synonym for `nvarchar(128)`) and file names should be `nvarchar(260)`.
```tsql
CREATE OR ALTER TRIGGER [trg_DDL_BackupOfRoutines]
ON ALL SERVER
FOR DROP_TRIGGER, DROP_VIEW, DROP_FUNCTION, DROP_PROCEDURE
AS

SET NOCOUNT ON;

-- prevent recursion
IF TRIGGER_NESTLEVEL(@@PROCID, 'AFTER', 'DDL') > 1
    RETURN;

DECLARE @Event XML = EVENTDATA();

DECLARE @DatabaseName sysname,
        @SchemaName sysname,
        @ObjectName sysname,
        @ObjectType nvarchar(50),
        @Command nvarchar(max);

SELECT
  @DatabaseName = EventInstance.value('(DatabaseName/text())[1]', 'sysname'),
  @SchemaName   = EventInstance.value('(SchemaName/text())[1]', 'sysname'),
  @ObjectName   = EventInstance.value('(ObjectName/text())[1]', 'sysname'),
  @ObjectType   = EventInstance.value('(ObjectType/text())[1]', 'nvarchar(50)'),
  @Command      = EventInstance.value('(TSQLCommand/CommandText/text())[1]', 'nvarchar(max)')
FROM @Event.nodes('/EVENT_INSTANCE') x1(EventInstance);

----------------------------------------------------------------------------------------------------
-- Get object definition

ROLLBACK;

DECLARE @proc nvarchar(1000) = CONCAT(@DatabaseName, '.sys.sp_executesql');
EXEC @proc N'
INSERT master.dbo.DefinitionBackup (SchemaName, ObjectName, Definition)
SELECT @SchemaName, @ObjectName, m.definition
FROM sys.objects o
JOIN sys.schemas s ON s.schema_id = o.schema_id
JOIN sys.sql_modules m ON m.object_id = o.object_id
WHERE s.name = @SchemaName
  AND o.name = @ObjectName;
',
  N'@ObjectName sysname, @SchemaName sysname',
  @ObjectName = @ObjectName,
  @SchemaName = @SchemaName;

----------------------------------------------------------------------------------------------------
-- Begin a new transaction and continue with the drop

BEGIN TRAN;

EXEC @proc @Command;

----------------------------------------------------------------------------------------------------
-- Define output message

DECLARE @OutputMessage nvarchar(max) = CONCAT('
The object has been removed. A backup of its definition was automatically generated; you can find it in master.dbo.DefinitionBackup.

Please ignore the message: "The transaction ended in the trigger. The batch has been aborted."
That message is generated because the initial "DROP" statement is canceled to get the object definition and then the "DROP" statement is executed again.
');

PRINT @OutputMessage;
```
I'm trying to show/hide more info about names on click, but don't know how to make it work on a single click.

I'm fetching JSON that has game-related info and creating a div with JS like so:

```js
fetch("thething.json")
    .then(res => res.json())
    .then(data => {
        games = data.map(game => {
            const newDiv = document.createElement("div");
            newDiv.className = "game-info";
            newDiv.innerHTML = `
                <p onclick="toggler()">${game.Name}</p>
                <div class="info">
                    <img src="${game.Thumbnail}">
                    <p>${game.Description}</p>
                </div>
            `;
            document.body.appendChild(newDiv);
```

The JSON itself has a bunch of info on games in a structure like this:

```json
[
    {"Name": "gamename1", "Thumbnail": "https://imglocation", "Description": "This game is indeed a game"},
    {"Name": "gamename2", "Thumbnail": "https://imglocation2", "Description": "This game2 is indeed a game2"}
]
```

The toggler function is written like this:

```js
function toggler() {
    $('div.game-info').click(function (e) {
        $(this).children('.info').toggle();
    });
}
```

It kinda works, but it takes one or more clicks to view the additional info. I know the problem is due to multiple onclick calls, but I don't know how to make it work with a single click. I tried using jQuery without the toggler function, but then it opens the info on all the names and not just the one that was clicked.

So if someone could either tell me how to get rid of that secondary onclick, or how to properly target the info I clicked in the innerHTML section, that'd be great!
I am trying to use LangChain and LangServe, and create an API from a template. Steps I took:

1. Create the server template:

```
langchain app new my-app --package rag-conversation
```

2. Copy in the code provided in the cmd after the installation is ready:

```
@app.get("/")
async def redirect_root_to_docs():
    return RedirectResponse("/docs")

add_routes(app, rag_conversation_chain, path="/rag-conversation")
```

3. cd into the my-app folder and run `langchain serve` in the cmd.

After this, the server can't seem to start and throws this error:

```
INFO:     Will watch for changes in these directories:
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [10212] using WatchFiles
ERROR:    Error loading ASGI app. Could not import module "app.server".
```

Does anyone know how to approach this issue? There is no -v command option, so this is all the information I have to go by. The project structure is the following:

[![project structure][1]][1]

This shows that app does contain the server file, and the server file has the following content:

```
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from rag_conversation import chain as rag_conversation_chain

app = FastAPI()

@app.get("/")
async def redirect_root_to_docs():
    return RedirectResponse("/docs")

add_routes(app, rag_conversation_chain, path="/rag-conversation")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

If you know a solution for this issue, please be descriptive. I have found some people encountering something similar a few months ago ([https://github.com/langchain-ai/opengpts/issues/61][2]), but there they were talking about a dependency issue and a requirements.txt, which I can't even find in my app structure.

[1]: https://i.stack.imgur.com/L5nq4.png
[2]: https://github.com/langchain-ai/opengpts/issues/61
My query:

```
select custname,
       case when date < '11/26/2023' then -1 else datepart(wk, date) end 'week#',
       sum(amount) sales,
       count(salesid) orders
from SalesTable
inner join CustomerTable c on salestable.CustID = c.CustID
where date < '1/27/2024'
  and c.CustID = 10285 or c.CustID = -2
group by c.custid, custname, [address],
         case when date < '11/26/2023' then -1 else datepart(wk, date) end,
         case when date < '11/26/2023' then '11/25/2023' else DATEADD(dd, 7 - (DATEPART(dw, date)), date) end
order by 1, 2
```

gets each customer's sales (sum of amount, week number, number of orders), one week per row, like:

| custname | week# | sales | orders |
|---------|----------|----------|-----------|
|CustAAA | -1 | 974697.41 | 62013 |
|CustAAA |1| 10.01 | 5 |
|CustAAA |2| 10 |2|
|CustAAA |2| 372.95| 11|
|CustAAA |3| 70.86| 13|
|CustAAA |3| 0| 3|
|CustAAA |4| 8.08| 2|
|CustAAA |5| 20 |6|
|CustAAA |48| 0 |38|
|CustAAA |49 |84.27| 2|
|CustXYZ |-1 |12.12| 1|
|CustXYZ |1 |22.59| 1|
|CustXYZ |4 |117.9| 1|
|CustXYZ |48 |19.3| 1|

[enter image description here](https://i.stack.imgur.com/3qC7j.png)

How do I PIVOT to one row per customer, with each week number as a column for amount and a column for orders, then the next week number, like this example:

[enter image description here](https://i.stack.imgur.com/Z8sHj.png)
I am trying to convert a JSON object that I fetched using async/await into an array. I have been doing it like this:

```
const apiurl = 'https://api.wheretheiss.at/v1/satellites/25544';

async function getArray() {
    var arr = [];
    const response = await fetch(apiurl);
    const jsonobject = await response.json();
    for (var i in jsonobject) {
        arr.push(i, jsonobject[i]);
    }
    console.log(arr);
}
```

I am trying to make an object inside an array for every marker that is saved in my JSON. Right now the JSON contains one marker, with the coordinates of the ISS. For example: `{['iss','coordinates'],['harz',coordinates],...}`

The code gives me an array of length 26 looking like this: `['name','iss','lat','45', ...]`, but I would like an array of length 1 looking like this: `['name:iss, Lat:45, Lng:32']`; the array should contain one element for every marker that I'm saving in my JSON.
I'm attempting to build Qt5 for use on a BeagleBone Black, in an Ubuntu 22.04 VirtualBox VM. I'm following this guide to simplify the process as much as possible: https://github.com/K3tan/BBB_QT5_guide?tab=readme-ov-file

I'm coming up against a brick wall though and don't know how to get around it. I've used the configuration line

```
./configure -platform linux-g++ -release -device linux-beagleboard-g++ -sysroot /usr/local/linaro/sysroot -prefix ~/Qt5ForBBB -hostprefix ~/Qt5forBBB -device-option CROSS_COMPILE=/usr/local/linaro/linaro-gcc/bin/arm-linux-gnueabihf- -nomake tests -nomake examples -no-opengl -opensource -confirm-license -reduce-exports -make libs
```

which seemed to complete without any errors. When the configuration script completed, it said something along the lines of "run gmake to build". So I did. Unfortunately, I'm running into the following issue:

```
In file included from /home/tim/qt-everywhere-src-5.15.2/qtlocation/src/location/declarativemaps/qdeclarativepolylinemapitem.cpp:38:0:
/home/tim/qt-everywhere-src-5.15.2/qtlocation/src/location/declarativemaps/qdeclarativepolylinemapitem_p_p.h:381:17: error: β€˜const char* MapPolylineShaderLineStrip::vertexShader() const’ marked β€˜override’, but does not override
     const char *vertexShader() const override {
```

This is but one of many errors of the same type. All of the errors are "marked β€˜override’, but does not override" errors.
I've also got this error:

```
/home/tim/qt-everywhere-src-5.15.2/qtlocation/include/QtLocation/5.15.2/QtLocation/private/../../../../../src/location/declarativemaps/qdeclarativepolygonmapitem_p_p.h: In member function β€˜virtual void MapPolygonShader::initialize()’:
/home/tim/qt-everywhere-src-5.15.2/qtlocation/include/QtLocation/5.15.2/QtLocation/private/../../../../../src/location/declarativemaps/qdeclarativepolygonmapitem_p_p.h:186:23: error: β€˜program’ was not declared in this scope
     m_matrix_id = program()->uniformLocation("qt_Matrix");
```

I would have thought that this would be a straightforward boilerplate process, but that does not appear to be the case. Does anyone have any idea what I can do to resolve these errors?
Langserver could not import module app.server
|python|dependencies|fastapi|langchain|py-langchain|
Hello, friends. I'm trying to run a simple AspectJ example inside a Gradle project.

build.gradle:

```
plugins {
    id 'java'
    id "io.freefair.aspectj" version "5.1.1"
}

group = 'org.example'
version = '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

sourceSets {
    main {
        java {
            srcDirs = ['src/main/java', 'src/main/aspectj']
        }
    }
    test {
        java {
            srcDirs = ['src/test/java']
        }
    }
}

dependencies {
    testImplementation platform('org.junit:junit-bom:5.9.1')
    testImplementation 'org.junit.jupiter:junit-jupiter'
    testImplementation group: 'org.assertj', name: 'assertj-core', version: '3.23.1'
    testImplementation group: 'org.aspectj', name: 'aspectjweaver', version: '1.9.6'
    implementation group: 'org.aspectj', name: 'aspectjweaver', version: '1.9.6'
    implementation group: 'org.aspectj', name: 'aspectjrt', version: '1.9.6'
    testImplementation group: 'org.aspectj', name: 'aspectjrt', version: '1.9.6'
    testImplementation group: 'org.hamcrest', name: 'hamcrest', version: '2.2'
    testImplementation 'junit:junit:4.13.1'
}

test {
    useJUnitPlatform()
}
```

In the folder 'src/main/aspectj', class org.example.aspectj.DemoAspect:

```
package org.example.aspectj;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class DemoAspect {
    @Before("execution(* *(..)) && !within(org.example.aspectj.DemoAspect)")
    public void logEnter(JoinPoint joinPoint) {
        System.out.println("!!!!!!DemoAspect");
        System.out.print(joinPoint.getStaticPart());
        System.out.print(" -> ");
        System.out.println(joinPoint.getSignature());
    }
}
```

In the folder 'src/main/java', class org.example.aspectj.Main:

```
package org.example.aspectj;

import java.util.Date;

public class Main {
    public static void main(String[] args) {
        System.out.println("new Date() -> " + new Date());
    }
}
```

When I run the Main class, the code from DemoAspect is not called. What is wrong in the configuration? Thanks.
I launch the Main class and expect to see the output from the DemoAspect class.
Why doesn't AspectJ catch the event?
|java|gradle|testing|aspectj|
I have a list of collapsible divs that is populated with data from a GeoJSON file and updated dynamically as you zoom in and out of a map, to reflect the markers that are within the bounds of the map. When you click on a collapsible div, it opens to show details about the feature. I would like to add a link/button underneath the information shown that says 'Zoom To Marker', which zooms the map to the marker location when it is clicked.

I have tried many ways of doing this, but whatever I do, the `ZoomTo()` function I call is always `undefined`. The `ZoomTo()` function is above the function in the external JavaScript file that creates the list and link. This is driving me crazy; it seems like it should be so simple. The link shows OK, but when it is clicked the following error occurs:

>Uncaught ReferenceError: ZoomTo is not defined <anonymous> javascript:ZoomTo(extent);:1

I have tried a variety of suggestions from Stack Overflow but to no avail: making it global, having the function above, using buttons and appending them. The functions are within an `initMap` function that is called when the window loads; the `FeatureType` function is called from event handlers when the map changes.
```
function ZoomTo(extent) {
  //map.getView().fit(extent, {padding: [100, 100, 100, 100], maxZoom: 15, duration: 500});
  alert(extent);
}

function FeatureType(mapfeaturetype, mapExtent) {
  htmlStr = "<div class='accordion' id='" + accordionid + "'>";
  // iterate through the feature array
  for (var i = 0, ii = mapfeaturetype.length; i < ii; ++i) {
    var featuretemp = mapfeaturetype[i];
    // get the geometry for each feature point
    var geometry = featuretemp.getGeometry();
    var extent = geometry.getExtent();
    extent = ol.proj.transformExtent(extent, 'EPSG:3857', 'EPSG:4326');
    // If the feature is within the map view bounds, display its details
    // in a collapsible div in the side menu
    var inExtent = ol.extent.containsExtent(mapExtent, extent);
    if (inExtent) {
      htmlStr += "<div class='accordion-item'><div class='accordion-header' id='oneline'>";
      htmlStr += "<span class='image'><img src=" + imgType + "></span><span class='text'>";
      htmlStr += "<a class='btn' data-bs-toggle='collapse' data-bs-target='#collapse" + i + "' href='#collapse" + i + "'";
      htmlStr += " aria-expanded='false' aria-controls='collapse" + i + "'>" + featuretemp.get('Name') + "</a></span>";
      htmlStr += "</div><div id='collapse" + i + "' class='accordion-collapse collapse' data-bs-parent='#" + accordionid + "'>";
      htmlStr += "<div class='accordion-body'><h3>" + featuretemp.get('Address') + "</h3><h3>" + featuretemp.get('ContactNo') + "</h3>";
      htmlStr += "<h5><a href=" + featuretemp.get('Website') + " target='_blank'> " + featuretemp.get('Website') + " </a></h5>";
      htmlStr += "<h5>" + featuretemp.get('Email') + "</h5><h5>" + featuretemp.get('Descriptio') + "</h5>";
      htmlStr += "<div id='zoom'><a href='javascript:ZoomTo(extent);'>Zoom to Marker</a></div>";
      htmlStr += "</div></div></div>";
    } // end if
  } // end loop
  htmlStr += "</div>";
  document.getElementById("jsoncontent").innerHTML += htmlStr;
}
```
This seems wrong:

```
.where('title', // Filter products by title based on searchText
    isGreaterThanOrEqualTo: widget.searchText.toLowerCase(),
    isLessThanOrEqualTo: widget.searchText.toLowerCase() + 'z')
```

As far as I know, the correct syntax is:

```
.where('title', isGreaterThanOrEqualTo: widget.searchText.toLowerCase())
.where('title', isLessThanOrEqualTo: widget.searchText.toLowerCase() + 'z')
```

You'll also want to make sure that you have the [required index](https://firebase.google.com/docs/firestore/query-data/indexing), as what is needed here isn't auto-created.

---

Your builder fails to handle errors from the Firestore stream, which is probably why you don't see any errors. **Every** builder should start with something like this:

```
builder: (context, snapshot) {
  if (snapshot.hasError) {
    print('ERROR: ${snapshot.error}');
    return Text('ERROR: ${snapshot.error}');
  }
  print('this: ${widget.searchText}');
  print('$snapshot');
  if (snapshot.connectionState == ConnectionState.waiting) {
  ...
```
After the Chrome 123 update, my DevTools font has been changed to 'Monospace'. Is there any way to roll back my DevTools font to a readable font?

![Monospace font in Chrome DevTools](https://i.stack.imgur.com/ClMXo.png)

I've tried using [DevTools Font Changer](https://chromewebstore.google.com/detail/devtools-font-changer/fikbcnlbgoafooldbkgikejejhaddajg), but it didn't change my console font:

![Monospace font still in Chrome DevTools console input](https://i.stack.imgur.com/YswQd.png)
Chrome DevTools font has been changed to Monospace after update
I have a table called "Contact" with the ID and status of each contact, and another table called "Product" with the contact ID, product status, and product ID. I'm trying to find contacts that don't have any active product. I have used the query below:

```
Select c.ID from Contact c
where c.id not in (select p.contactid from product p where p.product_status = 'Active')
```

but I'm getting both ID 1234 and 1223, whereas in theory I should only get 1223. How do I tweak my script to return only contacts that don't have any active product?

Contact table

[![Contact Table][1]][1]

Product table

[![Product table][2]][2]

[1]: https://i.stack.imgur.com/ZQ46m.png
[2]: https://i.stack.imgur.com/nsRCE.png
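For illustration only (table contents and column names are guessed from the screenshots), here is the situation reproduced in SQLite with a `NOT EXISTS` anti-join, which is the usual safe pattern for "has no matching row":

```python
import sqlite3

# Toy reproduction of the two tables (names and values assumed):
# contact 1234 has an Active product, contact 1223 has only an Inactive one.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contact (id INTEGER, status TEXT);
    CREATE TABLE product (contactid INTEGER, product_status TEXT, productid INTEGER);
    INSERT INTO contact VALUES (1234, 'Active'), (1223, 'Active');
    INSERT INTO product VALUES (1234, 'Active', 1), (1234, 'Inactive', 2),
                               (1223, 'Inactive', 3);
""")

# NOT EXISTS anti-join: keep only contacts with no 'Active' row in product
rows = con.execute("""
    SELECT c.id
    FROM contact c
    WHERE NOT EXISTS (
        SELECT 1 FROM product p
        WHERE p.contactid = c.id AND p.product_status = 'Active'
    )
""").fetchall()

print(rows)  # -> [(1223,)]
```

If the equivalent query still returns both IDs against the real data, a common culprit is trailing spaces or case differences in `product_status`, which wrapping the column in `TRIM`/`UPPER` inside the subquery would reveal.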
Query to find contacts without any active product
|sql|sql-server|
So I just got into WebGL graphics a few months ago and I wrote a class to add a filter to any render. The only problem with it was, the framebuffer texture always returns black. I don't know why and it seems like there isn't more than a handful of examples using multiple shaders so it would help a lot if y'all know anything about this. Here's my code: ``` export { FXFilter } import { Shader } from "./shader.js"; class FXFilter { constructor(gl,vss,fss) { // Create a texture to render the triangle to this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, this.texture); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 400, 400, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); // Create a framebuffer and attach the texture this.framebuffer = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, this.texture, 0); this.quadbuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, this.quadbuffer); var x1 = -1; var x2 = 1; var y1 = -1; var y2 = 1; gl.bufferData( gl.ARRAY_BUFFER, new Float32Array([ x1, y1, x2, y1, x1, y2, x1, y2, x2, y1, x2, y2, ]), gl.STATIC_DRAW); this.shader = new Shader(vss,fss); this.shader.setup(gl); this.resolutionLocation = gl.getUniformLocation(this.shader.program, "resolution"); this.positionAttributeLocation = gl.getAttribLocation(this.shader.program, "position"); gl.enableVertexAttribArray(this.positionAttributeLocation); } render(gl) { gl.useProgram(this.shader.program); gl.bindTexture(gl.TEXTURE_2D, this.texture); gl.bindFramebuffer(gl.FRAMEBUFFER, null); gl.uniform2f(this.resolutionLocation, gl.canvas.width, gl.canvas.height); gl.bindBuffer(gl.ARRAY_BUFFER, this.quadbuffer); gl.vertexAttribPointer(this.positionAttributeLocation, 2, gl.FLOAT, false, 0, 0); gl.drawArrays(gl.TRIANGLES, 0, 6); } } ``` I tried to rearrange the order that I bound the buffers but that gave the same result. 
Here's my vertex shader:

```
attribute vec2 position;

void main() {
    gl_Position = vec4(position, 0.0, 1.0);
}
```

Here's my fragment shader:

```
precision highp float;

uniform sampler2D u_texture;
uniform vec2 resolution;

void main() {
    // convert the rectangle from pixels to 0.0 to 1.0
    vec2 zeroToOne = gl_FragCoord.xy / resolution;

    // convert from 0->1 to 0->2
    vec2 zeroToTwo = zeroToOne * 2.0;

    // convert from 0->2 to -1->+1 (clipspace)
    vec2 clipSpace = zeroToTwo - 1.0;

    gl_FragColor = texture2D(u_texture, clipSpace);
}
```
QT5 cross compile for beaglebone black installation error
|ubuntu|qt5|virtual-machine|cross-compiling|beagleboneblack|
I'm attempting to acquire the temperature value from a Claxon-CXP1 connected to a Basler VNIR hyperspectral camera. I'm following these documents, and I'm either not opening the board or getting a null node result when I expect a value:

https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice

https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice.Open

https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice.getNode

The cameras and frame grabbers are powered up, operational, and produce data via other applications; I'm just not sure exactly how to get the temperature. This code was run in isolation, without any other apps/drivers running.

<strike>Why is the board not opening?</strike> Edited: it was my code; I forgot to check isOpen again. Why is the temperature node invalid and null? Guidance, docs, and code examples would be appreciated.
```
import typing
import time
import os
import sys

# Specify DLL file locations for import of BitFlow and CameraLink libraries
os.add_dll_directory(r"C:\BitFlow SDK 6.5\Bin64")
os.add_dll_directory(r"C:\Program Files\CameraLink\Serial")

import BFModule.BFGTLUtils as BfUtils  # pylint: disable=no-name-in-module, wrong-import-order


def get_vnir_temp() -> None:
    """_summary_
    https://www.bitflow.com/PythonHelp/_generate/BFModule.BFGTLUtils.BFGTLDevice.html#BFModule.BFGTLUtils.BFGTLDevice

    Returns:
        typing.Optional[float]: _description_
    """
    device = BfUtils.BFGTLDevice()
    print(f'bordCount: {device.boardCount()}')
    is_open = device.isOpen()
    print(f'isOpen:{is_open}')
    if is_open is not True:
        print('opening device')
        device.Open(1)
        is_open = device.isOpen()
        print(f'isOpen:{is_open}')
    time.sleep(.5)
    try:
        temp_node = device.getNode('DeviceTemperature')
        print(f'NodeName:{temp_node.DisplayName}')
        print(f'Valid:{temp_node.Valid}')
        print(f'IsNull:{temp_node.isNull}')
    except Exception:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
        print(f"Exception caught: {exc_type}")
        print(f" \\__ file: {fname}, line #{exc_tb.tb_lineno}")


get_vnir_temp()

"""
output:
Qt: Untested Windows version 10.0 detected!
bordCount: 0
isOpen:0
opening device
isOpen:0
NodeName:Device Temperature
Valid:False
IsNull:True
"""
```
```
import xlwings as xw

# open an invisible Excel instance
app = xw.App(visible=False)
book = xw.apps[app.pid].books.open(filepath)

### do stuff

book.save()
book.close()
app.quit()
```

This works for me perfectly. It opens a new Excel instance and uses only this instance for the changes made by Python. This way I can open a manual instance to work in while the program is running, without interfering with it.
I have this Python class and want your opinion on whether this `cls._instance._cache = {}` is thread safe for `tornado`. If not, how can I make this cache thread safe?

```
import logging
import aiohttp
import time

# Constants
DEFAULT_TIMEOUT = 20
MAX_ERRORS = 3
HTTP_READ_TIMEOUT = 1


class HTTPRequestCache:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # TODO: check whether it's thread safe with the tornado event loop
            cls._instance._cache = {}
            cls._instance._time_out = DEFAULT_TIMEOUT
            cls._instance._http_read_timeout = HTTP_READ_TIMEOUT
            cls._instance._loop = None
        return cls._instance

    async def _fetch_update(self, url):
        try:
            async with aiohttp.ClientSession() as session:
                logging.info(f"Fetching {url}")
                async with session.get(url, timeout=self._http_read_timeout) as resp:
                    resp.raise_for_status()
                    resp_data = await resp.json()
                    cached_at = time.time()
                    self._cache[url] = {
                        "cached_at": cached_at,
                        "config": resp_data,
                        "errors": 0
                    }
                    logging.info(f"Updated cache for {url}")
        except aiohttp.ClientError as e:
            logging.error(f"Error occurred while updating cache for {url}: {e}")

    async def get(self, url):
        if url not in self._cache or self._cache[url]["cached_at"] < time.time() - self._time_out:
            await self._fetch_update(url)
        return self._cache.get(url, {}).get("config")
```
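For context: Tornado's IOLoop is a single-threaded asyncio event loop, so plain dict assignments are not preempted mid-operation; the realistic hazard is interleaving across `await` points, where two coroutines both see a stale entry and fetch the same URL twice. A sketch of one way to guard against that, using an `asyncio.Lock` plus a re-check after acquiring it (the class and names here are hypothetical stand-ins, and the network call is stubbed out):

```python
import asyncio
import time

class CacheStub:
    """Hypothetical sketch: serialize refreshes so that concurrent
    coroutines hitting the same cache miss trigger only one fetch."""

    def __init__(self, timeout=20):
        self._cache = {}
        self._timeout = timeout
        self._lock = asyncio.Lock()  # one lock; a per-URL lock dict also works
        self.fetch_count = 0

    async def _fetch_update(self, url):
        # stand-in for the aiohttp request
        await asyncio.sleep(0)  # yield to the loop, as a real network call would
        self.fetch_count += 1
        self._cache[url] = {"cached_at": time.time(), "config": {"url": url}}

    async def get(self, url):
        entry = self._cache.get(url)
        if entry and entry["cached_at"] >= time.time() - self._timeout:
            return entry["config"]
        async with self._lock:
            # re-check: another coroutine may have refreshed while we waited
            entry = self._cache.get(url)
            if not entry or entry["cached_at"] < time.time() - self._timeout:
                await self._fetch_update(url)
        return self._cache.get(url, {}).get("config")

async def main():
    cache = CacheStub()
    results = await asyncio.gather(*(cache.get("http://x") for _ in range(5)))
    return cache.fetch_count, results

count, results = asyncio.run(main())
print(count)  # 1: five concurrent gets, but only one actual fetch
```

Without the lock and re-check, all five coroutines would miss the cache and each would fetch; with them, the first fetch fills the cache and the rest skip it.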
|powershell|pdflatex|tab-completion|
I'm stuck on a problem with TypeScript code that validates a submission form. In this form, an upload field was added where the user can upload multiple files, and a kind of visual list was also created so the user can add and remove files.

When I click to select the files and select them all at once in the file explorer, they appear correctly in the array. But when I add one at a time, only the last file added appears in the array. So I believe that at some point the array is being overwritten or redefined.

[enter image description here](https://i.stack.imgur.com/zv8dL.png)

---

This is the section of code responsible for creating the array and validating the files:

```
const uploadField = document.getElementById('00N1Q00000Tnupu') as HTMLInputElement; // id of the upload field
const fileListDiv = document.getElementById('fileList') as HTMLDivElement;
const filesArray: File[] = [];

uploadField?.addEventListener('change', function () {
  const files = uploadField.files;
  if (files) {
    for (let i = 0; i < files.length; i++) {
      const file = files[i];
      const fileName = file.name;
      const fileSizeMB = file.size / (1024 * 1024);
      const allowedFormats = ['jpg', 'png', 'pdf', 'docx', 'xlsx', 'zip'];
      const fileExtension = fileName.split('.').pop()?.toLowerCase();

      if (fileSizeMB > 10 || !allowedFormats.includes(fileExtension || '')) {
        const erroDiv = document.querySelector('.feedback[data-input="00N1Q00000Tnupu"]');
        erroDiv.classList.remove('hidden');
        erroDiv.classList.add('error');
        const erroDivSpan = document.querySelector('.feedback[data-input="00N1Q00000Tnupu"] span');
        // eslint-disable-next-line max-len
        erroDivSpan.innerHTML = 'Please check the uploaded file(s). Make sure that all files are in the allowed formats (jpg, png, pdf, docx, xlsx, zip) and the total size of the files does not exceed 10MB.';
        return;
      }

      filesArray.push(file);

      const fileItem = document.createElement('div');
      fileItem.textContent = fileName;

      const removeButton = document.createElement('i');
      removeButton.classList.add('icon-trash');
      removeButton.style.color = '#a71900';
      removeButton.style.marginLeft = '5px';
      removeButton.style.cursor = 'pointer';

      removeButton.addEventListener('click', () => {
        const index = filesArray.indexOf(file);
        if (index !== -1) {
          filesArray.splice(index, 1);
        }
        fileListDiv.removeChild(fileItem);
      });

      fileItem.appendChild(removeButton);
      fileListDiv.appendChild(fileItem);
    }
  }

  const erroDiv = document.querySelector('.feedback[data-input="00N1Q00000Tnupu"]');
  erroDiv.classList.add('hidden');
  erroDiv.classList.remove('error');
});
```

I made some attempts to store this array separately, but without success.
Upload field overwriting file array
I have two different ranges in Excel and I want to save both ranges in one image, using Python. I tried to use `Union`, but it throws the exception `<unknown>.Union`. Thank you for your help.

```
excel = win32com.client.Dispatch('Excel.Application')
excel.visible = False
wb = excel.Workbooks.Open(self.source_path)
ws = wb.Worksheets[0]

# This is what I tried
# ws.Union(ws.Range("B843:CZ847"), ws.Range("B4:CZ5")).Copy()

img = ImageGrab.grabclipboard()
imgFile = os.path.join(self.excel_target, self.excel_name)
img.save(imgFile)
```
I am trying to filter a tibble into two tibbles. My data has a column where the values increase up to a peak and then decrease again. I checked: https://stackoverflow.com/questions/72320734/filter-items-based-on-whether-the-amount-is-increasing-or-decreasing

But this doesn't really work for my problem; I want to classify on a row-by-row basis. I tried:

```
rawfilt <- raw %>%
  mutate(status = case_when(
    first(V) < last(V) ~ "Increasing",
    last(V) < first(V) ~ "Decreasing"))
```

But this just labels everything as decreasing (as the last value is lower than the first). My data looks like:

|t|V|
|:-|-:|
|1|2|
|2|3|
|3|4|
|4|3|
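The row-by-row idea is a lag comparison: label each row by comparing its value to the previous row's value, then split on that label. In dplyr this would be something like `mutate(status = if_else(V > lag(V), "Increasing", "Decreasing"))`; the same logic, sketched here in plain Python just to make the mechanics concrete:

```python
# Row-by-row trend labelling: compare each value with the previous one.
# The first row gets None because it has no previous value to compare against.
def label_trend(values):
    labels = [None]
    for prev, cur in zip(values, values[1:]):
        labels.append("Increasing" if cur > prev else "Decreasing")
    return labels

v = [2, 3, 4, 3]           # the V column from the example data
status = label_trend(v)    # [None, "Increasing", "Increasing", "Decreasing"]

# splitting into the two groups, as filtering into two tibbles would
increasing = [val for val, s in zip(v, status) if s == "Increasing"]
decreasing = [val for val, s in zip(v, status) if s == "Decreasing"]
print(increasing, decreasing)  # [3, 4] [3]
```

`first()`/`last()` compare the same two endpoint values for every row, which is why every row gets the same label; `lag()`-style comparison is what varies row by row.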
I'd go with a `CanActivate` guard:

```
canActivate() {
  if (isMobile) {
    return this.router.createUrlTree(['list']);
  } else {
    return true;
  }
}
```
There is a property for this scenario. When combined with `showTime`, the `hideOnDateTimeSelect` property hides the overlay on date selection. So make sure `hideOnDateTimeSelect` is set to `true` and you will get the desired behavior without dealing with refs.

From the docs: https://primereact.org/calendar/#api.Calendar.props.hideOnDateTimeSelect
I need to import a dump file into an Oracle database. I set myself up with a test VM, installed the latest Oracle Database 21c Express Edition, and ran:

```
impdp system/123 directory=C:/dump dumpfile=test.dmp logfile=import.log full=y
```

This produced an error message saying it could not find the directory and could not create the log file. I found that I can either set up a directory object in Oracle or move the files to a different location, so I moved everything to "C:\app\textVM\product\21c\admin\XE\dpdump" and ran said import command again... It came back with error messages saying the users could not be created, and that all the tables could not be created because of the missing users (I translated the errors into English):

```
Objekttype SCHEMA_EXPORT/USER is being processed
ORA-39083: Objekttype USER:"UserA" could not be created, Error:
ORA-65096: Invalid common user or role name
Wrong SQL is:
CREATE USER "UserA" IDENTIFIED BY VALUES 'S:123...456;78..90' DEFAULT TABLESPACE "UserA" TEMPORARY TABLESPACE "TEMP"
ORA-39083: Objekttype USER:"UserB" could not be created, Error:
ORA-65096: Invalid common user or role name
Wrong SQL is:
CREATE USER "UserB" IDENTIFIED BY VALUES 'S:123...456;78..90' DEFAULT TABLESPACE "UserB" TEMPORARY TABLESPACE "TEMP"
ORA-39083: Objekttype USER:"UserC" could not be created, Error:
ORA-65096: Invalid common user or role name
Wrong SQL is:
CREATE USER "UserC" IDENTIFIED BY VALUES 'S:123...456;78..90' DEFAULT TABLESPACE "UserC" TEMPORARY TABLESPACE "TEMP"
Objekttype SCHEMA_EXPORT/SYSTEM_GRANT is being processed
ORA-39083: Objekttype SYSTEM_GRANT could not be created, Error:
ORA-01917: User or function UserB does not exist
... More errors, all stating that a user or function does not exist
```

I need to figure out how to fix this... Apparently I cannot create any users. I did some digging, and I guess I cannot import with the SYSTEM account but have to make a different user account to do the import. Correct? So I tried creating an import user to run the import. Not working, same error.
I looked it up, and I guess I cannot create users in a CDB but have to do that in a PDB... After some googling I found a way to create a user. I opened sqlplus as system/123:

```
>alter session set container=XEPDB1;
Session altered
>create user import identified by 123;
User created
>grant connect to import;
User access (Grant) has been granted.
>grant all privileges to import identified by 123;
User access (Grant) has been granted.
>create directory importDir as C:/dump
Directory created
```

Flawless, I guess. So I can now run the import with my new user, right?

```
impdp import/123 directory=importDir dumpfile=test.dmp logfile=import.log full=Y
```

Not so fast, said Oracle:

```
UDI-01017: Process invoked Oracle Error 1017
ORA-01017: Username/Password not correct; Login rejected
```

So Google again, and it said to specify the PDB:

```
impdp import/123@XEPDB1 directory=importDir dumpfile=test.dmp logfile=import.log full=Y
```

Now it said:

```
UDI-12154: Process invoked Oracle Error 12154
ORA-12154: TNS: Specified Connect Identifier could not be resolved
```

So what am I missing? Is the whole process wrong? All I want is to connect to my Oracle server and run a whole bunch of SQL queries.
Export two different Excel ranges to one image in Python
|python|excel|
Superset beginner here... I have a decent amount of data (flow measurements) that needs to be interpreted differently depending on the measuring device collecting it.

For most of the data it is straightforward: the values are average m3/h, so to get a daily value I just sum avg*24. But a few measuring devices work differently. They are pulse based, which means they accumulate during the day and then reset for the next day. To get a daily value from those I need to use a MAX grouped by day (values are per minute).

A simple solution would be to handle this when displaying the data in Superset; that would allow me to retain my high-granularity (per-minute) data where it's needed, but also provide sums per day and month. I'm wondering if this could be achieved with a metric's SQL (in a table chart, for instance)? Something like: when unit = m3/h THEN value*24, else when unit = m3/day THEN MAX(value). This also needs to be grouped by day.

I don't really get the syntax needed for this; maybe it's because I confuse it with how I code in the load screen. I also don't really understand how a grouping here would relate to the time granularity chosen for the created table. Any advice? I have tried a different approach doing this in the back end, but it is a clumsy solution...
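In Superset terms this would likely be a metric with a `CASE WHEN` on the unit column, something like `CASE WHEN unit = 'm3/h' THEN AVG(value) * 24 ELSE MAX(value) END` (column names assumed, and `unit` would need to be constant per group), with the day grouping coming from the chart's time grain rather than an explicit GROUP BY. The intended per-device rule, sketched in plain Python to pin down the semantics:

```python
# Daily value per device (sketch; data shapes assumed): rate devices report
# average m3/h per minute, so a day's total is the mean flow times 24 h;
# pulsed devices accumulate and reset daily, so the day's max IS the total.
def daily_value(readings, unit):
    if unit == "m3/h":
        return sum(readings) / len(readings) * 24
    if unit == "m3/day":
        return max(readings)
    raise ValueError(f"unknown unit: {unit!r}")

print(daily_value([10, 10, 10], "m3/h"))   # 240.0: steady 10 m3/h over a day
print(daily_value([1, 5, 9], "m3/day"))    # 9: counter peaked at 9 before reset
```

The same two branches map onto the two aggregates in the CASE expression; only the choice of which aggregate to apply depends on the device type.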
I get this error when running npm run dev:

```
PS C:\Users\19043\node_projects\nortech_retail> npm run dev

> nortech_retail@0.1.0 dev
> next dev

   β–² Next.js 14.1.3
   - Local:        http://localhost:3000
   - Environments: .env.local

TypeError [ERR_INVALID_ARG_TYPE]: The "to" argument must be of type string. Received undefined
    at new NodeError (node:internal/errors:405:5)
    at validateString (node:internal/validators:162:11)
    at Object.relative (node:path:498:5)
    at Watchpack.<anonymous> (C:\Users\19043\node_projects\nortech_retail\node_modules\next\dist\server\lib\router-utils\setup-dev-bundler.js:1420:55)
    at Watchpack.emit (node:events:514:28)
    at Watchpack._onTimeout (C:\Users\19043\node_projects\nortech_retail\node_modules\next\dist\compiled\watchpack\watchpack.js:1:37727)
    at listOnTimeout (node:internal/timers:569:17)
    at process.processTimers (node:internal/timers:512:7) {
  code: 'ERR_INVALID_ARG_TYPE'
}
TypeError [ERR_INVALID_ARG_TYPE]: The "to" argument must be of type string. Received undefined
    at new NodeError (node:internal/errors:405:5)
    at validateString (node:internal/validators:162:11)
    at Object.relative (node:path:498:5)
    at Watchpack.<anonymous> (C:\Users\19043\node_projects\nortech_retail\node_modules\next\dist\server\lib\router-utils\setup-dev-bundler.js:1420:55)
    at Watchpack.emit (node:events:514:28)
    at Watchpack._onTimeout (C:\Users\19043\node_projects\nortech_retail\node_modules\next\dist\compiled\watchpack\watchpack.js:1:37727)
    at listOnTimeout (node:internal/timers:569:17)
    at process.processTimers (node:internal/timers:512:7) {
  code: 'ERR_INVALID_ARG_TYPE'
}
```

I get this error when running npm start:

```
PS C:\Users\19043\node_projects\nortech_retail> npm run start

> nortech_retail@0.1.0 start
> next start

   β–² Next.js 14.1.3
   - Local: http://localhost:3000

[Error: ENOENT: no such file or directory, open 'C:\Users\19043\node_projects\nortech_retail\.next\BUILD_ID'] {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'open',
  path:
'C:\\Users\\19043\\node_projects\\nortech_retail\\.next\\BUILD_ID'
}
PS C:\Users\19043\node_projects\nortech_retail>
```

npm run build prints:

```
PS C:\Users\19043\node_projects\nortech_retail> npm run build

> nortech_retail@0.1.0 build
> next build

   β–² Next.js 14.1.3
   - Environments: .env.local

   Creating an optimized production build ...
 βœ“ Compiled successfully

./components/image/LinkUnder.js
13:7  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
13:7  Warning: img elements must have an alt prop, either with meaningful text, or an empty string for decorative images.  jsx-a11y/alt-text

./components/image/ShuffleGallery.js
11:7  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
12:7  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
13:7  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element

./components/image/SlideGallery.js
39:6  Warning: React Hook useEffect has a missing dependency: 'handleTransitionEnd'. Either include it or remove the dependency array.  react-hooks/exhaustive-deps
58:6  Warning: React Hook useEffect has missing dependencies: 'startInterval' and 'stopInterval'. Either include them or remove the dependency array.  react-hooks/exhaustive-deps
63:9  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
63:9  Warning: img elements must have an alt prop, either with meaningful text, or an empty string for decorative images.  jsx-a11y/alt-text

./components/product/ProductContainer.js
28:24  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
28:24  Warning: img elements must have an alt prop, either with meaningful text, or an empty string for decorative images.  jsx-a11y/alt-text

./src/app/(admin)/admin/add/components/GetOptionsSelect.js
31:8  Warning: React Hook useEffect has a missing dependency: 'table'. Either include it or remove the dependency array.  react-hooks/exhaustive-deps

./src/app/(admin)/admin/add/components/GetOptionsSelectWJoin.js
35:8  Warning: React Hook useEffect has missing dependencies: 'join' and 'table'. Either include them or remove the dependency array.  react-hooks/exhaustive-deps

./src/app/(admin)/admin/add/components/PhotoUploadForm.js
25:5  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
25:5  Warning: img elements must have an alt prop, either with meaningful text, or an empty string for decorative images.  jsx-a11y/alt-text

./src/app/(admin)/admin/add/components/SetProductType.js
47:8  Warning: React Hook useEffect has a missing dependency: 'setError'. Either include it or remove the dependency array. If 'setError' changes too often, find the parent component that defines it and wrap that definition in useCallback.  react-hooks/exhaustive-deps
54:8  Warning: React Hook useEffect has missing dependencies: 'configMap' and 'setConfig'. Either include them or remove the dependency array. If 'setConfig' changes too often, find the parent component that defines it and wrap that definition in useCallback.  react-hooks/exhaustive-deps

./src/app/(customer)/checkout/payment/PaymentCollection.js
45:6  Warning: React Hook useEffect has missing dependencies: 'clientSecret' and 'paymentAttempted'. Either include them or remove the dependency array.  react-hooks/exhaustive-deps

./src/app/(customer)/products/all/components/ProductThumbnail.js
21:9  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
23:11  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element

./src/app/(customer)/uiTest/ProductContainerImageGallery.js
13:9  Warning: Using `<img>` could result in slower LCP and higher bandwidth. Consider using `<Image />` from `next/image` to automatically optimize images. This may incur additional usage or cost from your provider. See: https://nextjs.org/docs/messages/no-img-element  @next/next/no-img-element
13:9  Warning: img elements must have an alt prop, either with meaningful text, or an empty string for decorative images.  jsx-a11y/alt-text
29:37  Warning: Image elements must have an alt prop, either with meaningful text, or an empty string for decorative images.  jsx-a11y/alt-text

info  - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
 βœ“ Linting and checking validity of types
 βœ“ Collecting page data
   Generating static pages (17/19)  [===   ] [ 19, 16, 17, 18 ]
 βœ“ Generating static pages (19/19)
 βœ“ Collecting build traces
 βœ“ Finalizing page optimization

Route (app)                              Size     First Load JS
β”Œ β—‹ /                                    4.51 kB         120 kB
β”œ β—‹ /_not-found                          885 B          85.5 kB
β”œ β—‹ /admin                               140 B          84.7 kB
β”œ β—‹ /admin/add                           20.3 kB         129 kB
β”œ β—‹ /admin/products                      2.05 kB         111 kB
β”œ β—‹ /admin/querytest                     25.7 kB         163 kB
β”œ β—‹ /cart                                3.11 kB         119 kB
β”œ β—‹ /checkout/complete                   862 B          85.5 kB
β”œ β—‹ /checkout/intent                     3.55 kB         112 kB
β”œ β—‹ /checkout/payment                    7.59 kB        92.2 kB
β”œ β—‹ /help                                140 B          84.7 kB
β”œ β—‹ /order                               5.2 kB         89.8 kB
β”œ β—‹ /orderstatus                         676 B          85.3 kB
β”œ β—‹ /products/all                        347 B          84.9 kB
β”œ β—‹ /uiLaboratory                        879 B           114 kB
β”” β—‹ /uiTest                              2.03 kB        93.4 kB
+ First Load JS shared by all            84.6 kB
  β”œ chunks/69-46f60afa782793ea.js        29 kB
  β”œ chunks/fd9d1056-7573f80532d3a28d.js  53.4 kB
  β”” other shared chunks (total)          2.17 kB

Route (pages)                            Size     First Load JS
β”Œ Ξ» /api/neworder                        0 B            79.2 kB
β”œ Ξ» /api/order/confirm                   0 B            79.2 kB
β”œ Ξ» /api/order/confirm-old               0 B            79.2 kB
β”œ Ξ» /api/order/intent                    0 B            79.2 kB
β”œ Ξ» /api/order/status                    0 B            79.2 kB
β”œ Ξ» /api/shipstation                     0 B            79.2 kB
β”” Ξ» /api/stripe/payment-intent           0 B            79.2 kB
+ First Load JS shared by all            79.2 kB
  β”œ chunks/framework-aec844d2ccbe7592.js 45.2 kB
  β”œ chunks/main-bd7ddb9be6964a65.js      31.8 kB
  β”” other shared chunks (total)          2.14 kB

β—‹  (Static)   prerendered as static content
Ξ»  (Dynamic)  server-rendered on demand using Node.js
```

I have tried:

1. deleting package-lock.json, node_modules, and the .next directory, then running npm install
2. clearing the cache with --force
3. every sequence of the last 2 steps
4. rolling back the Node version
5. rolling forward the Node version
6. every sequence of 1, 2, 4, and 5
7. moving the root folder to another directory
8. every sequence of 1, 2, 4, and 5 again
9. crying

I updated my Node version two nights ago; the error first started appearing last night.

Newly installed packages: express. Newly deleted packages: express.

I was working on an API route when it started appearing. My environment variables are current across all 3 systems I work on. Vercel shows no errors when building for deploy.

My questions are:

1. How do I know where to look? I see where the errors are occurring, but I don't know what is causing them.
2. Should I hang up the gloves and become a frontiersman?

**EDIT** I rolled back my project (GitHub) to identify the breaking commit. I learned there were several working commits after I updated Node and installed/removed express. I removed all of the files modified in that commit and ran npm run dev after removing each one. No change. Not sure what this means, but I hope it helps! I will keep troubleshooting.

Oh, and regarding the npm build error, I checked to make sure the BUILD_ID file was indeed there with a string inside.
The `readr::read_csv()` function has an argument called `col_select` that allows you to specify which columns to read, using the same syntax as `dplyr::select()`. So in practice, this looks like:

    df1 <- readr::read_csv(
      file = "sample-data.csv",
      col_names = header,
      col_select = c(D, B)
    )

Which then gives the desired output:

    # A tibble: 3 Γ— 2
          D     B
      <dbl> <dbl>
    1     4     2
    2     8     6
    3    12    10

You can also call `attr(df1, "spec")`, which confirms that columns `A` and `C` were skipped when reading the file.
Looking at the error message, it seems that you didn't correctly set up the Authorized redirect URIs value in the Google Cloud console. See [data-login_uri](https://developers.google.com/identity/gsi/web/reference/html-reference#data-login_uri) and step 5 on this [page](https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid#get_your_google_api_client_id).

Thanks, Herman
The [ECS Task Execution IAM role][1] is used by the ECS service itself, for access to things like your ECR repository. This is the role used by ECS to access the other AWS services it needs to actually run your task. The optional [ECS Task IAM role][2] is provided to your running ECS task containers to provide the software running in your containers access to AWS resources. You need to provide your ECS task with a Task IAM role with the appropriate SES permissions. The AWS SDK you are using in your code will automatically pick up the ECS Task IAM role and use it. [1]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html [2]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
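For example, the task role you attach might carry a policy like the following. This is a minimal sketch: the choice of allowing only `ses:SendEmail` and `ses:SendRawEmail` on all resources is an assumption, so tighten it to your own identities and needs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}
```

You then reference that role's ARN in the task definition's `taskRoleArn` field, which is distinct from the `executionRoleArn` field that holds the Task Execution role.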
The lifecycle callback order of my three activities is as follows:

    ThemeSettingActivity::onActivityPaused
    ThemeSettingActivity::onActivityStopped
    ThemeSettingActivity::onActivitySaveInstanceState
    ThemeSettingActivity::onActivityDestroyed
    ThemeSettingActivity::onActivityCreated
    ThemeSettingActivity::onActivityStarted
    ThemeSettingActivity::onActivityResumed
    SettingActivity::onActivityDestroyed
    SettingActivity::onActivityCreated
    SettingActivity::onActivityStarted
    SettingActivity::onActivityResumed
    SettingActivity::onActivityPaused
    SettingActivity::onActivityStopped
    SettingActivity::onActivitySaveInstanceState
    MainActivity::onActivityDestroyed
    MainActivity::onActivityCreated
    MainActivity::onActivityStarted
    MainActivity::onActivityResumed
    MainActivity::onActivityPaused
    MainActivity::onActivityStopped
    MainActivity::onActivitySaveInstanceState
```r
library(dplyr)

data %>%
  arrange(month) %>%
  mutate(diff = share[pmatch(month + 1, month)] - share,
         year2 = if_else(month == 1, year - 1, year),
         .by = year) %>%
  mutate(diff2 = coalesce(diff, share[pmatch(month - 11, month)] - share),
         .by = year2) %>%
  select(-year2)
```

##### Output

```
   month year share  diff diff2
1      1 2000   0.2   0.4   0.4
2      1 2000   0.4   0.3   0.3
3      1 2015   0.1   0.2   0.2
4      2 2000   0.6  -0.4  -0.4
5      2 2000   0.7   0.1   0.1
6      2 2015   0.3    NA    NA
7      3 2000   0.2   0.3   0.3
8      3 2000   0.8  -0.7  -0.7
9      4 2000   0.5  -0.3  -0.3
10     4 2000   0.1    NA    NA
11     5 2000   0.2    NA    NA
12     6 2001   0.4    NA    NA
13     6 2001   0.1    NA    NA
14     7 2014   0.6  -0.3  -0.3
15     7 2014   0.7    NA    NA
16     8 2014   0.3  -0.1  -0.1
17     9 2014   0.2    NA    NA
18     9 2014   0.8    NA    NA
19    10 2000   0.5  -0.3  -0.3
20    10 2000   0.1    NA    NA
21    11 2000   0.2    NA    NA
22    11 2001   0.4   0.2   0.2
23    11 2001   0.1   0.6   0.6
24    12 2001   0.6    NA    NA
25    12 2001   0.7    NA    NA
26    12 2014   0.3    NA  -0.2
```
I keep receiving errors:

> The requested operation is not available. If you continue to experience problems, please contact your administrator

I was provided the following details:

1. SF host: https://example.com/learning
2. Client ID and secret
3. API: /example/odatav4/v1/$metadata

I want to retrieve data from the given API, but I think I need an access token. Based on [this solution](https://community.sap.com/t5/technology-q-a/get-oauth2-access-token-via-javascript/qaq-p/12817338), I wrote this C# code:

```csharp
static async Task<string> GetAccessToken()
{
    var TokenUrl = "https://example.com/learning/oauth/token"; // Token URL
    var ClientID = "...";     // Client ID
    var ClientSecret = "..."; // Client Secret
    var Encoded = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{ClientID}:{ClientSecret}"));

    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Encoded);
        var content = new StringContent("grant_type=client_credentials", Encoding.UTF8, "application/x-www-form-urlencoded");
        var response = await client.PostAsync(TokenUrl, content);

        if (!response.IsSuccessStatusCode)
        {
            string errorContent = await response.Content.ReadAsStringAsync();
            Console.WriteLine(errorContent);
            throw new Exception($"HTTP error! status: {response.StatusCode}");
        }

        var Response = await response.Content.ReadAsStringAsync();
        return Response;
    }
}
```
How do I make a class that uses async aiohttp thread-safe?
|python|tornado|aiohttp|
In this example, they mean the exact same thing. However, there are a few advantages to using the trailing return type form consistently (Phil Nash calls these ["East End Functions"](https://levelofindirection.com/blog/east-end-functions.html), since the return type is on the east end).

1. Using parameters. Obviously when using parameters to determine the return type, you _must_ use a trailing return type.

        template <typename T>
        auto print(T const& t) -> decltype(std::cout << t) {
            return std::cout << t;
        }

2. Name lookup. In a trailing return type, name lookup includes the class scope for member function definitions. This means you don't have to retype the class if you want to return a nested class:

        Type C::foo() { ... }         // error: don't know what Type is
        C::Type C::foo() { ... }      // ok
        auto C::foo() -> Type { ... } // ok

3. Likewise, for defining member functions where the class name for some reason must be disambiguated to be in the global namespace and the return type is a class:

        D ::C::foo() { ... }         // error, parsed as D::C::foo() { ... }
        auto ::C::foo() -> D { ... } // ok

4. If you are returning types like pointers to functions or pointers to arrays, it's actually possible to both write and understand the declaration using the trailing type syntax - whereas getting the syntax correct in the "normal" syntax is a challenge in its own right, understanding it even more so. These are both declarations of a function that takes no parameters and returns a pointer to function that takes an `int` and returns an `int`:

    ```cpp
    int (*get_function())(int);          // traditional
    auto get_function() -> int (*)(int); // trailing return type
    ```

5. A more reasonable ordering of information. Let's say you want to write a function `to_string` that takes an `int` and returns a `string`. That's a pretty sensible way of phrasing it: function name, parameters, return type. You wouldn't say you want to write a `string`-returning function named `to_string` that takes an `int`.
That's an awkward order of information. The name is the most important, followed by the parameters, followed by the return type. The trailing-return-type form allows you to order these pieces of information better. There are cases where the trailing return type is mandatory, cases where it is helpful, and cases where it does the same thing. There are no cases where it is worse, for reasons other than simple character count. Plus, mathematically we're used to thinking of functions as `A -> B` and not so much `B(A)`, and so `auto(*)(A) -> B` as a function pointer taking an `A` and returning a `B` is a little closer to that view than `B(*)(A)`.

<hr />

On the other hand, writing `auto main() -> int` looks ridiculous. But honestly, that's mostly because of unfamiliarity. There's nothing inherently ridiculous about it. If anything, it's a bit unfortunate that the language uses `auto` here as a way to declare a function - not because it's too long (some other languages use `fun` or even `fn`) - but because it's not really distinct from other uses of `auto`. If it were `func` instead, I think it would have been better (although it would not make any sense to change now).

<hr />

Ultimately, this is purely opinion-based. Just write code that works.
I've been trying to find out how to copy a specific sheet in an excel file. So far I haven't found the correct method using Pandas. All I can find is how to copy the sheet from one file to another. My goal is to make a copy of "Sheet1" and call it "Sheet2". So the result would be an excel file with sheets "Sheet1" and "Sheet2". "Sheet2" being an exact copy of "Sheet1". Thanks in advance!
|python|pandas|excel|duplicates|copy|
Using `Set(<VariableName>, Blank());` cleared the value of the variable in my case. Run this command before trying to access or check the value of the variable.
I don't think this question calls for a PySpark solution, since you don't have a partitioned folder structure; it is more a general Python question. You are looking for the max among the folders on a Databricks filesystem (`dbfs`). As the first answer says, you can use the Hadoop filesystem API. Another way, which I prefer because it is more flexible in handling local paths and dbfs paths, is to change the dbfs path to a local path.

Imagine you are working both locally and in Databricks. The paths for accessing the data are different. If you want to test locally with your IDE first, the above solution might not work until you set up Hive on your local machine. What you can do is first replace the dbfs path with a normal file system path and work with the Python libraries.

```python
import os
from pathlib import Path
from typing import Union


def get_local_path(path: Union[str, Path]) -> str:
    """
    Transforms a potential dbfs path to a path accessible by standard file system operations.

    :param path: Path to transform to local path
    :return: The local path
    """
    return str(path).replace("dbfs:", "/dbfs")


dbfs_path = "dbfs:/path_to/xyz"
local_path = get_local_path(dbfs_path)
```

The above function replaces e.g. `dbfs:/path_to/xyz` with `/dbfs/path_to/xyz`. From now on you can use the Python `os` or `pathlib` functionality.

```python
def find_max_in_folders(path: str) -> int:
    """
    Finds the maximum folder number.

    :param path: The path of the folders.
    :return: The max number in the folders.
    """
    dirs = [
        int(dir_name)
        for dir_name in os.listdir(path)
        if os.path.isdir(os.path.join(path, dir_name))
    ]
    return max(dirs)


max_month = find_max_in_folders(local_path)
month_path = os.path.join(local_path, f"month/{max_month}")
max_day = find_max_in_folders(month_path)

print(f"max_month: {max_month}")
print(f"max_day: {max_day}")
```

Also, if the data were partitioned correctly like `month=...` and `day=...`, it would still not be a good idea to load the data from the root path: it forces Spark to first read and scan all the underlying data. If your folders get bigger and the data under the hood is also big, this would be the worst way of dealing with the above problem. In case someone comes across this answer looking for a partition-pruning solution: it is better to scan the filesystem yourself (like the above answer) if you are not working with watermarks and are only looking for the latest data dumps.
I have two different ranges in Excel and I want to save the ranges in one image. I use Python. I tried to use `Union`, but it throws the exception "unknown.Union". Thank you for your help.

```python
excel = win32com.client.Dispatch('Excel.Application')
excel.visible = False
wb = excel.Workbooks.Open(self.source_path)
ws = wb.Worksheets[0]

# This is what I tried
# ws.Union(ws.Range("B843:CZ847"), ws.Range("B4:CZ5")).Copy()

img = ImageGrab.grabclipboard()
imgFile = os.path.join(self.excel_target, self.excel_name)
img.save(imgFile)
```
|token|odata|sap-successfactors|
I'm trying the Mini-Max Sum HackerRank challenge with C# and keep getting an error when I run my code. Currently I can't see where my mistake is, and I would like to understand why my code in particular failed. Assistance will be appreciated. Thank you.

Here is my current code...

```
// Sort the array
Array.Sort(arr);

// Calculate the sum of all elements
long totalSum = arr.Sum(x => (long)x);

// Calculate the minimum sum by excluding the largest element
long minSum = totalSum - arr[arr.Length - 1];

// Calculate the maximum sum by excluding the smallest element
long maxSum = totalSum - arr[0];

// Print the minimum and maximum sums
Console.WriteLine($"{minSum} {maxSum}");
```

...and below is the error as received from the HackerRank compiler:

```
/tmp/submission/20240326/12/44/hackerrank-a4f7f4834397aa8a2c1c902152f89823/code/Solution.cs(28,20): error CS1503: Argument 1: cannot convert from 'System.Collections.Generic.List<int>' to 'System.Array' [/tmp/submission/20240326/12/44/hackerrank-a4f7f4834397aa8a2c1c902152f89823/code/Solution.csproj]
/tmp/submission/20240326/12/44/hackerrank-a4f7f4834397aa8a2c1c902152f89823/code/Solution.cs(34,42): error CS1061: 'List<int>' does not contain a definition for 'Length' and no accessible extension method 'Length' accepting a first argument of type 'List<int>' could be found (are you missing a using directive or an assembly reference?) [/tmp/submission/20240326/12/44/hackerrank-a4f7f4834397aa8a2c1c902152f89823/code/Solution.csproj]
/tmp/submission/20240326/12/44/hackerrank-a4f7f4834397aa8a2c1c902152f89823/code/Solution.cs(50,25): warning CS8602: Dereference of a possibly null reference. [/tmp/submission/20240326/12/44/hackerrank-a4f7f4834397aa8a2c1c902152f89823/code/Solution.csproj]
```
Mini Max Sum with C#
|c#|
How do I import Python modules in MATLAB using pyrunfile?
In .NET 7+, you can use the new [`INumber`][1] etc. interfaces. This allows you to cast to/from byte arrays, and to/from other integer or number types.

```cs
public class Header<T> where T : struct, IBinaryInteger<T>
{
    public Header(T off, T len)
    {
        Offset = off;
        LenData = len;
    }

    public T Offset { get; set; }
    public T LenData { get; set; }

    public byte[] ToByteArray()
    {
        var size = Offset.GetByteCount();
        byte[] bTemp = new byte[2 * size];
        Offset.WriteLittleEndian(bTemp);
        LenData.WriteLittleEndian(bTemp, size);
        return bTemp;
    }
}

public class Telegram<T> where T : struct, IBinaryInteger<T>
{
    public Header<T> Header { get; set; }
    public byte[] Data { get; set; }

    public Telegram(T offset, byte[] bData)
    {
        Header = new Header<T>(offset, T.CreateTruncating(bData.Length));
        Data = bData;
    }
}
```

[1]: https://learn.microsoft.com/en-us/dotnet/api/system.numerics.inumberbase-1.createtruncating?view=net-8.0#system-numerics-inumberbase-1-createtruncating-1(-0)
Most likely you have bad data in your database; possibly a row where ID equals `new`, if the error message can be trusted. The value can't be converted to an ObjectID and an error is thrown. Search the database and delete or fix invalid ID values.
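For instance, a valid ObjectID serializes to a 24-character hexadecimal string, so one rough way to spot bad stored values (illustrated here with made-up data outside the database, not an official MongoDB API) is a simple pattern check:

```python
import re

# A valid ObjectID is 12 bytes, i.e. 24 hex characters when stored as a string
OBJECT_ID_RE = re.compile(r"^[0-9a-fA-F]{24}$")


def looks_like_object_id(value: str) -> bool:
    """Rudimentary check that a string could be a Mongo ObjectID."""
    return bool(OBJECT_ID_RE.match(value))


# Made-up sample of stored ID values
ids = ["65f1c9e8a2b4c3d4e5f60718", "new", "12345"]
bad = [i for i in ids if not looks_like_object_id(i)]
print(bad)  # ['new', '12345']
```

Values like `"new"` fail the check and are candidates for deletion or repair.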
I have this markup:

```
<div className="wrapper w-full h-full flex flex-col justify-between my-7">
    {/*first*/}
    <div className="w-full h-full min-h-0 outline-dotted outline-green-300 outline-2 p-7 bg-black overflow-y-auto whitespace-break-spaces text-nowrap">
        {data.text}
    </div>
    {/*second*/}
    <div className="w-full h-16 mt-5 outline-dotted outline-green-300 outline-2 p-3 bg-black flex justify-between items-center">
        <span className="text-lg font-bold">From: <span className="bg-gray-900 p-1">01/01/2024</span></span>
        <span className="text-lg font-bold">Due: <span className="bg-gray-900 p-1">05/02/2024</span></span>
    </div>
</div>
```

Its parent:

```
<main className="flex flex-col w-full h-full">
    <div className="flex justify-between align-middle items-center h-header w-full border-b border-gray-500">
        <span className="font-bold text-2xl">Display</span>
        <CreateEditModalCallerButton buttonClassName="flex items-center justify-center p-2 hover:bg-opacity-15 hover:bg-black" mode="CREATE">
            <Image className="mr-2 w-6 h-6" src="/assets/plus-solid.svg" alt="pic" width="16" height="16"/>
            <span className="min-md:text-sm max-md:text-base">Add</span>
        </CreateEditModalCallerButton>
    </div>
    <div className="grid grid-rows-1 grid-cols-2 w-full h-full divide-x-2 divide-gray-600 px-7">
        <div className="flex flex-col w-full h-full max-sm:col-span-2 pr-7">
            <HometaskList initialData={initialHometasks}/>
        </div>
        <div className="flex flex-col w-full h-full pl-7 max-sm:hidden">
            {/*Problem is located here*/}
            <HometaskInfo/>
        </div>
    </div>
</main>
```

And the root layout:

```
<html lang="en">
    <body className={`${font.className} w-screen h-screen flex max-md:flex-col`}>
        <StoreProvider>
            <Nav />
            <div className="root_content w-full h-screen">
                {children}
            </div>
        </StoreProvider>
    </body>
</html>
```

There are two `div`s in a flex wrapper. The first one is supposed to contain long multiline text, so I use `overflow-y-auto` to add a vertical scrollbar when needed. This block should take all the space left after placing the second `div`.

The problem is that, when resizing the window, the first block only begins to shrink freely after the bottom edge of the window meets its border. Also, it shrinks a little if it has free space between the bottom border and the text, including its padding. Perhaps you can better understand my problem by watching [this GIF](https://i.imgur.com/iT6Hg3i.gif).

I tried using `flex-shrink` and `flex-grow`, but neither solved the problem. I also tried doing it like [in the answer to this question](https://stackoverflow.com/questions/15955178/how-to-start-shrinking-sibling-div-when-there-is-not-enough-space), but then it stops shrinking at all.

What I want to achieve is to make the first (large) div shrink and become scrollable when there is not enough space left for the **other children** inside the flexbox. How can I do this? If possible, I would like to do it with Tailwind, but using standard CSS is not a problem. I would also like to know why this is happening even though the first `div` has `h-full`. Any advice is appreciated.
This is quite an old question; however, I will answer it, as I just faced the same scenario. `p-fileUpload` lets you set the choose label like `chooseLabel="Choose"`, and then you can apply your localization this way: `chooseLabel="{{ 'Choose' | localize }}"`, as shown in the official documentation: https://primeng.org/fileupload
You can inspect the `FruitColors` object. Note that if you do ***not*** assign names for the enum values, the generated code will be different and a simple key/value based mapping will lead to wrong results. e.g.

    export enum FruitColors {
      "Red",
      "Yellow",
    }

    Object.values(FruitColors); // ["Red", "Yellow", 0, 1]

Because the generated code is along these lines:

    var FruitColors;
    (function (FruitColors) {
        FruitColors[FruitColors["Red"] = 0] = "Red";
        FruitColors[FruitColors["Yellow"] = 1] = "Yellow";
    })(FruitColors = exports.FruitColors || (exports.FruitColors = {}));

You could then just filter the results by `typeof value == "string"`.
```markdown
1. [ ] If `TestRestTemplate` can be used with spring-data-rest, please provide examples of using it with response bodies more complex than `String` or `Map<?, ?>`. e.g., a `Person` class.
2. [ ] Please also include a `postForEntity` <s>and/or `putForEntity`</s> (`putForEntity` doesn't exist) example, because I see a lot of people struggling to make requests that send a complex class, e.g., `Person`, in the request body.
3. [ ] If `TestRestTemplate` is the *wrong* tool for the job, please recommend the right tool or a better tool. This might already be documented, but I've been unable to find it.
```

Just replace the bullet list marker (`-`) with an ordered list marker (`1.`). Note that the [spec](https://github.github.com/gfm/#task-list-items-extension-) specifically states:

> A task list item is a list item where the first block in it is a paragraph which begins with a task list item marker and at least one whitespace character before any other content.

And a list item is defined as:

> A list is an ordered list if its constituent list items begin with ordered list markers, and a bullet list if its constituent list items begin with bullet list markers.

So, if you put those two together, you take the appropriate list marker and follow it with the task list item marker. I realize that the spec does not show any examples of ordered task lists, but a quick test in a GitHub comment shows that it works.

By the way, by using both list marker types together, you were actually nesting one list item inside another list item. You had an outer ordered list item, with an unordered list item inside it, and that unordered list item then contained the checkbox and content. And that explains the extra whitespace you were seeing.