Protobuf library version: 3.21.12
This is my proto file:
```
syntax = "proto2";
message MyBidRequest {
optional string req_id = 1;
optional int32 tmax = 2;
}
```
```
// Read JSON string from a file or any source
std::string json_string = R"(
{
"req_id": 1,
"tmax": 777
}
)";
// Create an instance of the protobuf message
MyBidRequest my_bid_request;
// Parse the JSON string into the protobuf message
google::protobuf::util::Status myStatus = google::protobuf::util::JsonStringToMessage(json_string, &my_bid_request);
std::cout << "Jsonstring to Message Status string:" << myStatus.ToString() << std::endl;
// Access the fields of the protobuf message
std::cout << "1:Field req_id: " << my_bid_request.req_id() << std::endl;
std::cout << "1:Field tmax: " << my_bid_request.tmax() << std::endl;
```
Output on running the above code:
```
Jsonstring to Message Status string:INVALID_ARGUMENT:(req_id): invalid value 1 for type TYPE_STRING
1:Field req_id:
1:Field tmax: 0
```
Even when a single field's data type is mismatched, the complete JSON parsing fails. What I am trying to achieve is that the `tmax` field should still get consumed even when `req_id` is not a string. Is there a way I can achieve this?
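`JsonStringToMessage` is all-or-nothing, so one workaround is a lenient pre-pass over the raw JSON that drops type-mismatched fields before handing the remainder to protobuf. The question's code is C++, but the filtering idea is easiest to sketch in Python; the `SCHEMA` map here is an assumption standing in for the message descriptor:

```python
import json

# Expected JSON types per field (assumed; in practice derive this from
# the message descriptor rather than hard-coding it)
SCHEMA = {"req_id": str, "tmax": int}

def lenient_parse(json_string):
    """Keep fields whose JSON type matches the schema; report the rest."""
    raw = json.loads(json_string)
    kept, dropped = {}, []
    for name, value in raw.items():
        expected = SCHEMA.get(name)
        # bool is a subclass of int in Python, so exclude it explicitly
        if expected is not None and isinstance(value, expected) \
                and not isinstance(value, bool):
            kept[name] = value
        else:
            dropped.append(name)
    return kept, dropped

kept, dropped = lenient_parse('{"req_id": 1, "tmax": 777}')
# json.dumps(kept) would then be passed to JsonStringToMessage, and the
# application can decide what to do about the dropped field names
```

This keeps the control on the application side: it sees exactly which fields were discarded and can still reject the whole payload if a significant one is among them.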
The JSON can have ~100 fields. Losing the whole JSON is not what we want, since it would mean a lost opportunity. A single mismatched field, which may be an insignificant one, can be discarded so we can still use the other 99. Discarded fields can be given a default value and used depending on the use case of each field. If the field is a significant one, the application can decide to discard the whole JSON. We want the control to discard the whole JSON to be on the application side, instead of in the library |
Put a for loop iteration in a Bootstrap group list button click |
|javascript|html| |
Have you tried something like this:
request.AddParameter(new JsonParameter("data", <jsonObjectOrString>))
In addition to your already existing request.AddFile of course. |
[![picture of the white space][1]][1]
I want to make the white space around flex items smaller. This is essentially a parent flex box with child flex boxes.
Here's my HTML:
```
<div class="parent-box">
<div class="contain">
<div class="modal">
<h2>Plans For The Site</h2>
<p><span class="date">~12-03-24 00:57~</span><br><span class="body">so, in order, i want to get this little blog thing working and looking the way i want, create the gifs and pictures for the background and buttons, create a font and add it, add some little music effects to everything, add some content to the different pages, add a little gir cursor, and then i think ill put the link up on my socials. once i get to the point of putting it up on my socials, ill subscirbe and get a domain name that i like... before that im just gonna keep working on the free subscription unless i run out of space. eta with the way im working looks to be about a week- 3 weeks, esp with the clothing im working on.</span></p>
</div>
</div>
<div class="contain">
<div class="modal">
<h2>Plans For The Site</h2>
<p><span class="date">~11-03-24 01:38~</span><br><span class="body">Just got the website front page done superficially, need to create the gifs and pictures and background, and need to figure out how to format this blog better than it is atm... make take a day or so... going to bed right now though</span></p>
</div>
</div>
</div>
```
and here's the CSS:
```
.contain {
width: 100vw;
height: 100vh;
display: flex;
justify-content: center;
align-items: center;
}
.modal {
width: 50vw;
height: 50%;
margin:30px;
padding: 1rem;
background-color: aqua;
border: 1px solid black;
border-top: 15px solid black;
color: black;
overflow: scroll;
overflow-x: hidden;
display: flex;
flex-direction: column;
text-align: left;
}
.parent-box {
display: flex;
width: 70%;
height: 50vh;
margin-left:auto;
margin-right:auto;
margin-bottom:10px;
margin-top:10px;
overflow: scroll;
overflow-x: hidden;
justify-content: space-between;
align-items: center;
flex-direction: column;
font-size: 30px;
text-align: center;
background-color: #7d7d7d;
color: #000000;
font-weight: bold;
}
```
I've tried using margins in all three CSS rules, but it seems to do nothing. I'm just trying to make the gray space at the top smaller and responsive.
[1]: https://i.stack.imgur.com/OsWrx.png |
trying to make smaller bars in a flexbox |
|html|css|flexbox| |
N.B. This function will only work if you are using an AutoFilter. <!-- language-all:lang-vb -->
    Function GetFirstItem(ws As Worksheet) As Long
        GetFirstItem = -1 'Return -1 if there are no items
        If Not ws.AutoFilterMode Then Exit Function 'Return -1 if there is no AutoFilter
        Dim lHeader As Long, lFirstItem As Long, rArea As Range
        lHeader = ws.AutoFilter.Range.Cells(1, 1).Row 'Store the Header Row
        lFirstItem = -1 'Return -1 if there are no items
        'Loop through the visible contiguous Areas - i.e. the rows you have filtered for
        For Each rArea In ws.AutoFilter.Range.SpecialCells(xlCellTypeVisible).Areas
            If rArea.Cells(1, 1).Row > lHeader Then 'This Area does not contain the Header
                If lFirstItem < 1 Or rArea.Cells(1, 1).Row < lFirstItem Then
                    'Store the Item Row if it is the new lowest
                    lFirstItem = rArea.Cells(1, 1).Row
                End If
            ElseIf rArea.Rows.Count > 1 Then 'Area contains the Header and 1 or more Items
                If lFirstItem < 1 Or rArea.Cells(2, 1).Row < lFirstItem Then
                    'Store the Item Row if it is the new lowest (ignore the Header row)
                    lFirstItem = rArea.Cells(2, 1).Row
                End If
            End If
        Next rArea
        GetFirstItem = lFirstItem 'Return the first visible Item row
    End Function |
This answer might help - it got rid of a bunch of 'changes' for me, where a checkout compared identical to HEAD except that some of the 'changes' were the (normalised) LFS pointers that should have been there, instead of the binary files that actually lived on the branch.
https://stackoverflow.com/a/14515846/10348047
    ## create a stand-alone, tagged, empty commit
    true | git mktree | xargs git commit-tree | xargs git tag empty
    ## clear the working copy
    git checkout empty
    ## go back to where you were before
    git checkout master  # or whatever branch you were on
|
You can assign `color` to a vector created by `ifelse` inside `aes`. This does not need to be a vector of color names, rather just a vector of the labels you wish applied to the legend, for example `color = ifelse(depth1 > 5, 'High', 'Low')`
It's not clear from the question whether you want `df1` and `df2` to have the same colors, differing only by depth. If so, then you could do:
``` r
ggplot() +
geom_point(aes(x = var1, y = var2, color = ifelse(depth1 > 5, 'High', 'Low')),
data = df1) +
geom_point(aes(x = var3, y = var4, color = ifelse(depth2 > 5, 'High', 'Low')),
data = df2) +
scale_color_manual('Depth', values = c(Low = 'black', High = 'red'))
```
[![enter image description here][1]][1]
If you want a different color scheme for each data set, then you just need to ensure that the `ifelse` produces unique results for each series:
``` r
ggplot() +
geom_point(shape = 21, size = 4, aes(x = var1, y = var2,
fill = ifelse(depth1 > 5, 'High (df1)', 'Low (df1)')),
data = df1) +
geom_point(shape = 21, size = 4, aes(x = var3, y = var4,
fill = ifelse(depth2 > 5, 'High (df2)', 'Low (df2)')),
data = df2) +
scale_fill_manual('Depth', values = c(`Low (df1)` = 'red3',
`High (df1)` = 'pink',
`Low (df2)` = 'navy',
`High (df2)` = 'cornflowerblue')) +
theme_classic(base_size = 16)
```
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/dKS05.jpg
[2]: https://i.stack.imgur.com/jadh1.jpg |
    SmsManager smsManager = SmsManager.getDefault();
    smsManager.sendTextMessage(phoneNo, null, sms, null, null); |
I'm using `header()` in PHP to redirect after a function runs. Now I want to do the same in jQuery.
    if ($('form #response').hasClass('success')) {
        setTimeout("$('form #response').fadeOut('fast')", 5000);
        return window.location = index.php;
    }
This line does not work - I want to change the page when `.hasClass('success')`:
    return window.location = index.php;
I tried without `return`; again it did not work. |
MySQL Workbench returns this error:
```
Incompatible/nonstandard server version or connection protocol detected (11.3.2).
A connection to this database can be established but some MySQL Workbench features may not work properly since the database is not fully compatible with the supported versions of MySQL.
```
I got the warning earlier, but the tool still worked. Now, after an operating system upgrade on my Mac, it no longer works and returns:
```
Tables could not be fetched
Views could not be fetched ...
```
macOS is now Sonoma 14.3.1.
The MariaDB version is 11.3.2-MariaDB, client 15.2 for osx10.19 (x86_64), using the EditLine wrapper.
MySQL Workbench is 8.0.36.
A workaround or the installation of an alternative tool would also be fine. |
I have an asyncio program in which I want some loop functions to be "safe" from cancellation and others to be cancelled if an exception is raised (i.e. Tasks 0-2 need to be "safe" and Tasks 3-5 need to be cancellable).
I read about how `TaskGroup`s in Python 3.11 can allow for the latter of these two points.
So I tried this while `shield`ing the "safe" loops (using a simple decorator) to make sure they weren't cancelled:
```python
_task_list: List[asyncio.Task] = []

# handle exceptions
try:
    async with self.task_group_safe as tg:
        for task in [
            self.task0(),
            self.task1(),
            self.task2(),
            self.task3(),
            self.task4(),
            self.task5(),
        ]:
            t = tg.create_task(task)
            _task_list.append(t)
except* AbortException as err:
    print(f"{err=}")
    await asyncio.sleep(0)
```
NOTE: `task5` is just a `while` loop to check if a flag has been set, and if so an exception is raised.
However, when I raise the exception, the `TaskGroup` outputs that all tasks have been cancelled, even though the first 3 are `shield`ed and still appear to be running:
```python
for i, task in enumerate(_task_list):
    print(f"Task{i}: done={task.done()}, cancelled={task.cancelled()}")
```
```
Task0: done=True, cancelled=True
Task1: done=True, cancelled=True
Task2: done=True, cancelled=True
Task3: done=True, cancelled=True
Task4: done=True, cancelled=True
Task5: done=True, cancelled=False
```
So I had the idea to see if I could separate out the tasks into two `TaskGroup`s and remove the need for `shield`ing the first 3 tasks, such as:
```python
_task_list: List[asyncio.Task] = []

tg1: List[Coroutine[Any, Any, Any]] = [
    self.task1(),
    self.task2(),
    self.task3(),
]
tg2: List[Coroutine[Any, Any, Any]] = [
    self.task4(),
    self.task5(),
    self.task6(),
]

async def create_task_group(
    task_group: asyncio.TaskGroup, tasks: List[Coroutine[Any, Any, Any]]
) -> None:
    # handle exceptions
    try:
        async with task_group as tg:
            for task in tasks:
                t = tg.create_task(task)
                _task_list.append(t)
    except* AbortException as err:
        print(f"{err=}")
        await asyncio.sleep(0)

await create_task_group(self.task_group_1, tg1)
await create_task_group(self.task_group_2, tg2)

for i, task in enumerate(_task_list):
    print(f"Task{i}: done={task.done()}, cancelled={task.cancelled()}")
```
Obviously, in this case the second `TaskGroup` never gets created, as the program gets stuck on `await create_task_group(self.task_group_1, tg1)`.
I also tried:
```python
async def create_task_groups(
    task_group: asyncio.TaskGroup,
    tasks: List[Coroutine[Any, Any, Any]],
    task_group2: asyncio.TaskGroup,
    tasks2: List[Coroutine[Any, Any, Any]],
) -> None:
    # handle exceptions
    try:
        async with task_group as tg, task_group2 as tg2:
            for task in tasks:
                t = tg.create_task(task)
                _task_list.append(t)
            for task in tasks2:
                t2 = tg2.create_task(task)
                _task_list.append(t2)
    except* AbortException as err:
        print(f"{err=}")
        await asyncio.sleep(0)

await create_task_groups(self.task_group_1, tg1, self.task_group_2, tg2)
```
But this just had the same outcome as the first example I gave.
So...
***Question 1:***
Is it even possible to have two `TaskGroup`s running concurrently?
And
***Question 2:***
If it is possible to have two `TaskGroup`s running concurrently, is it possible to restart the second one after it has been cancelled?
Any help, or even other ways to write this code, would be greatly appreciated. :)
EDIT:
This is the decorator I use to shield the task functions:
```python
def _shielded(func: _AsyncFuncType) -> _AsyncFuncType:
    """
    Makes so an awaitable method is always shielded from cancellation
    """
    async def _shield(*args, **kwargs):
        return await asyncio.shield(func(*args, **kwargs))
    return _shield
```
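Incidentally, this decorator is consistent with the `cancelled=True` output above: `asyncio.shield` only protects the inner coroutine it wraps, while the task the `TaskGroup` tracks is the outer `_shield` wrapper, which can still be cancelled. A minimal stdlib sketch of that behaviour (names are illustrative; no `TaskGroup` involved):

```python
import asyncio

results = {}

async def inner():
    # Stands in for one of the "safe" task bodies
    await asyncio.sleep(0.05)
    return "finished"

async def main():
    inner_task = asyncio.ensure_future(inner())
    # The outer future is what a TaskGroup would see and cancel
    outer = asyncio.ensure_future(asyncio.shield(inner_task))
    await asyncio.sleep(0)   # let both get scheduled
    outer.cancel()           # cancel the wrapper, as a failing TaskGroup does
    try:
        await outer
    except asyncio.CancelledError:
        pass
    results["outer_cancelled"] = outer.cancelled()
    # The shielded inner task is unaffected and still completes
    results["inner_result"] = await inner_task

asyncio.run(main())
```

So `done=True, cancelled=True` on the wrapper does not mean the shielded work stopped; checking the inner tasks directly (or running the shielded coroutines outside the group, e.g. via `asyncio.gather`) gives their real status.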
`_AsyncFuncType` is just an alias for `Callable[..., Coroutine[Any, Any, Any]]` |
I need to decide on a design for storing data using Corda states. I would like to know the best way to connect StateA with StateB, where StateB can be linked with 3 different types.
**Option 1:**
Say, I have an **InsuranceServiceState** having
insuranceType: String // values will be "InsuranceTypeA", "InsuranceTypeB"
insuranceTypeData: AbstractInsuranceTypes // eg: Inherited by InsuranceTypeAClass
And we have `InsuranceTypeA`, `InsuranceTypeB` child classes inherited from it, so that, according to the user input in **InsuranceServiceState**, any of these subclasses can be inserted into the insuranceTypeData field.
Finally, once **`InsuranceAgreementState`** is created, it will refer to the StaticPointer of InsuranceServiceState to identify the service used.
**Option 2:**
No `insuranceTypeData: AbstractInsuranceTypes` field inside InsuranceServiceState; instead, two new Corda states for InsuranceTypeA and InsuranceTypeB, each holding the InsuranceServiceState linearId for linking.
Now those InsuranceTypeA or InsuranceTypeB values can be updated without updating the InsuranceServiceState.
Though in order to know the type used at the time of creation of InsuranceAgreementState, we need to add a
StaticPointer<InsuranceTypeA>
inside InsuranceAgreementState.
Which one is the better way to do it? What criteria should I look at?
|
I see that MS Accessibility Insights claims to cover WCAG 2.2, which it does via manual checks. However, it does not run the automatic check for target size that was added as part of axe-core 4.8 in Sept 2023. My question is when will Accessibility Insights axe-core version (currently 4.7.2) next be updated to include target size among the automated tests, as per v4.8.0 onwards. More generally, what is the normal lag between a new version of axe-core being released, and Accessibility Insights catching up?
Thanks |
What is the timeline for updating MS Accessibility Insights to use axe-core 4.8.n? |
|accessibility-insights| |
I tried to fetch the details of the owners of resources created a year ago using a PowerShell script, but I am getting errors. Is it possible to create such a script?
This is the code I used:
```powershell
foreach ($resource in $resources) {
    $createdBy = $resource.Properties.createdBy
    if ($createdBy -and $createdBy.userPrincipalName) {
        $ownerDetails = Get-AzADUser -UserPrincipalName $createdBy.userPrincipalName
        Write-Host "Resource: $($resource.Name), Created by: $($ownerDetails.DisplayName) - $($ownerDetails.UserPrincipalName)"
    }
}
``` |
I want to update user meta after the user submits a form. `function updateUserType(){update_user_meta` works as expected alone, however it does not work when the form is called with `do_shortcode`.
    echo do_shortcode('[contact-form-7 id="031c4e8" title="Dosya"]');
    add_action( 'wpcf7_before_send_mail', 'updateUserType' );
    function updateUserType() {
        $user_idd = get_current_user_id();
        update_user_meta( $user_idd, '_user_type', 'general' );
    } |
Custom 2D Convolution not sharpening Image |
|python|opencv|image-processing| |
Both work when you want to get `data-something` in JS, including when the attribute was added in JS with `dataset.something` and you want to read it back.
But `getAttribute` performs much better than `dataset` (test performance reference):
    getAttribute x 122,465 ops/sec ±4.62% (60 runs sampled)
    dataset x 922 ops/sec ±0.58% (62 runs sampled) |
When using ES modules, you must load the **`fs`** module to enable file read and write support.
```javascript
import * as fs from 'fs';
// Set internal fs instance
XLSX.set_fs(fs);
```
according to this issue: [issue][1]
```javascript
import * as XLSX from 'xlsx'
import * as fs from 'fs';
XLSX.set_fs(fs);
// -------------------------------------------
const data = [
{ name: "Alice", age: 25, gender: "F" },
{ name: "Bob", age: 30, gender: "M" },
{ name: "Charlie", age: 35, gender: "M" },
];
const wb = XLSX.utils.book_new();
const ws = XLSX.utils.json_to_sheet(data);
XLSX.utils.book_append_sheet(wb, ws, "Sheet1");
XLSX.writeFile(wb, "sample_file.xlsx");
```
[1]: https://github.com/SheetJS/sheetjs/issues/2634#issuecomment-1231412497 |
Not possible. `window_w32.cpp` always saves window positions to the registry.
You'd need to change OpenCV source to add this feature. And you'd need to think about how to convey this bit of information, either before you create any windows or when you create each window. |
Depending on what exactly you mean by "the text", you probably want the [`get_body` method.](https://docs.python.org/3/library/email.message.html#email.message.EmailMessage.get_body) But you are thoroughly mangling the email before you get to that point. What you receive from the server isn't "HTML", and converting it to a string to then call `message_from_string` on it is roundabout and error-prone. What you get are bytes; use the `message_from_bytes` function directly. (This avoids all kinds of problems when the bytes are not UTF-8; the `message_from_string` function only really made sense back in Python 2, which didn't have explicit `bytes`.)
```
from email.policy import default
...
_, response = imap.uid(
'fetch', e_id, '(RFC822)')
email_message = email.message_from_bytes(
response[0][1],
policy=default)
body = email_message.get_body(
    ('html', 'plain')).get_payload(
    decode=True)
```
The use of a `policy` selects the (no longer very) new `EmailMessage`; you need Python 3.3+ for this to be available. The older legacy `email.Message` class did not have this method, but should be avoided in new code for many other reasons as well.
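As a self-contained illustration of the flow above (the message is built in-line here to stand in for the raw bytes an IMAP fetch would return; names are illustrative):

```python
import email
from email.message import EmailMessage
from email.policy import default

# Build a multipart/alternative message in-line to stand in for the
# raw bytes an IMAP fetch would return
msg = EmailMessage()
msg["Subject"] = "demo"
msg.set_content("plain text body")
msg.add_alternative("<p>html body</p>", subtype="html")
raw_bytes = msg.as_bytes()

# Parse the bytes directly -- no lossy decode to str first
parsed = email.message_from_bytes(raw_bytes, policy=default)

# Prefer the HTML alternative, falling back to plain text
body_part = parsed.get_body(preferencelist=("html", "plain"))
html = body_part.get_content()
```

Here `get_content()` decodes the part for you; `get_payload(decode=True)` gives the raw decoded bytes instead, if that is what you need downstream.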
This could fail for multipart messages with nontrivial nested structures; the `get_body` method without arguments can return a `multipart/alternative` message part and then you have to take it from there. You haven't specified what your messages are expected to look like so I won't delve further into that. |
I have everything set up correctly in my Program.cs on the backend (correct URL names, etc.), but I cannot get both clients to navigate to the page when testing locally with two different browsers, using WebSockets over http/ws.
MatchMakingController.cs
```
[HttpPost("joinQueue")]
public async Task<IActionResult> JoinQueue([FromBody] JoinQueueModel model)
{
Console.WriteLine("1: JoinQueue called");
int playerCharacterId = model.PlayerCharacterId;
// Check if player is already in queue to prevent duplicates
if (!matchmakingQueue.Contains(playerCharacterId))
{
matchmakingQueue.Add(playerCharacterId);
}
// Try to match players if there are at least 2 in the queue
if (matchmakingQueue.Count >= 2)
{
Console.WriteLine("2: Matchmaking logic processing");
int player1Id = matchmakingQueue[0];
int player2Id = matchmakingQueue[1];
matchmakingQueue.RemoveRange(0, 2); // Remove matched players from the queue
var match = new Match
{
Player1ID = player1Id,
Player2ID = player2Id,
Status = "in progress"
};
_context.Matches.Add(match);
await _context.SaveChangesAsync();
// Inside your JoinQueue method, after saving the match
await _hubContext.Clients.All.SendAsync("MatchFound", new { matchId = match.MatchID, player1Id, player2Id });
return Ok(new { matchId = match.MatchID, player1Id, player2Id });
}
return Ok(new { message = "Waiting for an opponent..." });
}
```
Frontend signalR
```
late HubConnection _hubConnection;
final String serverUrl = "http://localhost:5111/battleHub";
final void Function(int matchId, int player1Id, int player2Id) onMatchFound;
SignalRService({required this.onMatchFound}) {
// Now a named parameter
_hubConnection = HubConnectionBuilder().withUrl(serverUrl).build();
initConnection();
}
Future<void> initConnection() async {
if (_hubConnection.state == HubConnectionState.Disconnected) {
try {
await _hubConnection.start();
print("Connection Started");
_setupListeners();
} catch (e) {
print("Error starting connection: ${e.toString()}");
}
}
}
void _setupListeners() {
_hubConnection.on('MatchFound', (arguments) {
print('MatchFound event received');
if (arguments != null && arguments.length >= 3) {
final int matchId = arguments[0] as int;
final int player1Id = arguments[1] as int;
final int player2Id = arguments[2] as int;
onMatchFound(matchId, player1Id, player2Id);
}
});
}
```
Api
```
Future<MatchmakingResponse?> joinMatchmakingQueue(
int playerCharacterId) async {
print('Joining queue with playerCharacterId: $playerCharacterId');
final response = await http.post(
Uri.parse('http://localhost:5111/api/matchmaking/joinQueue'),
headers: <String, String>{
'Content-Type': 'application/json; charset=UTF-8',
},
body: jsonEncode({'playerCharacterId': playerCharacterId}),
);
print('Response body1: ${response.body}');
if (response.statusCode == 200) {
final data = jsonDecode(response.body);
// Adjusted to return a MatchmakingResponse object
return MatchmakingResponse.fromJson(data);
} else {
print(
'Failed to join matchmaking queue. Status code: ${response.statusCode}');
return null;
}
}
Future<MatchDetails> fetchMatchDetails(
int matchId, int requesterCharacterId) async {
final response = await http.get(
Uri.parse(
'http://localhost:5111/api/matchmaking/match/$matchId/details/$requesterCharacterId'),
headers: <String, String>{
'Content-Type': 'application/json; charset=UTF-8',
},
);
if (response.statusCode == 200) {
final data = jsonDecode(response.body);
return MatchDetails.fromJson(data);
} else {
throw Exception('Failed to load match details');
}
```
Lobby page with button to enter game
```
@override
void initState() {
super.initState();
_signalRService =
SignalRService(onMatchFound: (matchId, player1Id, player2Id) {
_navigateToBattleScreen(matchId, player1Id, player2Id);
});
}
@override
void dispose() {
_signalRService.closeConnection();
super.dispose();
}
void _fetchCharacterDetails() async {
try {
final fetchedCharacter = await ApiService()
.fetchPlayerCharacterDetails(widget.playerCharacterId);
setState(() {
_latestCharacterDetails =
fetchedCharacter; // Update state with the latest character details
});
} catch (e) {
print("Error fetching character details: $e");
}
}
void _navigateToBattleScreen(int matchId, int player1Id, int player2Id) {
print('Am i trying to navigate?');
int opponentCharacterId =
(widget.playerCharacterId == player1Id) ? player2Id : player1Id;
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => BattlePvPScreen(
playerCharacter: widget.character,
matchId: matchId,
opponentCharacterId: opponentCharacterId,
)));
}
ElevatedButton.icon(
icon: Icon(Icons.people),
label: Text('Battle PvP'),
onPressed: () async {
final matchmakingResponse = await ApiService()
.joinMatchmakingQueue(
widget.character.playerCharacterId);
if (matchmakingResponse != null &&
matchmakingResponse.matchId != null) {
// Determine the opponent's character ID
int opponentCharacterId =
matchmakingResponse.player1Id ==
widget.character.playerCharacterId
? matchmakingResponse.player2Id!
: matchmakingResponse.player1Id!;
// Navigate to BattlePvPScreen
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => BattlePvPScreen(
playerCharacter: widget.character,
matchId: matchmakingResponse.matchId!,
opponentCharacterId: opponentCharacterId,
),
),
);
} else {
// Handle waiting or error
print("Waiting for an opponent or an error occurred.");
}
},
```
I'm still new to development, so please excuse me if I've made silly mistakes or missed anything; I'm happy to post any additional information as necessary. One player is navigated for now, but that's because of the `Navigator.push` on the button, not because the MatchFound event from the SignalR service is being handled correctly.
DEBUG OUTPUT
Response Body: {"playerCharacterId":58,"playerId":27,"characterId":1,"health":105,"damage":25,"armour":10,"level":6,"experience":10,"wins":6,"losses":0,"player":null,"character":{"characterId":1,"name":"Mage","damage":20,"health":80,"level":1,"experience":0,"armour":5}}
Connection Started
Joining queue with playerCharacterId: 58
MatchFound event received
Response body1: {"matchId":65,"player1Id":61,"player2Id":58}
I tried to navigate both players to the game based on player1Id, player2Id and matchId after the MatchFound event was received |
I have been working on the backend for a Java web app and have been having difficulty getting OpenJPA to create my database schema. I am using OpenJPA with TomEE.
```
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="webAppPU" transaction-type="JTA">
<provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
<jta-data-source>webApp</jta-data-source>
<class>com.webapp.model.RoleEntity</class>
<class>com.webapp.model.Shift</class>
<exclude-unlisted-classes>true</exclude-unlisted-classes>
<properties>
<property name="openjpa.jdbc.Schema" value="APP"/>
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
<property name="javax.persistence.schema-generation.create-datbase-schemas" value="true"/>
<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>
<property name="openjpa.jdbc.MappingDefaults" value="ForeignKeyDeleteAction=restrict,JoinForeignKeyDeleteAction=restrict"/>
</properties>
</persistence-unit>
</persistence>
```
I have tried many combinations of properties in the persistence.xml file. Originally I started with just the schema-generation property, and on deploy I expected to see a schema with my two classes/tables (RoleEntity and Shift). I have tried several different properties and have yet to find a config that creates the schema on deploy. |
openjpa not creating schema from persistence.xml |
|java|jpa|schema|apache-tomee|persistence.xml| |
I was trying to generate a presigned URL for an S3 bucket in the `me-central-1` region using the boto3 library, as demonstrated below
```
client = boto3.client("s3", region_name="me-central-1")
return client.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": bucket, "Key": path},
)
```
This generates a url with the following form: `https://bucket.s3.amazonaws.com/path?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=CREDENTIAL/DATE/me-central-1/s3/aws4_request&X-Amz-Date=DATETIME&X-Amz-Expires=1800&X-Amz-SignedHeaders=host&X-Amz-Signature=SIGNATURE`
However, trying to use this url results in an `IllegalLocationConstraintException` with the following message: `The me-central-1 location constraint is incompatible for the region specific endpoint this request was sent to.`
I did a bit of research into why this was happening and found that S3 endpoints going to regions other than us-east-1 needed to include the region like so: `https://bucket.s3.REGION.amazonaws.com/...`.
I then changed the boto3 client instantiation to the version below:
```
client = boto3.client("s3", region_name="me-central-1", endpoint_url="https://s3.me-central-1.amazonaws.com")
```
The resulting url had the form: `https://s3.me-central-1.amazonaws.com/bucket/path?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=CREDENTIAL/DATE/me-central-1/s3/aws4_request&X-Amz-Date=DATETIME&X-Amz-Expires=1800&X-Amz-SignedHeaders=host&X-Amz-Signature=SIGNATURE`
This URL seems to work without returning an `IllegalLocationConstraintException`! However, I noticed this is now a path-style request instead of a virtual-hosted-style request (as defined [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html)). The linked page also states that path-style requests will be discontinued, so I will not be able to use the above code as a long-term solution.
My attempts to manually change the path-style request into a virtual-hosted-style request resulted in a `SignatureDoesNotMatch` error, with the message: `The request signature we calculated does not match the signature you provided. Check your key and signing method`.
My question would be - is there a way to make the boto3 client return a valid virtual-hosted-style presigned url? |
Wordpress update user meta when the user submits form (form called with do_shortcode) |
|php|wordpress|contact-form-7|usermetadata| |
|gcc|autotools|automake| |
Ubuntu 20.04
Python 3.8
I'm using a Python file (not written by me) with a U-Net and custom loss functions. The code was written for tensorflow==2.13.0, but my GPU cluster only has tensorflow==2.2.0 (or lower), and the available code isn't compatible with this version.
Specifically, the `if` statement in `update_state` fails (see the traceback below). Can somebody help me rewrite this? I'm not experienced with tf.
    class Distance(tf.keras.metrics.Metric):
        def __init__(self, name='DistanceMetric', distance='cm', sigma=2.5, data_size=None,
                     validation_size=None, points=None, point=None, percentile=None):
            super(Distance, self).__init__(name=name)
            self.counter = tf.Variable(initial_value=0, dtype=tf.int32)
            self.distance = distance
            self.sigma = sigma
            self.percentile = percentile
            if percentile is not None and point is not None:
                assert (type(percentile) == float)
                self.percentile_idx = tf.Variable(tf.cast(tf.round(percentile * validation_size), dtype=tf.int32))
            else:
                self.percentile_idx = None
            self.point = point
            self.points = points
            self.cache = tf.Variable(initial_value=tf.zeros([validation_size, points]),
                                     shape=[validation_size, points])
            self.val_size = validation_size

        def update_state(self, y_true, y_pred, sample_weight=None):
            n, h, w, p = tf.shape(y_pred)[0], tf.shape(y_pred)[1], tf.shape(y_pred)[2], tf.shape(y_pred)[3]
            y_true = normal_distribution(self.sigma, y_true[:, :, 0], y_true[:, :, 1], h=h, w=w, n=n, p=p)
            if self.distance == 'cm':
                x1, y1 = cm(y_true, h=h, w=w, n=n, p=p)
                x2, y2 = cm(y_pred, h=h, w=w, n=n, p=p)
                d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                d = d[:, :, 0]
            elif self.distance == 'argmax':
                d = (tf.cast(tf.reduce_sum(((argmax_2d(y_true) - argmax_2d(y_pred)) ** 2), axis=1),
                             dtype=tf.float32)) ** 0.5
            temp = tf.minimum(self.counter + n, self.val_size)
            if self.counter <= self.val_size:
                self.cache[self.counter:temp, :].assign(d[0:(temp-self.counter), :])
                self.counter.assign(self.counter + n)

        def result(self):
            if self.percentile_idx is not None:
                temp = tf.sort(self.cache[:self.val_size, self.point], axis=0, direction='ASCENDING')
                return temp[self.percentile_idx]
            elif self.point is not None:
                return tf.reduce_mean(self.cache[:, self.point], axis=0)
            else:
                return tf.reduce_mean(self.cache, axis=None)

        def reset_states(self):
            self.cache.assign(tf.zeros_like(self.cache))
            self.counter.assign(0)
            if self.percentile is not None and self.point is not None:
                self.percentile_idx.assign(tf.cast(self.val_size * self.percentile, dtype=tf.int32))
....
/trinity/home/r084755/DRF_AI/distal-radius-fractures-x-pa-and-lateral-to-clinic/Code files/LandmarkDetection.py:144 update_state
if tf.math.less_equal(self.counter, self.val_size): # Updated from self.counter <= self.val_size:
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:778 __bool__
self._disallow_bool_casting()
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
"using a `tf.Tensor` as a Python `bool`")
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
" decorating it directly with @tf.function.".format(task))
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
|
I have a table that looks like this:

    product  price99  price100
    A        2        1
    B        3        2
    ..

I don't know how to explode that into the following format in MySQL, similar to using the melt and cast functions in R:

    product  quantity_min  quantity_max  price
    A        1             99            2
    A        100           999999        1
    B        1             99            3
    B        100           999999        2
    ..

I have a feeling that it might need a [CASE][1] statement, but I'm really having a hard time making it work. What can I try next?
[1]: http://dev.mysql.com/doc/refman/5.0/en/case.html |
Explode one line to multiple in MySQL |
I am trying to create a Standard logic app with workflows in it using an ARM template, but it only creates the Standard logic app without the workflow. At the same time, it also creates a Consumption logic app.
I tried adding Microsoft.Logic/workflows under resources and added an action and a trigger inside, but this is not helping. |
I am trying to create a Standard logic app with workflows in it using an ARM template |
|workflow|azure-logic-apps|standards| |
null |
I have been trying to fix this issue for about 4 days. I fixed it over HTTP, but over HTTPS it doesn't work (both the server and the website are served over HTTPS).
For the server I use NodeJS, Express, here is the middleware code:
server/api/index.ts
```ts
export default function APIHandler(app: e.Express) {
    app.use(
        cookieSession({
            secret: randomUUID(),
            sameSite: "none",
            secure: true,
            httpOnly: true,
            expires: new Date(Date.now() + 2592000000),
        })
    );
    app.enable("trust proxy");
    app.use(
        cors({
            origin: "https://persona-db.vercel.app",
            preflightContinue: true,
            credentials: true,
            allowedHeaders:
                "Origin, X-Requested-With, Content-Type, Accept, Set-Cookie",
        })
    );
    // app.use(function (req, res, next) {
    //     res.header("Set-Cookie", "true");
    //     res.header("Access-Control-Allow-Origin", req.headers.origin); // update to match the domain you will make the request from
    //     res.header(
    //         "Access-Control-Allow-Headers",
    //         ""
    //     );
    //     res.header(
    //         "Access-Control-Request-Headers",
    //         "Origin, X-Requested-With, Content-Type, Accept"
    //     );
    //     res.header(
    //         "Access-Control-Allow-Methods",
    //         "GET,HEAD,OPTIONS,POST,PUT,DELETE"
    //     );
    //     res.header("Access-Control-Allow-Credentials", "true");
    //     next();
    // });
    app.use("/files", e.static(path.join(__dirname, "files")));
    let singlerMulter = multer({ dest: "./api/files" }).single("file");
    app.use(singlerMulter);
    app.use(e.json());
    app.use(e.urlencoded());
    app.use(cookieParser());
    app.use(logger);
    app.get("/", (req, res) => {
        res.send("Hello World");
    });
    app.use("/characters", charactersRoute);
    app.use("/users", usersRoute);
}
```
Here is how I set a cookie server-side (POST /users/login):
```ts
route.post("/login", async (req, res) => {
    let { username, password } = req.body;
    if (!username || !password)
        return res
            .status(500)
            .json({ message: "Username or password not provided" });
    let u = await collection.findOne({ username });
    if (!u || !compareSync(password, u.password))
        return res
            .status(404)
            .json({ message: "Cannot find account with username and password" });
    if (u.suspendedAt) {
        // https://stackoverflow.com/questions/66639760/dayjs-diff-between-two-date-in-day-and-hours Answer
        const date1 = dayjs(u.suspendedAt);
        const date2 = dayjs();
        let hours = date2.diff(date1, "hours");
        const days = Math.floor(hours / 24);
        hours = hours - days * 24;
        res.status(403).json({
            message: `account suspended, ask the support to recover within ${days} days and ${hours} hours.`,
        });
        return;
    }
    res.cookie("token", u.token); // Here it is; I also tried adding the secure/httpOnly params
    res.json({ message: "Logged in", token: u.token });
});
```
Client-side, here is my axios request:
/login/+page.svelte
```ts
try {
    let result = await axios.post(
        `${root}/users/login`,
        { username: usernameInput.value, password: passwordInput.value },
        { withCredentials: true }
    );
    writeSuccess(result.data.message);
    window.location.href = '/';
} catch (e) {
    writeError(e.response.data.message);
    console.log(e);
}
```
My website is hosted on Vercel, so it is served over HTTPS. My server is hosted on a Node.js service behind Cloudflare with a domain name (not a bare IP), so both the server and the client are on HTTPS.
I really have tried everything, including the existing solutions on Stack Overflow, and the cookies are still not set on login and register.
If there are Nginx configurations to add server-side, I will ask my founder to set them.
Thanks for your help |
null |
I want to get the "VacationCount" from my off-days string. I have tried this code, but it does not work. Where is my mistake?
```
val offDaysString = "8888856"
val returnPending = 20 // How many days ahead to check (return pending)
var vacationCount = 0
if (offDaysString != null) {
    val offDays: Array<String> = offDaysString.toCharArray().map {
        it.toString()
    }.toTypedArray() // Split the string into individual characters

    for (i in 0 until returnPending) {
        cal.add(Calendar.DATE, i.toInt()) // Current date; each loop iteration adds one day
        val today = cal.timeInMillis/(1000*60*60*24*7) % 7 // Today's day, like Sun = 0, Mon = 1, Tue = 2, Wed = 3
        for (j in 0 until offDays.size) {
            if (offDays[j].toInt() == today.toInt()) { // Compare with every weekly off day
                vacationCount++
            }
        }
    }
}
```
I tried some for-loop patterns, but they did not work.
https://stackoverflow.com/questions/60423341/nested-for-loop-in-kotlin-with-a-starts-at-in-the-inner-loop
|
The best place to do validation is in the setter functions; validation done only in `def __init__()` can be bypassed using dot notation.
**Example**
Suppose we have a class `Account` that takes in a name and an amount. The name shouldn't be blank and the amount shouldn't be negative.
```
class Account:
    def __init__(self, name, amount):
        # validation code
        if not name:
            raise ValueError("name shouldn't be blank")
        if amount < 0:
            raise ValueError("amount shouldn't be negative")
        self.name = name
        self.amount = amount
```
*Somewhere in our code*
```
savings = Account('locked savings', 100)
```
We can mutate the values stored in `savings` in such a way that our validation code never runs (since it's only executed when instantiating), i.e.
```
savings.name = ''
savings.amount = -3999
```
To avoid this, just put all the validation code in your setters
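For instance, one way to do that in Python is with `@property` setters. This is a sketch of the same `Account` class, reworked so that later assignments such as `savings.amount = -3999` also raise:

```python
class Account:
    def __init__(self, name, amount):
        # These assignments go through the setters below,
        # so construction is validated too.
        self.name = name
        self.amount = amount

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if not value:
            raise ValueError("name shouldn't be blank")
        self._name = value

    @property
    def amount(self):
        return self._amount

    @amount.setter
    def amount(self, value):
        if value < 0:
            raise ValueError("amount shouldn't be negative")
        self._amount = value
```

Now `savings.name = ''` raises the same `ValueError` as the constructor would.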
|
Both work when you want to read `data-something` in JS, including when the attribute was added in JS via `dataset.something`.
However, `getAttribute` performs much better than `dataset` ([performance test reference][1]):
getAttribute x 122,465 ops/sec ±4.62% (60 runs sampled)
dataset x 922 ops/sec ±0.58% (62 runs sampled)
[1]: https://www.measurethat.net/Benchmarks/Show/14432/0/getattribute-vs-dataset |
**Feb 2024**<br>Obtain an iterator by calling the JsonNode's `iterator()` method, and go on...
    JsonNode array = datasets.get("datasets");
    if (array.isArray()) {
        Iterator<JsonNode> itr = array.iterator();
        /* Set up a loop that makes a call to hasNext().
           Have the loop iterate as long as hasNext() returns true. */
        while (itr.hasNext()) {
            JsonNode item = itr.next();
            // do something with array elements
        }
    } |
I have a very basic requirement on my frontend where I want to update the incoming images every second, but I'm not sure how to do it...
The following is the tag where I want to update the value:
<img id="receivedImage" src="<%= imageData %>" alt="Received Frame">
It receives data from the backend:
    app.get('/receiver', function(req, res) {
        res.render('receiver', { imageData: receivedFrameData });
    });

    wss.on('connection', function connection(ws) {
        console.log('Client connected.');

        ws.on('message', function incoming(imageData) {
            console.log('Received frame from client.');
            console.log(imageData);
            receivedFrameData = imageData;
            socketio.emit('frame', imageData); // Emit the frame data using Socket.IO
        });

        ws.on('close', function close() {
            console.log('Client disconnected.');
        });
    });
Currently I'm refreshing the entire page to re-render the new `src`, but that seems to be a very shabby approach. Can someone guide me here? |
MySQL Workbench is not compatible with MariaDB ... and it doesn't work |
|macos|mariadb|mysql-workbench| |
How about this? Not quite as seamless but a bit closer to what you want.
    diff = df["Input"] - df["Input"].shift(1)
    diff.columns = pd.MultiIndex.from_product([["Diff"], diff.columns])
    df = pd.concat([df, diff], axis=1)
Regarding the second part of your question (really a separate question), the problem is that `loc` takes an array, list, or scalar, not a DataFrame object. You can thus make it work by casting to numpy:
    for g in df.groupby(("Meta","ID")):
        df.loc[g[1].index, "Diff"] = (g[1]["Input"] - g[1]["Input"].shift(1)).to_numpy()
(You can also use `values` instead of `to_numpy()`, but it is not recommended, see [here](https://stackoverflow.com/a/54324513/6220759) as to why.)
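As an aside, the per-group loop can often be avoided entirely with `groupby(...).diff()`. A minimal sketch with flat (non-MultiIndex) columns and made-up data for clarity:

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 1, 2, 2], "Input": [10, 13, 5, 9]})
# diff() is computed within each ID group; the first row of each group is NaN.
df["Diff"] = df.groupby("ID")["Input"].diff()
print(df)
```

With MultiIndex columns as in your frame, the same idea applies after selecting the relevant column levels.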
|
For such a query:
```
select
    s.start_time::timestamptz at time zone 'America/New_York' as t1,
    s.start_time::timestamptz at time zone 'UTC' as t2,
    s.start_time::timestamptz at time zone 'UTC' at time zone 'America/New_York' as t3,
    s.start_time
from
    my_table s
order by
    id desc;
```
I get results like this:
[![enter image description here][1]][1]
As we can see, it properly returns -5 for `t1` and 0 for `t2` (my UTC offset is +1), but the result in `t3` is strange to me. What's happening there? I thought that it should be equal to `t1`, but it's not.
[1]: https://i.stack.imgur.com/4twFT.png |
`::timestamptz at time zone <zone1> at time zone <zone2>` returns wrong result |
|timezone|psql| |
Here is my solution: first sort the array, then call the recursive function. Note the base case checks `i >= j`, so the recursion stops when the pointers meet and the same element is never paired with itself.

    public static void main(String[] args) {
        int[] input = new int[]{2, 7, 11, 15};
        Arrays.sort(input);
        System.out.println(twoSumWithRecursion(input, 99, 0, input.length - 1));
    }

    private static String twoSumWithRecursion(int[] input, int target, int i, int j) {
        if (i >= j) {
            return "Target not found";
        }
        if (input[i] + input[j] == target) {
            return String.format("Two numbers: %s %s", input[i], input[j]);
        }
        if (input[i] + input[j] < target) {
            return twoSumWithRecursion(input, target, i + 1, j);
        }
        return twoSumWithRecursion(input, target, i, j - 1);
    } |
I am deploying an ML model using Gradio. After deploying, when I enter the inputs, the output shows an error.
The code for this output is:
```
import gradio as gr
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Sample data (replace this with your actual data)
X_train = np.array([[230.1, 37.8, 69.2],
                    [44.5, 39.3, 45.1],
                    [17.2, 45.9, 69.3],
                    [151.5, 41.3, 58.5],
                    [180.8, 10.8, 58.4]])
y_train = np.array([22.1, 10.4, 9.3, 18.5, 12.9])

# Initialize and train your Linear Regression model
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scale = scaler.transform(X_train)

lm = LinearRegression()
lm.fit(X_train_scale, y_train)

# Define prediction function
def predict_sales(tv, radio, newspaper):
    # Scale the input features
    input_features = scaler.transform([[tv, radio, newspaper]])
    # Predict sales
    prediction = lm.predict(input_features)
    return prediction[0]

# Create Gradio Interface
tv_input = gr.Number(label="TV")
radio_input = gr.Number(label="Radio")
newspaper_input = gr.Number(label="Newspaper")
output_text = gr.Textbox(label="Predicted Sales")

gr.Interface(fn=predict_sales,
             inputs=[tv_input, radio_input, newspaper_input],
             outputs=output_text,
             title="Sales Prediction",
             description="Enter advertising expenses to predict sales",
             debug=True, enable_queue=True).launch()
```
[![enter image description here][1]][1]
How do I solve this error?
[1]: https://i.stack.imgur.com/2ZQBj.png |
How to continuously update HTML tag to get values from EJS and backend? |
|html|node.js|ejs| |
There is no difference.
`waitKey()`'s first argument's default is 0. If you call the function without arguments, the default argument is used.
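This is the general default-argument mechanism of the language, not anything OpenCV-specific. As a plain-Python sketch (the `wait_key` function here is a hypothetical stand-in, not the real `cv2.waitKey`):

```python
def wait_key(delay=0):
    # Stand-in for cv2.waitKey: the first parameter defaults to 0 when omitted.
    return f"waiting with delay={delay}"

# Calling with no argument is identical to passing the default explicitly:
assert wait_key() == wait_key(0)
```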
Nothing in C++ or Python *forbids* you from passing values equal to the defaults. It is not an error and not a problem. It is a matter of style. |
If you are looking to have a single producer of some data and multiple "consumers" of that data, then the following code uses `Condition` instances with notification. For demo purposes the producer only produces 10 pieces of data and we have 3 consumers. This will keep the output to a reasonable length:
**Update Using Linked List**
```python
import threading


class Node:
    def __init__(self, data, previous_node):
        self.consumed_count = 0
        self.next = None
        self.data = data
        if previous_node:
            previous_node.next = self

    def __repr__(self):
        return f'[data: {repr(self.data)}, consumed_count: {self.consumed_count}, next: {self.next}]'


class data_provider:
    def __init__(self, num_consumers) -> None:
        self.num_consumers = num_consumers
        self.lock = threading.Lock()
        self.condition = threading.Condition()
        self.running = True
        # To simplify the code, the first node in the list is a dummy:
        self.linked_list = Node(None, None)
        data_generator_thread = threading.Thread(target=self.data_generator)
        data_generator_thread.start()

    def data_generator(self):
        import time

        last_node = self.linked_list
        for cnt in range(1, 11):  # Reduced count for demo purposes
            # For demo purposes let's introduce a pause:
            time.sleep(.5)
            last_node = Node({'Name': 'Data', 'Count': cnt}, last_node)
            with self.condition:
                self.condition.notify_all()
        print('Done producing')
        # Let consumers know that no more data will be coming:
        with self.condition:
            self.running = False
            self.condition.notify_all()

    def remove_consumed_nodes(self):
        with self.lock:
            # Remove completely consumed links except for the last one:
            prev_node = self.linked_list.next
            node = prev_node.next
            while node and node.consumed_count == self.num_consumers:
                prev_node = node
                node = node.next
            self.linked_list.next = prev_node


N_PRINTERS = 3  # The number of printer threads:

obj = data_provider(N_PRINTERS)

def printing(id):
    last_node = obj.linked_list
    while True:
        with obj.condition:
            obj.condition.wait_for(
                lambda: not obj.running or last_node.next
            )
            if not last_node.next:
                return
        last_node = last_node.next
        while True:
            print(id, ':', last_node.data)
            with obj.lock:
                last_node.consumed_count += 1
            if not last_node.next:
                break
            last_node = last_node.next
        obj.remove_consumed_nodes()

printer_threads = []
for i in range(N_PRINTERS):
    thread = threading.Thread(target=printing, args=(i,))
    thread.start()
    printer_threads.append(thread)
for thread in printer_threads:
    thread.join()
print('End')
print(obj.linked_list)
```
Prints:
```lang-None
1 : {'Name': 'Data', 'Count': 1}
0 : {'Name': 'Data', 'Count': 1}
2 : {'Name': 'Data', 'Count': 1}
0 : {'Name': 'Data', 'Count': 2}
1 : {'Name': 'Data', 'Count': 2}
2 : {'Name': 'Data', 'Count': 2}
0 : {'Name': 'Data', 'Count': 3}
2 : {'Name': 'Data', 'Count': 3}
1 : {'Name': 'Data', 'Count': 3}
0 : {'Name': 'Data', 'Count': 4}
1 : {'Name': 'Data', 'Count': 4}
2 : {'Name': 'Data', 'Count': 4}
0 : {'Name': 'Data', 'Count': 5}
1 : {'Name': 'Data', 'Count': 5}
2 : {'Name': 'Data', 'Count': 5}
0 : {'Name': 'Data', 'Count': 6}
2 : {'Name': 'Data', 'Count': 6}
1 : {'Name': 'Data', 'Count': 6}
0 : {'Name': 'Data', 'Count': 7}
1 : {'Name': 'Data', 'Count': 7}
2 : {'Name': 'Data', 'Count': 7}
2 : {'Name': 'Data', 'Count': 8}
0 : {'Name': 'Data', 'Count': 8}
1 : {'Name': 'Data', 'Count': 8}
2 : {'Name': 'Data', 'Count': 9}
0 : {'Name': 'Data', 'Count': 9}
1 : {'Name': 'Data', 'Count': 9}
Done producing
2 : {'Name': 'Data', 'Count': 10}
0 : {'Name': 'Data', 'Count': 10}
1 : {'Name': 'Data', 'Count': 10}
End
[data: None, consumed_count: 0, next: [data: {'Name': 'Data', 'Count': 10}, consumed_count: 3, next: None]]
```
**Reusable MultiConsumerProducer Class**
The above code can be re-engineered for improved reusability.
```python
import threading
from typing import Iterable, List, Any


class MultiConsumerProducer:
    class Node:
        def __init__(self, data: Any, previous_node: 'Node'):
            self._consumed_count = 0
            self._next = None
            self._data = data
            if previous_node:
                previous_node._next = self

        @property
        def data(self) -> Any:
            return self._data

        def __repr__(self):
            return f'[_data: {repr(self._data)}, _consumed_count: {self._consumed_count}, _next: {self._next}]'

    def __init__(self, num_consumers: int, data_collection: Iterable) -> None:
        self._num_consumers = num_consumers
        self._lock = threading.Lock()
        self._condition = threading.Condition()
        self._running = True
        # To simplify the code, the first node in the list is a dummy:
        self._linked_list = MultiConsumerProducer.Node(None, None)
        threading.Thread(target=self._data_generator, args=(data_collection,), daemon=True).start()

    def print_nodes(self) -> None:
        """Print linked list of nodes."""
        print(self._linked_list)

    def _data_generator(self, data_collection):
        """Generate nodes."""
        last_node = self._linked_list
        for data in data_collection:
            last_node = MultiConsumerProducer.Node(data, last_node)
            with self._condition:
                self._condition.notify_all()
        self._running = False
        with self._condition:
            self._condition.notify_all()

    def get_next_nodes(self, last_node_processed: Node=None) -> List[Node]:
        """Get next list of ready nodes."""
        last_node = last_node_processed or self._linked_list
        with self._condition:
            self._condition.wait_for(
                lambda: not self._running or last_node._next
            )
            if not last_node._next:
                return []
        nodes = []
        last_node = last_node._next
        while True:
            nodes.append(last_node)
            if not last_node._next:
                return nodes
            last_node = last_node._next

    def consumed_node(self, node: Node) -> None:
        """Show node has been consumed."""
        with self._lock:
            node._consumed_count += 1
            if node._consumed_count == self._num_consumers:
                # Remove completely consumed links except for the last one:
                prev_node = self._linked_list._next
                node = prev_node._next
                while node and node._consumed_count == self._num_consumers:
                    prev_node = node
                    node = node._next
                self._linked_list._next = prev_node

##############################################################

def producer():
    import time

    for cnt in range(1, 11):  # Reduced count for demo purposes
        # For demo purposes let's introduce a pause:
        time.sleep(.5)
        yield {'Name': 'Data', 'Count': cnt}
    print('Done producing')

N_PRINTERS = 3  # The number of printer threads:

obj = MultiConsumerProducer(N_PRINTERS, producer())

def printing(id):
    last_node_processed = None
    while (nodes := obj.get_next_nodes(last_node_processed)):
        for last_node_processed in nodes:
            print(id, ':', last_node_processed.data)
            obj.consumed_node(last_node_processed)

printer_threads = []
for i in range(N_PRINTERS):
    thread = threading.Thread(target=printing, args=(i,))
    thread.start()
    printer_threads.append(thread)
for thread in printer_threads:
    thread.join()
print('End')

print('\nNodes:')
obj.print_nodes()
```
**One More Time**
The following function generates an abstract base class that uses queues for delivering work and supports a producer and consumers running in either threads or processes:
```python
def generate_multi_consumer_producer(n_consumers, use_multiprocessing: bool=False, queue_size=0):
    """Generate an abstract base for single producer multiple consumers.

    n_consumers: The number of consumers.
    use_multiprocessing: True to use producer/consumers that run in child processes,
        otherwise child threads are used.
    queue_size: If producing is faster than consumption, you can specify a
        positive value for queue_size to prevent the queues from continuously
        growing."""

    from abc import ABC, abstractmethod
    from typing import List, Iterable

    if use_multiprocessing:
        from multiprocessing import Process as ExecutionClass, JoinableQueue as QueueClass
    else:
        from threading import Thread as ExecutionClass
        from queue import Queue as QueueClass

    class MultiConsumerProducer(ABC):
        def __init__(self, n_consumers: int=n_consumers, queue_size=queue_size) -> None:
            self._n_consumers = n_consumers
            self._queues = [QueueClass(queue_size) for _ in range(n_consumers)]
            self._producer = ExecutionClass(target=self._run)
            self._producer.start()

        def _run(self):
            # Start the consumers:
            for consumer_id in range(self._n_consumers):
                ExecutionClass(
                    target=self._consumer,
                    args=(consumer_id, self._queues[consumer_id]),
                    daemon=True
                ).start()
            # Produce the data
            for data in self.produce():
                for queue in self._queues:
                    queue.put(data)
            # Wait for all work to be completed
            for queue in self._queues:
                queue.join()

        def join(self) -> None:
            """Wait for all work to complete."""
            self._producer.join()

        def _consumer(self, consumer_id: int, queue: QueueClass):
            while True:
                data = queue.get()
                try:
                    self.consume(consumer_id, data)
                except Exception as e:
                    print(f'Exception in consumer {consumer_id}: {e}')
                finally:
                    queue.task_done()

        @abstractmethod
        def produce(self):
            """This should be a generator function."""
            pass

        @abstractmethod
        def consume(self, consumer_id: int, data: object) -> None:
            pass

    return MultiConsumerProducer
```
The following is an example usage where I have 3 existing consumer functions and an existing producer function showing how you could use them without modification:
```python
def consumer_0(consumer_id, n):
    print(f'id {consumer_id}: {n} ** 1 = {n}')

def consumer_1(consumer_id, n):
    print(f'id {consumer_id}: {n} ** 2 = {n ** 2}')

def consumer_2(consumer_id, n):
    print(f'id {consumer_id}: {n} ** 3 = {n ** 3}')

def producer():
    import time

    for n in range(1, 11):  # Reduced count for demo purposes
        # For demo purposes let's introduce a pause:
        time.sleep(.5)
        yield n
    print('Done producing', flush=True)

MultiConsumerProducer = generate_multi_consumer_producer(3, use_multiprocessing=True)

class MyMultiConsumerProducer(MultiConsumerProducer):
    """An example that uses 3 different consumers."""

    consumers = [consumer_0, consumer_1, consumer_2]

    def produce(self):
        yield from producer()

    def consume(self, consumer_id, data):
        self.consumers[consumer_id](consumer_id, data)

if __name__ == '__main__':
    p = MyMultiConsumerProducer(3)
    # Wait for all work to complete:
    p.join()
```
Prints:
```lang-None
id 0: 1 ** 1 = 1
id 1: 1 ** 2 = 1
id 2: 1 ** 3 = 1
id 0: 2 ** 1 = 2
id 2: 2 ** 3 = 8
id 1: 2 ** 2 = 4
id 0: 3 ** 1 = 3
id 2: 3 ** 3 = 27
id 1: 3 ** 2 = 9
id 2: 4 ** 3 = 64
id 1: 4 ** 2 = 16
id 0: 4 ** 1 = 4
id 0: 5 ** 1 = 5
id 1: 5 ** 2 = 25
id 2: 5 ** 3 = 125
id 0: 6 ** 1 = 6
id 1: 6 ** 2 = 36
id 2: 6 ** 3 = 216
id 0: 7 ** 1 = 7
id 1: 7 ** 2 = 49
id 2: 7 ** 3 = 343
id 0: 8 ** 1 = 8
id 1: 8 ** 2 = 64
id 2: 8 ** 3 = 512
id 0: 9 ** 1 = 9
id 2: 9 ** 3 = 729
id 1: 9 ** 2 = 81
Done producing
id 2: 10 ** 3 = 1000
id 1: 10 ** 2 = 100
id 0: 10 ** 1 = 10
``` |
Update
==
Per the comments, OP clarifies:
> Yes, I want to capture any paragraph that contains only emphasized text, be it enclosed in one or more `em` tags.
Therefore, this updated XPath,
//p[em][not(node()[not(self::em)])]
will select all `p` elements with one or more `em` child elements, but no other children of any sort — only fully emphasized paragraphs.
---
Old answer
==
This XPath,
//p[count(node())=1][em]
will select all `p` elements with a single child node that is an `em` element.
---
Explanation
---
- `//p` selects all `p` elements in the document.
- `[count(node())=1]` filters to only those `p` elements that have a single child `node()`. Since `node()` matches nodes of *any* type (including both element nodes and text nodes), it will ensure that only `p` elements with a single child of any type are selected.
- `[em]` filters to only those single-child `p` elements that have an `em` element child.
Therefore, for your input XML/HTML, only the targeted `p`,
<p><em>The paragraph I want to capture</em></p>
will be selected. Had there been another `p` with three `em` children,
<p><em>Do</em><em>not</em><em>select</em></p>
or one `em` child and other element children,
<p><em>Do</em><sup>not</sup><sub>select!</sub><span> or else!</span></p>
such `p` elements would *not* have been selected.
**Warning**: The XPath in the currently accepted answer, `//p[not(text())][em]`, however, would select such `p` elements, which did not appear to me to be your intention.
---
|
It seems to me that it's not supported directly. You will have to deploy it and then use the endpoint to generate predictions, as described in [this notebook][1]:
    instances = [
        {
            "prompt": "My favourite condiment is",
            "n": 1,
            "max_tokens": 200,
            "temperature": 1.0,
            "top_p": 1.0,
            "top_k": 10,
        },
    ]
    response = endpoint.predict(instances=instances)
    for prediction in response.predictions:
        print(prediction)
You can also use the HuggingFace transformers library if you want to use it locally. An example is provided in the same notebook.
[1]: https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_mistral.ipynb |
There is an issue in Django with the LogoutView. When I open the link 'accounts/logout', the Django server prints this in the cmd:
Method Not Allowed (GET): /accounts/logout/
Method Not Allowed: /accounts/logout/
[24/Feb/2024 13:48:11] "GET /accounts/logout/ HTTP/1.1" 405 0
This is the URLs file content:
    from django.urls import path, include
    from django.contrib.auth import views as auth_views
    from .views import showProfile, logout_user

    app_name = 'user'

    urlpatterns = [
        path('logout/', auth_views.LogoutView.as_view(template_name='registration/logged_out.html'), name='logout'),
        path('', include('django.contrib.auth.urls')),
    ]
and this is the 'registration/logged_out.html' file content:
    {% extends "generic_base.html" %}

    {% block content %}
        <form method="post" action="{% url 'user:logout' %}">
            {% csrf_token %}
            <button type="submit">Logout</button>
        </form>
    {% endblock content %}
The template file is located in the application 'accounts', and the `app_name` is `'user'`.
and this is the generic_base.html template content:
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        {% block title %}
            <title>Donations</title>
        {% endblock title %}
    </head>
    <body>
        {% block content %}
        {% endblock content %}
    </body>
    </html>
I tried a lot of ways to fix this issue, including all the solutions provided on Stack Overflow about this problem and those in [Django built in Logout view `Method Not Allowed (GET): /users/logout/`][1], but none of them worked.
[1]: https://stackoverflow.com/questions/77690729/django-built-in-logout-view-method-not-allowed-get-users-logout |
Tried everything; the cookies don't get set even though I have the right headers, CORS settings, fetch parameters, etc. |
|node.js|express|cookies|axios|cors| |
null |
It seems you are using the `lombok-maven-plugin` to perform the delombok task during a Maven build. That is a third-party plugin not maintained by the lombok team, and it has not been updated for several lombok versions. The latest plugin version 1.18.20.0 uses lombok 1.18.20, which does *not* work for JDK 17 or JDK 21.
However, you can easily instruct `lombok-maven-plugin` to use a newer lombok version via `pom.xml`:
```
<build>
    <plugins>
        <plugin>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok-maven-plugin</artifactId>
            <version>1.18.20.0</version>
            <dependencies>
                <dependency>
                    <groupId>org.projectlombok</groupId>
                    <artifactId>lombok</artifactId>
                    <version>1.18.30</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>
``` |
In WPF we have ToggleButton, which inherits from Button and has properties like Content and IsChecked. It is used to give buttons checked and unchecked states.
Is there a way to handle toggle buttons in MAUI?
I have tried using RadioButtons, which indirectly emulate toggle buttons, but I need a direct way to handle toggle buttons:
<ToggleButton Style="...." Content="Click me" IsChecked="true"/>
|
How to achieve a WPF-style ToggleButton in MAUI |
|wpf|maui|maui-community-toolkit|maui-windows| |
null |
I installed and configured SQL Server Reporting Services 2022 on a Windows Server 2022. However, upon accessing the report server portal (native mode configured), the same admin account used to install Reporting Services is denied access to the ReportServer configuration page where I can add users and set permissions accordingly, i.e. https://reportserver/ReportServer. The following error is displayed on the web page, whether Edge is run as an administrator or not, and also in IE: **"The permissions granted to user 'Domain\UserName' are insufficient for performing this operation. (rsAccessDenied) Get Online Help"**. Please advise what could be the issue. The account is also a sysadmin on SQL Server, and it has RSExecRole for both ReportServer and ReportServerTempDB. |
How to resolve rsAccessDenied from SQL Server Reporting Services |
|sql|sql-server|reporting-services| |
No class is supplied in `java.time.*` to do this, but extra things that did not fit in the JDK are available in [ThreeTen-Extra][1]. In this case, the class `PeriodDuration` is appropriate, which combines a `Period` and a `Duration`. (The two concepts are sufficiently different, particularly with respect to days, that they are generally best treated separately.)
[1]: https://www.threeten.org/threeten-extra/ |
This was working on Ubuntu until recently.
I have had a dual 32/64-bit development environment for several years, and recently my 32-bit apps have stopped running.
This sometimes seems to fail because the binfmt-support service fails to start. (However, even on a new clean install of Ubuntu where the service does not fail, 32-bit binaries refuse to run.)
Here is the error:
The job identifier is 202.
Feb 13 14:59:06 WZ-M18xR2 update-binfmts[864]: update-binfmts: warning: unable to close /proc/sys/fs/binfmt_misc/register: Invalid argument
Feb 13 14:59:06 WZ-M18xR2 update-binfmts[864]: update-binfmts: exiting due to previous errors
Feb 13 14:59:06 WZ-M18xR2 systemd[1]: binfmt-support.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ An ExecStart= process belonging to unit binfmt-support.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 2.
Feb 13 14:59:06 WZ-M18xR2 systemd[1]: binfmt-support.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ The unit binfmt-support.service has entered the 'failed' state with result 'exit-code'.
Feb 13 14:59:06 WZ-M18xR2 systemd[1]: Failed to start Enable support for additional executable binary formats.
Having found another old reference suggesting this might be caused by a conflict with systemd-binfmt, I stopped systemd-binfmt and tried to restart binfmt-support, but I still get the same error.
However, even on a cleanly built system without this error, 32-bit binaries refuse to run with this error:
bash: ./my32app: cannot execute binary file: Exec format error
If I try this:
/lib32/ld-linux.so.2 ./my32app
./my32app: error while loading shared libraries: ./dcc: cannot open shared object file: No such file or directory
or this:
    /usr/bin/qemu-i386-static ./dcc
qemu-i386-static: ./my32app: Invalid ELF image for this architecture
readelf show the only dependency is:
readelf -d ./my32app | grep NEEDED
0x00000001 (NEEDED) Shared library: [libc.so.6]
And libc.so.6 can definitely be found in /lib32
And here is a list of all the installed i386 packages:
dpkg -l | awk '/^ii/ && $4 == "i386" { print }'
ii gcc-12-base:i386 12.3.0-1ubuntu1~22.04 i386 GCC, the GNU Compiler Collection (base package)
ii libblkid1:i386 2.37.2-4ubuntu3 i386 block device ID library
ii libbz2-1.0:i386 1.0.8-5build1 i386 high-quality block-sorting file compressor library - runtime
ii libc6:i386 2.35-0ubuntu3.6 i386 GNU C Library: Shared libraries
ii libcap2:i386 1:2.44-1ubuntu0.22.04.1 i386 POSIX 1003.1e capabilities (library)
ii libcom-err2:i386 1.46.5-2ubuntu1.1 i386 common error description library
ii libcrypt1:i386 1:4.4.27-1 i386 libcrypt shared library
ii libdb5.3:i386 5.3.28+dfsg1-0.8ubuntu3 i386 Berkeley v5.3 Database Libraries [runtime]
ii libdbus-1-3:i386 1.12.20-2ubuntu4.1 i386 simple interprocess messaging system (library)
ii libgamemode0:i386 1.6.1-1build2 i386 Optimise Linux system performance on demand (host library)
ii libgamemodeauto0:i386 1.6.1-1build2 i386 Optimise Linux system performance on demand (client library)
ii libgcc-s1:i386 12.3.0-1ubuntu1~22.04 i386 GCC support library
ii libgcrypt20:i386 1.9.4-3ubuntu3 i386 LGPL Crypto library - runtime library
ii libgmp10:i386 2:6.2.1+dfsg-3ubuntu1 i386 Multiprecision arithmetic library
ii libgpg-error0:i386 1.43-3 i386 GnuPG development runtime library
ii libgpm2:i386 1.20.7-10build1 i386 General Purpose Mouse - shared library
ii libgssapi-krb5-2:i386 1.19.2-2ubuntu0.3 i386 MIT Kerberos runtime libraries - krb5 GSS-API Mechanism
ii libidn2-0:i386 2.3.2-2build1 i386 Internationalized domain names (IDNA2008/TR46) library
ii libk5crypto3:i386 1.19.2-2ubuntu0.3 i386 MIT Kerberos runtime libraries - Crypto Library
ii libkeyutils1:i386 1.6.1-2ubuntu3 i386 Linux Key Management Utilities (library)
ii libkrb5-3:i386 1.19.2-2ubuntu0.3 i386 MIT Kerberos runtime libraries
ii libkrb5support0:i386 1.19.2-2ubuntu0.3 i386 MIT Kerberos runtime libraries - Support library
ii liblz4-1:i386 1.9.3-2build2 i386 Fast LZ compression algorithm library - runtime
ii liblzma5:i386 5.2.5-2ubuntu1 i386 XZ-format compression library
ii libmount1:i386 2.37.2-4ubuntu3 i386 device mounting library
ii libncurses5:i386 6.3-2ubuntu0.1 i386 shared libraries for terminal handling (legacy version)
ii libncurses6:i386 6.3-2ubuntu0.1 i386 shared libraries for terminal handling
ii libncursesw6:i386 6.3-2ubuntu0.1 i386 shared libraries for terminal handling (wide character support)
ii libnsl2:i386 1.3.0-2build2 i386 Public client interface for NIS(YP) and NIS+
ii libnss-nis:i386 3.1-0ubuntu6 i386 NSS module for using NIS as a naming service
ii libnss-nisplus:i386 1.3-0ubuntu6 i386 NSS module for using NIS+ as a naming service
ii libpcre2-8-0:i386 10.39-3ubuntu0.1 i386 New Perl Compatible Regular Expression Library- 8 bit runtime files
ii libpcre3:i386 2:8.39-13ubuntu0.22.04.1 i386 Old Perl 5 Compatible Regular Expression Library - runtime files
ii libselinux1:i386 3.3-1build2 i386 SELinux runtime shared libraries
ii libssl3:i386 3.0.2-0ubuntu1.14 i386 Secure Sockets Layer toolkit - shared libraries
ii libstdc++6:i386 12.3.0-1ubuntu1~22.04 i386 GNU Standard C++ Library v3
ii libsystemd0:i386 249.11-0ubuntu3.11 i386 systemd utility library
ii libtinfo5:i386 6.3-2ubuntu0.1 i386 shared low-level terminfo library (legacy version)
ii libtinfo6:i386 6.3-2ubuntu0.1 i386 shared low-level terminfo library for terminal handling
ii libtirpc3:i386 1.3.2-2ubuntu0.1 i386 transport-independent RPC library
ii libudev1:i386 249.11-0ubuntu3.11 i386 libudev shared library
ii libunistring2:i386 1.0-1 i386 Unicode string library for C
ii libuuid1:i386 2.37.2-4ubuntu3 i386 Universally Unique ID library
ii libzstd1:i386 1.4.8+dfsg-3build1 i386 fast lossless compression algorithm
ii zlib1g:i386
Looking in /usr/share/binfmts I do not see a format for ELF 32-bit support:
ls /usr/share/binfmts/
cli qemu-arm qemu-mips64el qemu-s390x
jar qemu-armeb qemu-mipsel qemu-sh4
llvm-11-runtime.binfmt qemu-cris qemu-mipsn32 qemu-sh4eb
llvm-12-runtime.binfmt qemu-hexagon qemu-mipsn32el qemu-sparc
llvm-14-runtime.binfmt qemu-hppa qemu-ppc qemu-sparc32plus
python2.7 qemu-m68k qemu-ppc64 qemu-sparc64
python3.10 qemu-microblaze qemu-ppc64le qemu-xtensa
qemu-aarch64 qemu-mips qemu-riscv32 qemu-xtensaeb
qemu-alpha qemu-mips64 qemu-riscv64
So I tried adding support following this post (https://stackoverflow.com/questions/36665669/trying-and-failing-to-run-hello-world-32-bit-c-program-on-64-bit-ubuntu-on-w):
sudo apt install qemu-user-static
sudo update-binfmts --install i386 /usr/bin/qemu-i386-static --magic '\x7fELF\x01\x01\x01\x03\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x03\x00\x01\x00\x00\x00' --mask '\xff\xff\xff\xff\xff\xff\xff\xfc\xff\xff\xff\xff\xff\xff\xff\xff\xf8\xff\xff\xff\xff\xff\xff\xff'
sudo service binfmt-support start
But that didn't work either. Looking at a hexdump of one of the 32-bit binaries didn't convince me that the magic value was correct:
hexdump ./dcc | head
0000000 457f 464c 0101 0001 0000 0000 0000 0000
0000010 0003 003e 0001 0000 1300 0000 0034 0000
0000020 7320 0000 0000 0000 0034 0020 000c 0028
0000030 0025 0024 0006 0000 0034 0000 0034 0000
0000040 0034 0000 0180 0000 0180 0000 0004 0000
0000050 0004 0000 0003 0000 01b4 0000 01b4 0000
0000060 01b4 0000 001a 0000 001a 0000 0004 0000
0000070 0001 0000 0001 0000 0000 0000 0000 0000
0000080 0000 0000 0908 0000 0908 0000 0004 0000
0000090 1000 0000 0001 0000 1000 0000 1000 0000
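If I decode those byte-swapped words by hand, byte 4 (EI_CLASS) seems to be 01 (32-bit class), but the e_machine field at offset 0x12 looks like 0x3e, which is EM_X86_64, not EM_386 (0x03) as in the magic I registered. If that reading is right, this may be an x32-ABI binary rather than a plain i386 one, which would also explain why both the i386 loader and qemu-i386 reject it. To double-check, I threw together a small header dumper (`ElfIdent` is my own helper, not part of any toolchain):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ElfIdent {
    // Decode the two ELF header fields that binfmt_misc magic typically
    // matches on: EI_CLASS (byte 4): 1 = 32-bit, 2 = 64-bit; and
    // e_machine (bytes 18-19): 0x03 = EM_386, 0x3e = EM_X86_64.
    // Assumes a little-endian file (EI_DATA = 1), as in the hexdump above.
    public static String describe(byte[] header) {
        if (header.length < 20 || header[0] != 0x7f
                || header[1] != 'E' || header[2] != 'L' || header[3] != 'F') {
            return "not an ELF file";
        }
        int elfClass = header[4] & 0xff;
        int machine = ByteBuffer.wrap(header, 18, 2)
                .order(ByteOrder.LITTLE_ENDIAN).getShort() & 0xffff;
        return String.format("class=%d machine=0x%02x", elfClass, machine);
    }

    public static void main(String[] args) throws IOException {
        byte[] header = Files.readAllBytes(Paths.get(args[0]));
        System.out.println(describe(header));
    }
}
```

Running it as `java ElfIdent ./dcc` should make it obvious whether the registered magic's e_machine bytes can ever match this binary.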
So I am unable to determine how to get 32-bit binaries (that previously worked) to run under Ubuntu.
Any ideas appreciated.
|
I encountered an issue while deploying my Next.js application, and I'm seeking assistance in resolving it. The deployment process fails with the following error:
```
RangeError: Maximum call stack size exceeded
at RegExp.exec (<anonymous>)
at create (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:18889)
at create (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:18918)
at parse.fastpaths (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:18997)
at picomatch.makeRe (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:21635)
at picomatch (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:19637)
at /vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:19294
at Array.map (<anonymous>)
at picomatch (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:19286)
at micromatch.isMatch (/vercel/path0/node_modules/next/dist/compiled/micromatch/index.js:15:1090)
Error: Command "npm run build" exited with 1
```
Despite thorough investigation, I haven't been able to pinpoint the source of this error. Here are some details about my setup:
Technology Stack: I'm using Next.js for building the application.
Deployment Platform: I'm deploying the application on [mention the deployment platform].
Dependencies: Here is a list of the dependencies used in my project ([provide the list of dependencies]).
I've tried enabling debugging options (NODE_DEBUG=regex npm run build), but it didn't provide any additional insights into the issue.
Any suggestions or insights into what might be causing this error would be greatly appreciated. Thank you in advance for your assistance. |
null |
I have a text file with a base64-encoded value, and I tried to decode it and save the result as a PDF file.
Below is my code
File inputfile = new File('/Users/Downloads/DownloadedPDF.txt')
File outputfile = new File('/Users/Downloads/test64.pdf')
byte[] data
try {
FileInputStream fileInputStream = new FileInputStream(inputfile)
data = new byte[(int) inputfile.length()]
fileInputStream.read(data)
fileInputStream.close()
} catch (Exception e) {
    // actually print the stack trace; getStackTrace() alone discards it
    e.printStackTrace()
}
//Decode the data
byte[] decoded = java.util.Base64.getDecoder().decode(data)
try {
FileOutputStream fileOutputStream = new FileOutputStream(outputfile)
//Write the decoded details to the output file [pdf format]
fileOutputStream.write(decoded)
fileOutputStream.flush()
fileOutputStream.close()
} catch (Exception e) {
    // actually print the stack trace; getStackTrace() alone discards it
    e.printStackTrace()
}
While executing the code I received the below error.
java.lang.IllegalArgumentException: Illegal base64 character 3
at java_util_Base64$Decoder$decode$0.call(Unknown Source)
I also tried the URL decoder, but still received the same error:
byte[] decoded = Base64.getUrlDecoder().decode(data) |
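For reference, both the basic and the URL decoders throw on any byte outside their alphabet (such as the 0x03 control byte in the error message), while the MIME decoder silently skips such bytes. A minimal sketch of what I mean (the paths are the same as in my code above; `decodeLenient` is just a name I made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class DecodeToPdf {
    // The MIME decoder ignores line separators and any other bytes outside
    // the base64 alphabet (e.g. a stray 0x03 control byte or a BOM), instead
    // of throwing IllegalArgumentException like Base64.getDecoder().
    static byte[] decodeLenient(byte[] base64Text) {
        return Base64.getMimeDecoder().decode(base64Text);
    }

    public static void main(String[] args) throws IOException {
        // Files.readAllBytes reads the whole file; a single
        // FileInputStream.read(byte[]) call is not guaranteed to fill the buffer.
        byte[] data = Files.readAllBytes(Path.of("/Users/Downloads/DownloadedPDF.txt"));
        Files.write(Path.of("/Users/Downloads/test64.pdf"), decodeLenient(data));
    }
}
```

Is this a reasonable approach, or should I find out where the stray bytes come from in the first place?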
Base64: Illegal base64 character 3 exception
|java|base64|decode|base64url| |
Create a file named setup.py and paste the code below:
```
import subprocess
import sys
from git import Repo
print("Cloning repo...")
git_url = "https://github.com/Shubham654/Tensorflow-examples.git"
repo_dir = "tensorflow-model-maker"
try:
    Repo.clone_from(git_url, repo_dir)
except Exception:
    # ignore the error if the repo has already been cloned
    pass
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", "./tensorflow-model-maker/tensorflow_examples/lite/model_maker/requirements.txt"])
```
Run `pip install GitPython` for the git module, then
run `python setup.py`, and you're done.
Now you can import:
```
from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader
```
|
What you can do is make a red one in Illustrator and then add a script to your HTML file:
```html
<script>
  const myPng = 'path/to/png';
  const redPng = 'path/to/redPng';

  // get the node (say foo) you want this change to take effect on
  const foobar = document.querySelector('foo');

  // for the focus behaviour you have to keep state
  let isFocused = false;

  foobar.addEventListener('mouseover', onMouseOver);
  foobar.addEventListener('mouseout', onMouseOut);
  foobar.addEventListener('focus', onFocus);
  foobar.addEventListener('blur', onBlur);

  // if you want to have a focus event
  function onFocus() {
    this.querySelector('img').src = redPng;
    isFocused = true;
  }

  function onBlur() {
    isFocused = false;
    onMouseOut.call(this);
  }

  // if you want to hover
  function onMouseOver() {
    if (!isFocused) {
      this.querySelector('img').src = redPng;
    }
  }

  function onMouseOut() {
    if (!isFocused) {
      this.querySelector('img').src = myPng;
    }
  }
</script>
```
If you don't intend to focus the PNG, then remove the focus-related functionality:
```html
<script>
  // myPng, redPng and foobar are defined as above
  foobar.addEventListener('mouseover', onMouseOver);
  foobar.addEventListener('mouseout', onMouseOut);

  function onMouseOver() {
    this.querySelector('img').src = redPng;
  }

  function onMouseOut() {
    this.querySelector('img').src = myPng;
  }
</script>
```
This assumes that you have two PNG files made in Illustrator and an HTML file.
A big issue in Django LoginView
|python|django|django-views|django-forms| |
My content page is a PHP document (contact.php), and I am not knowledgeable in PHP. It has a form that submits to email. The form appears on the initial page, but after an invalid submission I get the error text I want while the form, button, text, and footer all disappear. How do I keep them on the page after submission? Basically, I want it to loop for each invalid submission, always allowing the user to try to correct the errors.
Here is my code for both html and php on a php document:
```
<h2>Contact</h2>
<div id="red-error">
<?php
if (isset($_POST['Email'])) {
// EDIT THE 2 LINES BELOW AS REQUIRED
$email_to = "ArthurielBoston@gmail.com";
$email_subject = "Actor Portfolio: email submission";
function problem($error)
{
echo "We are very sorry, but there were error(s) found with the form you submitted. ";
echo "These errors appear below.<br><br>";
echo $error . "<br><br>";
die();
}
// validation expected data exists
if (
!isset($_POST['Name']) ||
!isset($_POST['Email']) ||
!isset($_POST['Message'])
) {
problem('We are sorry, but there appears to be a problem with the form you submitted.');
}
$name = $_POST['Name']; // required
$email = $_POST['Email']; // required
$message = $_POST['Message']; // required
$error_message = "";
$email_exp = '/^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/';
if (!preg_match($email_exp, $email)) {
$error_message .= 'The Email address you entered does not appear to be valid.<br>';
}
$string_exp = "/^[A-Za-z .'-]+$/";
if (!preg_match($string_exp, $name)) {
$error_message .= 'The Name you entered does not appear to be valid.<br>';
}
if (strlen($message) < 2) {
$error_message .= 'The Message you entered do not appear to be valid.<br>';
}
if (strlen($error_message) > 0) {
problem($error_message);
}
$email_message = "Form details below.\n\n";
function clean_string($string)
{
$bad = array("content-type", "bcc:", "to:", "cc:", "href");
return str_replace($bad, "", $string);
}
$email_message .= "Name: " . clean_string($name) . "\n";
$email_message .= "Email: " . clean_string($email) . "\n";
$email_message .= "Message: " . clean_string($message) . "\n";
// create email headers
$headers = 'From: ' . $email . "\r\n" .
'Reply-To: ' . $email . "\r\n" .
'X-Mailer: PHP/' . phpversion();
@mail($email_to, $email_subject, $email_message, $headers);
function valid_user_data() {
// validate $_POST data
// ...
return TRUE;
}
if(valid_user_data()) {
header('Location: https://arthurielbostonactorportfolio.com/successful_submission.html');
}
}
?>
</div>
<h3>Email:</h3>
<form class="form" method="post" action="contact.php">
<label for="Name" class="emailtext"></label>
<input type="text" name="Name" placeholder="Name: (ex: John Doe)"/>
<br>
<label for="Email" class="emailtext"></label>
<input type="text" name="Email" placeholder="Email: (ex: JohnDoe@email.com)"/>
<label for="Message" class="emailtext"></label>
<textarea type="text" name="Message" class="message" placeholder="Message: (ex: Here is my email message.)"></textarea>
<input type="submit" class="button" value="Submit"/>
</form>
<h3>Phone:</h3> <br>
<p class="paragraph1">+1 ( 610 ) 809-8699</p>
<h3>Available Locations To Work:</h3>
<p class="paragraph1">Delaware County, PA
Philadelphia, PA <br>
King of Prussia, PA <br>
New Jersey <br>
Delaware <br>
New York City, NY </p>
</div>
</body>
<footer>
© 2024 Arthuriel Boston. All rights reserved.
</footer>
</html>
``` |
So, it works perfectly fine when I open it, but when I click on it (on the game canvas, not on the window frame; I can even move the window and it keeps working) it just stops responding.
And if I do it while the character is walking, it just keeps walking forever. Can someone help me?
```
package frame_canvas;
import java.awt.Canvas;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.event.FocusEvent;
import java.awt.event.FocusListener;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.awt.event.MouseAdapter;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferStrategy;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
public class Game extends Canvas implements Runnable {
public static JFrame frame;
private Thread thread;
private boolean isRunning = false;
private final int WIDTH = 160;
private final int HEIGHT = 120;
private final int SCALE = 3;
private BufferedImage image;
private Spritesheet sheet;
private BufferedImage[] player;
private int frames = 0;
private int maxFrames = 20;
private int currAnimation = 0, maxAnimation = 2;
    private int playerX = 20; // player's initial X position
    private int playerY = 20; // player's initial Y position
    private int playerSpeed = 1; // player's movement speed
    private int playerDirectionX = 1; // 1 for right, -1 for left
    private int playerDirectionY = 1; // 1 for down, -1 for up
private boolean rightPressed = false;
private boolean leftPressed = false;
private boolean upPressed = false;
private boolean downPressed = false;
public Game() {
sheet = new Spritesheet("/spritesheet.png");
player = new BufferedImage[2];
player[0] = sheet.getSprite(0, 0, 16, 16);
player[1] = sheet.getSprite(16, 0, 16, 16);
this.setPreferredSize(new Dimension(WIDTH*SCALE, HEIGHT*SCALE));
initFrame();
image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
}
public void initFrame() {
frame = new JFrame();
frame.addKeyListener(new KeyAdapter() {
@Override
public void keyPressed(KeyEvent e) {
Game.this.keyPressed(e);
}
public void keyReleased(KeyEvent e) {
Game.this.keyReleased(e);
}
});
        // Adding an empty MouseListener
frame.addMouseListener(new MouseAdapter() {
            // Nothing is done in the MouseListener methods
});
frame.add(this);
frame.setResizable(false);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setVisible(true);
frame.addFocusListener(new FocusListener() {
@Override
public void focusGained(FocusEvent e) {
                System.out.println("The window is focused.");
}
@Override
public void focusLost(FocusEvent e) {
                System.out.println("The window lost focus.");
}
});
}
public synchronized void start() {
thread = new Thread(this);
isRunning = true;
thread.start();
}
public synchronized void stop() {
isRunning = false;
try {
thread.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String args[]) {
Game game = new Game();
game.start();
}
public void tick() {
frames++;
if (frames > maxFrames) {
frames = 0;
currAnimation++;
if (currAnimation >= maxAnimation) {
currAnimation = 0;
}
}
        // Update the player's position according to the keys pressed
        // and check whether it has hit the window bounds
if (leftPressed) {
playerX -= playerSpeed;
if (playerX < -16) {
playerX = WIDTH - player[0].getWidth() + 16;
}
}
if (rightPressed) {
playerX += playerSpeed;
if (playerX > WIDTH - player[0].getWidth() + 16) {
playerX = -16;
}
}
if (upPressed) {
playerY -= playerSpeed;
if (playerY < 0 - 16) {
playerY = HEIGHT - player[0].getHeight() + 16;
}
}
if (downPressed) {
playerY += playerSpeed;
if (playerY > HEIGHT - player[0].getHeight() + 16) {
playerY = 0 - 16;
}
}
        // Check whether the player is idle and set the idle animation
if (!leftPressed && !rightPressed && !upPressed && !downPressed) {
            currAnimation = 0; // Set the animation to the idle sprite
}
}
public void keyPressed(KeyEvent e) {
int key = e.getKeyCode();
        if (key == KeyEvent.VK_W || key == KeyEvent.VK_UP) {
            // Move up
            upPressed = true;
            playerDirectionY = -1; // Set the vertical direction to up
        } else if (key == KeyEvent.VK_S || key == KeyEvent.VK_DOWN) {
            // Move down
            downPressed = true;
            playerDirectionY = 1; // Set the vertical direction to down
        } else if (key == KeyEvent.VK_A || key == KeyEvent.VK_LEFT) {
            // Move left
            leftPressed = true;
            playerDirectionX = -1; // Set the horizontal direction to left
        } else if (key == KeyEvent.VK_D || key == KeyEvent.VK_RIGHT) {
            // Move right
            rightPressed = true;
            playerDirectionX = 1; // Set the horizontal direction to right
        }
}
public void keyReleased(KeyEvent e) {
int key = e.getKeyCode();
if (key == KeyEvent.VK_W || key == KeyEvent.VK_UP) {
upPressed = false;
} else if (key == KeyEvent.VK_S || key == KeyEvent.VK_DOWN) {
downPressed = false;
} else if (key == KeyEvent.VK_A || key == KeyEvent.VK_LEFT) {
leftPressed = false;
} else if (key == KeyEvent.VK_D || key == KeyEvent.VK_RIGHT) {
rightPressed = false;
}
}
public void render() {
BufferStrategy bs = this.getBufferStrategy();
if (bs == null) {
this.createBufferStrategy(3);
return;
}
Graphics g = image.getGraphics();
g.setColor(new Color(19, 19, 19));
g.fillRect(0, 0, WIDTH, HEIGHT);
Graphics2D g2 = (Graphics2D) g;
        // Check whether the player is moving left
        if (playerDirectionX == -1) { // Left
            // Create a transform to flip the sprite horizontally
            AffineTransform transform = new AffineTransform();
transform.translate(player[currAnimation].getWidth(), 0);
transform.scale(-1, 1);
AffineTransformOp op = new AffineTransformOp(transform, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
            // Apply the transform and draw the horizontally flipped sprite
            BufferedImage invertedPlayer = op.filter(player[currAnimation], null);
g2.drawImage(invertedPlayer, playerX, playerY, null);
        } else { // Right
g2.drawImage(player[currAnimation], playerX, playerY, null);
}
g.dispose();
g = bs.getDrawGraphics();
g.drawImage(image, 0, 0, WIDTH*SCALE, HEIGHT*SCALE, null);
bs.show();
}
public void run() {
long lastTime = System.nanoTime();
double amountOfTicks = 60.0;
double ns = 1000000000 / amountOfTicks;
double delta = 0;
int frames = 0;
double timer = System.currentTimeMillis();
while(isRunning) {
long now = System.nanoTime();
delta+= (now - lastTime) / ns;
lastTime = now;
if (delta >= 1) {
tick();
render();
frames++;
delta--;
}
if (System.currentTimeMillis() - timer >= 1000) {
System.out.println("FPS:" + frames);
frames = 0;
timer += 1000;
}
}
stop();
}
}
```
I tried to keep playing the game after clicking on it with my mouse, but it just stops responding as soon as I click.
My game is losing focus when I click on the Canvas |
I had an SES templated email with a line looking like this:
`<h6>Welcome {{usernamee}}</h6>`
My emails were never being sent and it took me forever to see I had a typo in my template.
Looking around, SNS seems to be the service that should tell me when the rendering of the email has failed. I have an SNS topic with a simple Lambda that tells me when an email has either bounced or been delivered successfully, but no rendering error ever surfaced.
Lambda:
```javascript
exports.handler = function(event, context) {
var message = event.Records[0].Sns.Message;
console.log('Message received from SNS:', message);
};
```
Here's the rest of my configuration for my SNS:
```lang-hcl
resource "aws_sns_topic" "ses_topic" {
name = "ses-topic"
}
resource "aws_lambda_function" "ses_sns_notification_sender" {
filename = "lambdas/ses-sns-notification-sender.zip"
function_name = "ses-sns-notification-sender"
role = aws_iam_role.iam_for_lambda.arn
handler = "ses-sns-notification-sender.handler"
runtime = "nodejs14.x"
}
resource "aws_sns_topic_subscription" "ses_sns_notification_sender_subscription" {
topic_arn = aws_sns_topic.ses_topic.arn
protocol = "lambda"
endpoint = aws_lambda_function.ses_sns_notification_sender.arn
}
resource "aws_lambda_permission" "allow_sns" {
statement_id = "AllowExecutionFromSNS"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.ses_sns_notification_sender.function_name
principal = "sns.amazonaws.com"
source_arn = aws_sns_topic.ses_topic.arn
}
resource "aws_ses_identity_notification_topic" "email_notification" {
count = length(var.sns_notification_type) // ["Bounce", "Complaint", "Delivery"]
topic_arn = aws_sns_topic.ses_topic.arn
notification_type = var.sns_notification_type[count.index]
identity = var.ses_identity
include_original_headers = true
}
``` |
SNS failed to warn about a missing variable in a SES Templated email |
|amazon-web-services|amazon-sns|amazon-ses| |
After opening an R engine, I assigned to it a data frame that I successfully retrieved from the database.
After printing some values from the R engine, I found that all the string values are mojibake, like in the following example:
"2024-00010005" -> "ÿþ2024-00010005ÿþ"
```java
if(conn != null) {
try{
if(!conn.isClosed()){
stmt = conn.createStatement();
rs = stmt.executeQuery(querry);
String sampleCode = null;
ArrayList<Double> Test_Dauer_h = new ArrayList<Double>();
while(rs.next()) {
if(i == 0) {
sampleCode = rs.getString("sampleCode");
}
Test_Dauer_h.add(i, Double.parseDouble(rs.getString("Test_Dauer_h")));
System.out.println(sampleCode+";["+Test_Dauer_h.get(i)+"];");
i++;
}
                System.out.println("(" + i + " rows read)");
//init engine & create R objects..
if(i > 0) {
String[] header = {"sampleCode","Test_Dauer_h"};
String[] _sampleCode = new String[i];
double[] _Test_Dauer_h = new double[i];
REXP h = null;
for(int j = 0; j < i; j++) {
_sampleCode[j] = sampleCode;
}
engine = REngine.engineForClass("org.rosuda.REngine.JRI.JRIEngine", new String[] {"-no-save"}, new REngineStdOutput(), false);
h = REXP.createDataFrame(new RList(new REXP[] {new REXPString(_sampleCode),new REXPDouble(_Test_Dauer_h)},header));
engine.assign("df_lab", h);
engine.parseAndEval("print(labSiteCode);print(max(df_lab$Test_Dauer_h)); print(df_lab$sampleCode[1]); print(df_lab$HGS[1])");
engine.parseAndEval("library(randomForestSRC)");
engine.parseAndEval("library(sqldf)");
engine.parseAndEval("library(stringr)");
}
}
}catch (SQLException e){
e.printStackTrace();
JOptionPane.showMessageDialog(null, e.getMessage() + "\n" + querry);
}catch (REXPMismatchException e) {
e.printStackTrace();
JOptionPane.showMessageDialog(null, e.getMessage());
}
}finally{
try{
rs.close();
stmt.close();
if(i > 0)
engine.close();
}catch (SQLException e) {
e.printStackTrace();
JOptionPane.showMessageDialog(null, e.getMessage() +"\n" + querry);
}
SqlServerJdbcUtils.disconnect(conn);
}
}
```
What did I do wrong in my program?
[Output][1]
[1]: https://i.stack.imgur.com/T9AAP.png |
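For context on the ÿþ prefix: 0xFF 0xFE is the UTF-16LE byte-order mark, and that is exactly what it looks like when UTF-16LE bytes are decoded as a single-byte charset such as ISO-8859-1 (the NUL bytes between the ASCII digits are invisible when printed). I can't be sure this is what the driver/engine combination is actually doing, but here is a minimal demonstration (`MojibakeDemo` and `repair` are my own names):

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    // Undo a wrong single-byte decode of UTF-16LE data: re-encode with the
    // wrong charset to recover the raw bytes, decode them as UTF-16LE,
    // and strip the byte-order mark if present.
    static String repair(String garbled) {
        String fixed = new String(
                garbled.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_16LE);
        return fixed.startsWith("\uFEFF") ? fixed.substring(1) : fixed;
    }

    public static void main(String[] args) {
        String original = "2024-00010005";
        // UTF-16LE bytes with a BOM, then misread as ISO-8859-1: the BOM
        // bytes 0xFF 0xFE become the visible "ÿþ" prefix.
        byte[] utf16 = ("\uFEFF" + original).getBytes(StandardCharsets.UTF_16LE);
        String garbled = new String(utf16, StandardCharsets.ISO_8859_1);
        System.out.println(garbled);         // shows the ÿþ prefix
        System.out.println(repair(garbled)); // recovers the original text
    }
}
```

If this matches what is happening, the real fix is presumably in how the result set bytes are fetched or in the connection's charset configuration, rather than repairing strings after the fact.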
Recently we released the first release candidate of a [high-performance JSON masker library in Java with no runtime dependencies](https://github.com/Breus/JSON-masker), which you might want to use for this.
It works on any JSON value type (with any nesting in both JSON objects and arrays), has an extensive and configurable API, and adds minimal computation overhead both in terms of CPU time and memory allocations. Additionally, it has limited JsonPath support.
If the masking happens in a performance-critical flow, or you don't want to maintain in-house code (depending on Jackson's APIs) for this, you might want to use this library instead.
The README should contain plenty of examples to show how to use it for your use case. Additionally, all methods part of the API are documented using JavaDoc. |