This was down to a misunderstanding on my part: I hadn't realised the power of Vue. It doesn't *only* replace the HTML element with the specified id, e.g. "app"; it also looks inside that element for any other elements that share an id with any of its components, and swaps those. |
I am new to Avalonia and also have never used WPF before, so I also don't know how it works there. I would like to display and edit a DataGrid in Avalonia.
Displaying items in the DataGrid works, but it is not editable, meaning that I do not even get the possibility of changing values in the GUI (I cannot, for example, change the state of a checkbox). If I change the DataGrid to an ItemsControl, it becomes editable. What do I need to change to make it editable?
This is my code:
View.xaml:
```
<StackPanel Orientation="Horizontal">
    <!-- not working, is not editable
    <DataGrid ItemsSource="{Binding SpectrometerList, Mode=TwoWay}"
              GridLinesVisibility="All"
              AutoGenerateColumns="True"
              BorderThickness="1"
              BorderBrush="Gray"
              IsReadOnly="False">
    </DataGrid>
    -->

    <!-- Is editable -->
    <ItemsControl ItemsSource="{Binding SpectrometerList}">
        <ItemsControl.ItemTemplate>
            <DataTemplate>
                <CheckBox Margin="4"
                          IsChecked="{Binding X}"
                          Content="{Binding SerialNumber}"/>
            </DataTemplate>
        </ItemsControl.ItemTemplate>
    </ItemsControl>

    <StackPanel Orientation="Vertical">
        <Button>1</Button>
        <Button>2</Button>
    </StackPanel>
</StackPanel>
```
ViewModel.cs:
```
public ObservableCollection<Spectrometer> SpectrometerList { get; set; }

public SpectrometerViewModel() {
    SpectrometerList = new ObservableCollection<Spectrometer>(Spectrometer.GetSpectrometers());
}
```
Model.cs:
```
public class Spectrometer
{
    public byte ID { get; set; }
    public string SerialNumber { get; set; } = string.Empty;
    public byte Reactor { get; set; }
    public bool X { get; set; }

    public static IEnumerable<Spectrometer> GetSpectrometers()
    {
        var spec1 = new Spectrometer { ID = 0, SerialNumber = "Test 1", Reactor = 1, X = true };
        return new[]
        {
            spec1,
            new Spectrometer { ID = 1, SerialNumber = "Test 2", Reactor = 2, X = false },
            new Spectrometer { ID = 2, SerialNumber = "Test 3", Reactor = 3, X = true }
        };
    }
}
``` |
Editable DataGrid in Avalonia |
|c#|avaloniaui|avalonia| |
I'm trying to plot a map that includes many shapefiles and I'm having trouble with the legend. This is the map as is but I'd like the polygons to also be in the legend.
[map](https://i.stack.imgur.com/dJvp7.png)
This is the code I'm using (as I'm using multiple shapefiles I have no idea how to make this reproducible, sorry about that):
```
ggplot(data = transformed_sea)+
geom_sf(fill="lightblue1")+
geom_sf(data = transformed_locations, size = 2, aes(color = Fjord, fill = Fjord))+
scale_color_brewer(palette = "Dark2")+
geom_sf(data = transformed_places, size = 2, color="black", fill="black")+
geom_sf(data = halseborder, size = 2, color = "tomato", fill = "tan1", alpha = 0.2, show.legend = TRUE)+
geom_sf(data = ff1, size=2, color = "deeppink4", fill = "deeppink", alpha = 0.4, lwd=1, show.legend = TRUE)+
geom_sf(data = ff2, size=2, color = "purple3", fill = "mediumpurple1", alpha = 0.4, lwd=1, show.legend = TRUE)+
geom_sf(data = vad, color = "tomato", fill = "tan1", alpha = 0.2)+
geom_sf_text(data = transformed_labels, aes(label = ID), size = 3, color="grey34") +
geom_sf_label_repel(inherit.aes = FALSE, data=transformed_places, aes(label = ID),force = 10, nudge_x = 2.5, seed = 1, size = 3) +
coord_sf(xlim = c(636224.5703011239, 674153.32093384),ylim = c(6421546.462142282, 6472000.544186506))+
xlab("Longitude")+ ylab("Latitude")+
scale_x_continuous(breaks = c(11.3, 11.6, 11.9),
labels = c("11.3Β°E", "11.6Β°E", "11.9Β°E")) +
annotation_scale(location = "bl", width_hint = 0.3)+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),panel.background = element_rect(fill = "papayawhip"))
```
As you can see I've added the argument `show.legend=TRUE` in the lines mapping the polygons but it just causes it to overlap the fjord legend like so:
[map with bad legend](https://i.stack.imgur.com/fcvAJ.png)
Does anyone know of a way to have separate legends, with one showing the fjord points and the other the different zones (halseborder, ff1 and ff2)? |
I think this happens due to the following:
[`AddAzureClientsCore`](https://github.com/Azure/azure-sdk-for-net/blob/10e43c2575616ec9e774f623bf1f91b195fe52d6/sdk/extensions/Microsoft.Extensions.Azure/src/AzureClientServiceCollectionExtensions.cs#L65) which is invoked by `AddAzureClients` tries to register `NullLoggerFactory` as `ILoggerFactory`:
```
collection.TryAddSingleton<ILoggerFactory, NullLoggerFactory>();
```
While `AddLogging` will [try to add `LoggerFactory`][1] as one:
```
services.TryAdd(ServiceDescriptor.Singleton<ILoggerFactory, LoggerFactory>());
```
`TryAdd{...}` methods add the registration only if the service type hasn't already been registered, so the first one called wins (unlike the `Add{...}` methods, where the last one wins).
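The first-one-wins semantics can be illustrated with a small Python analogy (this is just `dict.setdefault`, not the actual .NET API):

```python
# TryAddSingleton keeps the FIRST registration, like dict.setdefault;
# plain AddSingleton would overwrite, like ordinary assignment.
registrations = {}

registrations.setdefault("ILoggerFactory", "NullLoggerFactory")  # AddAzureClientsCore runs first
registrations.setdefault("ILoggerFactory", "LoggerFactory")      # AddLogging's TryAdd is now a no-op

print(registrations["ILoggerFactory"])
```

So whichever registration runs first decides which `ILoggerFactory` the container resolves.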
[1]: https://github.com/dotnet/runtime/blob/9d56a4c11853b3b5bb1eca5f1102a351d90607bf/src/libraries/Microsoft.Extensions.Logging/src/LoggingServiceCollectionExtensions.cs#L38 |
I am encountering a 'System.Text.Json.JsonException' with the message "'O' is an invalid start of a value" while attempting to deserialize MongoDB BSON documents using the ASP.NET Core MongoDB driver. The issue arises when using JsonSerializer.DeserializeAsync method. I have verified the JSON structure and MongoDB documents but am still facing this problem. Any insights or suggestions on resolving this issue would be greatly appreciated.
PS: I'm currently saving JSON documents with a dynamic schema; that's why I followed this approach.
[enter image description here][1]
```
[HttpGet("{id}")]
public async Task<ActionResult<ProjectIdentification>> GetProjectById(string id)
{
var project = await _projectService.GetProjectById(id);
if (project == null)
{
return NotFound();
}
return Ok(project.ToJson());
}
```
```
public class ProjectIdentification
{
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string? Id { get; set; } = ObjectId.GenerateNewId().ToString();
public string? version { get; set; }
public Object[] content { get; set; }
public Object[] columns { get; set; }
public Theme theme { get; set; }
}
```
```
public async Task AddProject(ProjectIdentification project)
{
string serializedJson = JsonSerializer.Serialize(project, new JsonSerializerOptions
{
DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
});
BsonDocument bsonDocument = BsonDocument.Parse(serializedJson);
await _collection.InsertOneAsync(bsonDocument);
}
public async Task<ProjectIdentification> GetProjectById(string id)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", ObjectId.Parse(id));
var result = await _collection.Find(filter).FirstOrDefaultAsync();
Console.Write(result);
if (result != null)
{
return JsonSerializer.Deserialize<ProjectIdentification>(result.ToJson());
}
return null;
}
```
I attempted to deserialize MongoDB BSON documents
[1]: https://i.stack.imgur.com/hoq9w.png |
You need to install the azure-devops package via `pip install azure-devops`.
Then use a Python script to get the info needed. My Python script is below:
```
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v7_0.git.git_client import GitClient
from azure.devops.v7_0.git.models import GitPullRequestSearchCriteria

# Replace with your Azure DevOps organization URL and PAT
organization_url = "https://dev.azure.com/{orgname}"
personal_access_token = "PAT"

# Create a connection
credentials = BasicAuthentication("", personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)

# Get the Git client
git_client = connection.clients.get_git_client()

# Replace with your repository URL
repo_url = "https://{orgname}@dev.azure.com/{orgname}/{projectname}/_git/{reponame}"

# Get all repositories
repositories = git_client.get_repositories(project=None, include_links=None, include_all_urls=None, include_hidden=None)

for repo in repositories:
    print(f"testreporemoteurl .......{repo.remote_url}")  # used to check the remote url

for repo in repositories:
    if repo.remote_url == repo_url:
        pull_requests = git_client.get_pull_requests(repository_id=repo.id, search_criteria=GitPullRequestSearchCriteria(status='Completed'))
        for pr in pull_requests:
            print(f"Pull Request #{pr.pull_request_id}: {pr.title}")
            print(f"Source Branch: {pr.source_ref_name}")
            print(f"Target Branch: {pr.target_ref_name}")
            print(f" - {pr}")  # check the whole PR content
            print("Commits:")
            print(f" - {pr.last_merge_source_commit.commit_id}")  # get the last merge source commit id
            print("\n")
```
Please replace `orgname`, `PAT`, `projectname`, and `reponame` with yours.
The output is below. I output the PR ID, title, source branch, and target branch, and also the whole `pr` content. The commits field is None, so I output the last merge source commit id instead. You can parse the `pr` content to get the info you need.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/G4EtE.png |
For a dataframe like this:
```
df = pd.DataFrame({"CARRIER": [1, 1, 2, 3, 3, 3]})
```
```
CARRIER
1 2
2 1
3 3
```
Use `groupby` and `value_counts`, then reset the index:
```
df = df.groupby("CARRIER").value_counts().reset_index()
```
```
CARRIER count
0 1 2
1 2 1
2 3 3
``` |
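If the frame really only has the one column, the same table can also be built without `groupby`; a sketch of an equivalent approach using the same `df` and column name as above:

```python
import pandas as pd

df = pd.DataFrame({"CARRIER": [1, 1, 2, 3, 3, 3]})

# Series.value_counts counts each value directly; rename_axis/reset_index
# then rebuilds the two-column CARRIER/count frame
counts = (
    df["CARRIER"]
    .value_counts()
    .sort_index()
    .rename_axis("CARRIER")
    .reset_index(name="count")
)
print(counts)
```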
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from smplx import SMPL
from einops import rearrange
from models.loss import Loss
from transformers import CLIPProcessor, CLIPModel
from utils.utils import get_keypoints
from models.module import MusicEncoderLayer, MotionDecoderLayer
import math


class GPT(nn.Module):
    def __init__(self, p=2, \
                 input_size=438, embed_size=512, num_layers=6, heads=8, forward_expansion=4, dropout=0.1, output_size=75):
        super(GPT, self).__init__()
        max_len, max_per = 450, 6
        self.motion_pos_emb_t = nn.Parameter(torch.zeros(max_len, embed_size))
        # self.motion_pos_emb_p = nn.Parameter(torch.zeros(max_per, embed_size))
        # self.music_pose_emb_t = nn.Parameter(torch.zeros(max_len, embed_size))
        self.music_emb = nn.Linear(input_size, embed_size)
        self.motion_emb = nn.Linear(output_size, embed_size)
        self.text_encoder = TextEncoder()
        self.music_encoder = MusicEncoder(embed_size, num_layers, heads, forward_expansion, dropout)
        self.motion_decoder = MotionDecoder(embed_size, num_layers, heads, forward_expansion, dropout, output_size)
        # self.mask = generate_square_subsequent_mask(max_len, 'cuda')
        # self.mask = self.mask.masked_fill(self.mask==0, float('-inf')).masked_fill(self.mask==1, float(0.0))
        self.loss = nn.MSELoss()
        # self.loss = Loss()

    def forward(self, text, music, motion):
        motion_src, motion_trg = motion[:, :, :-1, :], motion[:, :, 1:, :]
        b, p, t, _ = motion_src.shape
        text_encode = self.text_encoder(text)
        music_encode = self.music_encoder(self.music_emb(music[:, :-1, :])) \
            .reshape(b, 1, t, -1).repeat(1, p, 1, 1).reshape(b*p, t, -1)
        mask = torch.nn.Transformer().generate_square_subsequent_mask(t).transpose(0, 1).cuda()
        motion_emb = self.motion_emb(motion_src) + self.motion_pos_emb_t[:t, :].reshape(1, 1, t, -1).repeat(b, p, 1, 1)
        motion_pred = self.motion_decoder(motion_emb, music_encode, mask=mask).reshape(b, p, t, -1)
        loss = self.loss(motion_pred, motion_trg)
        return motion_pred, loss

    def inference(self, text, music, motion):
        self.eval()
        with torch.no_grad():
            music, motion = music[:, :-1, :], motion[:, :, :-1, :]
            b, p, t, c = motion.shape
            music_encode = self.music_encoder(self.music_emb(music)) \
                .reshape(b, 1, t, -1).repeat(1, p, 1, 1).reshape(b*p, t, -1)
            preds = torch.zeros(b, p, t, c).cuda()
            preds[:, :, 0, :] = motion[:, :, 0, :]
            for i in range(1, t):
                current_mask = torch.nn.Transformer().generate_square_subsequent_mask(i).transpose(0, 1).cuda()
                motion_emb = self.motion_emb(preds[:, :, :i, :]) + self.motion_pos_emb_t[:i, :].reshape(1, 1, i, -1).repeat(b, p, 1, 1)
                current_pred = self.motion_decoder(motion_emb, music_encode, mask=current_mask).reshape(b, p, i, -1)
                preds[:, :, i, :] += current_pred[:, :, i-1, :]
            motion_pred = preds.reshape(b, p, t, -1)
            print(motion_pred[0, 0, :10, :6])
            import sys
            sys.exit()
            pred_keypoints = get_keypoints(motion_pred)
            return {'keypoints': pred_keypoints, 'smpl': motion_pred}


class MusicEncoder(nn.Module):
    def __init__(self, embed_size, num_layers, heads, forward_expansion, dropout):
        super(MusicEncoder, self).__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=embed_size, nhead=heads, dim_feedforward=embed_size*forward_expansion, \
                                        dropout=dropout, batch_first=True) for _ in range(num_layers)]
        )

    def forward(self, x):
        b, t, _ = x.shape
        out = x
        for layer in self.layers:
            out = layer(out)
        return out


class MotionDecoder(nn.Module):
    def __init__(self, embed_size, num_layers, heads, forward_expansion, dropout, output_size):
        super(MotionDecoder, self).__init__()
        self.num_layers = num_layers
        self.fc_out = nn.Linear(embed_size, output_size)
        self.layers = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model=embed_size, nhead=heads, dim_feedforward=embed_size*forward_expansion, \
                                        dropout=dropout, batch_first=True) for _ in range(num_layers)]
        )

    def forward(self, motion_src, music_text_encode, mask=None):
        b, p, t, _ = motion_src.shape
        out = motion_src.reshape(b*p, t, -1)
        for layer in self.layers:
            out = layer(out, music_text_encode, tgt_mask=mask)
        return self.fc_out(out)
```
[](https://i.stack.imgur.com/wIXUO.png)
I am trying to complete a Music2Dance task: the dance is SMPL data, the music is a 439-dimensional feature, and I have aligned their FPS.
The training loss decreases, but the inference result is completely wrong: all frames after the second one are identical.
Above are the error log and my code; please help me find the mistake!
Thanks! |
nn.TransformerDecoder output the same result from the second frames |
|python|pytorch|transformer-model|encoder-decoder| |
I have about 8 **end-to-end-test** classes that **extend** my **abstract** SpringContextLoadingTest class, which looks like this:
```
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT)
public abstract class SpringContextLoadingTest extends AbstractTestNGSpringContextTests {
}
```
I have main Application class with the **@SpringBootApplication** annotation.
As I use TestNG, I have some classes in one group ("channel A") and some in the other ("channel B").
I made gradle tasks for running separate groups:
```
task runChannelA(type: Test) {
    forkEvery = 1
    useTestNG() {
        includeGroups "channel A"
    }
}
```
Without "forkEvery = 1", there is a problem with busy ports when running more than one test.
Thanks to this simple config below, I receive much more verbose output from gradle task execution:
```
tasks.withType(Test) {
    testLogging.showStandardStreams = true
}
```
Without it, it would have looked as if, after the tests are executed, the application hangs for 2 minutes while closing the EntityManagerFactory; but this flag revealed that Gradle picked up tests it wasn't asked to run. For every test, no matter which group it is in, Gradle logs:
```
Gradle Test Executor 22 STANDARD_OUT
2016-12-21 17:10:00.115 INFO --- [ Test worker] .b.t.c.SpringBootTestContextBootstrapper : Neither @ContextConfiguration nor @ContextHierarchy found for test class [mypackage.OtherTest], using SpringBootContextLoader
2016-12-21 17:10:00.141 INFO --- [ Test worker] o.s.t.c.support.AbstractContextLoader : Could not detect default resource locations for test class [mypackage.OtherTest]: no resource found for suffixes {-context.xml, Context.groovy}.
2016-12-21 17:10:00.143 INFO --- [ Test worker] t.c.s.AnnotationConfigContextLoaderUtils : Could not detect default configuration classes for test class [mypackage.OtherTest]: DbCongestionExploratoryTest does not declare any static, non-private, non-final, nested classes annotated with @Configuration.
2016-12-21 17:10:00.455 INFO --- [ Test worker] .b.t.c.SpringBootTestContextBootstrapper : Found @SpringBootConfiguration mypackage.Application for test class mypackage.OtherTest
2016-12-21 17:10:00.466 INFO --- [ Test worker] .b.t.c.SpringBootTestContextBootstrapper : Using TestExecutionListeners: [org.springframework.test.context.web.ServletTestExecutionListener@9404cc4, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener@46876feb, org.springframework.test.context.support.DependencyInjectionTestExecutionListener@dd46df5, org.springframework.test.context.support.DirtiesContextTestExecutionListener@49e2c374]
```
And it takes so much time because I have a lot of other tests. This happens after I can already see in IntelliJ that the tests I wanted to execute have passed. For example, I see after 25 seconds that the tests have passed, but because Gradle then does whatever it is doing with every other test set up this way in my project, runChannelA takes more than 3 minutes. The funny thing is I can just stop the process during this strange behaviour, the progress bar in IntelliJ fills up to the end, and it is as if nothing was going on: everything green and great.
Can someone help me with this? |
How to implement a generic React component with react-hook-form and Typescript support which displays all errors for certain object? |
|reactjs|typescript|react-hook-form| |
I am using SoX to create short (10 ms) synth sine tones; this is my command:
```
/usr/bin/sox -V -r 44100 -n -b 64 -c 1 file.wav synth 0.1 sine 200 vol -2.0dB
```
Now when I create 3 sine wave files and combine them all with
```
/usr/bin/sox file1.wav file2.wav file3.wav final.wav
```
I get gaps between the files, and I don't know why. When I open, for example, file1.wav, I also see a short gap at the front and at the end of the file.
How can I create a sine of exactly 0.1 seconds without gaps at the front and end?
And my second question: is there also a way to create e.g. 10 sine waves with one SoX command? Something like `sox f1 200 0.1, f2 210 0.1, f3 220 0.1, ...`: first 200 Hz for 10 ms, then 210 Hz for 10 ms, then 220 Hz for 10 ms.
Thank you so much, many greets.
I have tried several different options in SoX, but each single sine file always looks like this:
|
create sine wave with sox without a gap |
|linux|sox| |
I want to add an object `newTask` to a list of `builtTask` objects called `builtTasks`.
I get a NullReferenceException pointing to `builtTasks.Add(newTask)`. The Debug.Log calls return the right values, but it seems as if the code doesn't know that `builtTasks` actually exists. Please don't flame me, I'm new to Unity.
The class `task` has two fields, a string and a string array; the class `item` also has two fields, a string and a string array.
Both arrays, `items` and `tasks`, were filled in Unity itself. Please help me, I don't get it.
```
public item[] items;
public task[] tasks;

private static List<builtTask> builtTasks;
private static List<builtTask> runningTasks;

private void Start()
{
    if (builtTasks == null)
    {
        for (int i = 0; i < tasks.Length; i++)
        {
            string championName = tasks[i].championName;
            string itemName;
            List<item> items1 = new List<item> {};
            for (int j = 0; j < tasks[i].itemBuild.Length; j++)
            {
                itemName = tasks[i].itemBuild[j];
                for (int k = 0; k < items.Length; k++)
                {
                    if (itemName == items[k].itemName)
                    {
                        items1.Add(items[k]);
                        Debug.Log(items[k].itemName);
                    }
                }
            }
            builtTask newTask = new builtTask(championName, items1.ToArray());
            Debug.Log(newTask.championName);
            Debug.Log(newTask.itemBuild[0].itemName);
            Debug.Log(newTask.itemBuild[1].itemName);
            Debug.Log(newTask.itemBuild[2].itemName);
            builtTasks.Add(newTask);
        }
    }
}
```
I tried fetching an object of class `item` from the `items` array using the string `tasks[i].itemBuild[j]`, and it works: the Debug.Log calls print the right values of the constructed `newTask` object. But somehow I can't add the object to `builtTasks`; I get a NullReferenceException. |
unity C# null reference exception when trying to add object to list |
As @rici said, but in other words [1] (p. 179): "any derivation has an equivalent leftmost and an equivalent rightmost derivation."
Furthermore, @rici wrote
> This has nothing to do with which way the parse tree leans. The same parse tree is produced regardless. (Or the same parse trees, if the grammar is ambiguous.)
Interestingly, your grammar is ambiguous, as can be seen for the string `acccc`:
[![enter image description here][2]][2]
[1] Hopcroft et al., Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, 3rd ed.
[2]: https://i.stack.imgur.com/KNoPV.png |
I had the same behaviour with a nanotec drive unit. I could update the object dictionary with curl:
```
curl -X POST http://192.168.2.21/od/3701/00 -d '"0002"'
```
but not with the Python requests library.
I overcame this by writing the requests manually using the socket library. The code for the GET and POST requests is below.
```
import socket


def POST(host, target, value):
    '''
    host = base url or ip.
    target = everything after base url
    value = content of body

    For example POST('192.168.2.2', '/temp/od', '"0006"')
    would send a POST request to 192.168.2.2/temp/od with a
    body containing the value "0006" (including the speech marks).
    '''
    post_request = f"POST {target} HTTP/1.0\r\nHost: {host}\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: {len(value)}\r\n\r\n{value}"

    # Create a socket object
    # The two input parameters are the defaults.
    # They specify the type of address and socket.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Connect to the server
    s.connect((host, 80))  # Assuming the server is listening on port 80

    print(post_request)
    print("---" * 5)

    # Send the POST request
    s.send(post_request.encode())

    # Receive the response
    response = s.recv(4096)

    # Print the response
    print(response.decode())

    # Close the socket
    s.close()


def GET(host, target):
    '''
    host = base url or ip.
    target = everything after base url
    '''
    # Prepare the HTTP GET request
    get_request = f"GET {target} HTTP/1.0\r\nHost: {host}\r\n\r\n"

    # Create a socket object
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Connect to the drive unit
    s.connect((host, 80))  # Assuming the server is listening on port 80

    # Send the GET request
    s.send(get_request.encode())

    # Receive the response, 4096 is just recommended by docs
    response = s.recv(4096)

    # Print the response
    decoded = response.decode()
    print(decoded)

    # Close the socket
    s.close()

    # Return the body content
    return decoded.split('\r\n\r\n')[-1]
```
With these functions, I was able to update the object dictionary with POST function:
```python
host = '192.168.2.21'
target = '/od/3701/00'
POST(host, target, '"0002"')
```
I could read the object dictionary with the GET function:
```python
response = GET(host, target)
```
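If you want to sanity-check this raw-socket approach without the drive unit, you can point the same kind of code at Python's built-in `http.server`; the endpoint and payload below are made up for the test:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    # stand-in for the drive unit: answer every GET with a fixed body
    def do_GET(self):
        body = b'"0002"'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# same raw GET as above, just with a configurable port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.sendall(f"GET /od/3701/00 HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
response = b""
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    response += chunk
s.close()
server.shutdown()

body = response.decode().split("\r\n\r\n")[-1]
print(body)
```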
|
The reason you see the loading image taking time to appear is simply because it needs to load.
The image is not that big, roughly 200 KB, but the service providing it is slow: on my office fiber connection it takes a full second to load. I suggest compressing it a bit more using WebP, which could save you some bytes. I'd also serve it from the same domain as your website instead of a different one; currently a new connection has to be opened just for this asset, when it could reuse an already-open connection used for the other assets.
Also, it seems that the other images in the body are loading before your loader image, which further delays when the loader starts downloading. I suggest you leverage lazy loading as much as possible: https://developer.mozilla.org/en-US/docs/Web/Performance/Lazy_loading
Also, if you want to make sure the image loads ultra fast, you can embed its data directly in the HTML: put a data URL in the `src` attribute. That URL contains the whole image, so no separate request is needed. Strictly speaking it still has to download, but as part of the HTML, so the HTML gets bigger and slower to fetch while the image is ready to display immediately. See this screenshot for how to copy the image's data URL in the developer tools:
[![enter image description here][1]][1]
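If you prefer building the data URL offline instead of copying it from the browser, here is a minimal sketch (the file name is hypothetical):

```python
import base64
import mimetypes

def to_data_url(path: str) -> str:
    # base64-encode the file and prefix its MIME type so the result
    # can be dropped straight into an <img src="..."> attribute
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```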
IMO, you are attacking this from the wrong angle. You are trying to add a loading animation to make the user wait while the website loads, but the website loads so fast that the loading animation is the one slowing it down. I'd really suggest removing it and optimizing the first paint instead.
[1]: https://i.stack.imgur.com/Mpybq.png |
I have a test case which uses react-dom 16.8.
`const mockDOM = ReactTestUtils.renderIntoDocument(Component())` returns null, but with react-dom 16.7 it works fine. Any fixes for that?
It seems like the method's implementation has changed |
React renderIntoDocument() returns null |
|reactjs|jestjs|reactjs-testutils| |
I think this question is not really a PySpark question, since you don't have a partitioned folder structure; it is more a general Python question. You are looking for the max among folders on the Databricks filesystem, `dbfs`.
As the first answer says, you can use the Hadoop filesystem API.
Another way, which I prefer because it is more flexible in handling local paths and dbfs paths, is to change the dbfs path to a local path. Imagine you work both locally and in Databricks: the paths for accessing the data are different, and if you want to test locally with your IDE first, the above solution might not work until you set up Hive on your local machine.
What you can do is replace the dbfs path with a normal file system path and work with the Python libraries.
```
import os
from pathlib import Path
from typing import Union


def get_local_path(path: Union[str, Path]) -> str:
    """
    Transforms a potential dbfs path to a
    path accessible by standard file system operations
    :param path: Path to transform to local path
    :return: The local path
    """
    return str(path).replace("dbfs:", "/dbfs")


dbfs_path = "dbfs:/path_to/xyz"
local_path = get_local_path(dbfs_path)
```
The above function replaces i.e. ```dbfs:/path_to/xyz``` by ```/dbfs/path_to/xyz```.
From now on you can use the python ```os``` or ```pathlib``` functionalities.
```
def find_max_in_folders(path) -> int:
    """
    Finds the maximum folder number.
    :param path: The path of the folders.
    :return: The max number in the folders.
    """
    dirs = [int(dir_name) for dir_name in os.listdir(path) if os.path.isdir(os.path.join(path, dir_name))]
    return max(dirs)


max_month = find_max_in_folders(local_path)
month_path = os.path.join(local_path, f"month/{max_month}")
max_day = find_max_in_folders(month_path)

print(f"max_month: {max_month}")
print(f"max_day: {max_day}")
```
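To see the scan working end to end, here is a self-contained sketch using a temporary directory tree in place of the (assumed) `month/<m>/day/<d>` layout; adjust the paths to match your folders:

```python
import os
import tempfile

# build a throwaway tree: <root>/month/<m>/day/<d>
root = tempfile.mkdtemp()
for m in (1, 7, 11):
    for d in (3, 9):
        os.makedirs(os.path.join(root, "month", str(m), "day", str(d)))

def max_subfolder(path: str) -> int:
    # numeric max over the directory names directly below `path`
    return max(int(name) for name in os.listdir(path)
               if os.path.isdir(os.path.join(path, name)))

max_month = max_subfolder(os.path.join(root, "month"))
max_day = max_subfolder(os.path.join(root, "month", str(max_month), "day"))
print(max_month, max_day)
```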
Also, even if the data were partitioned properly, like `month=...` and `day=...`, it would still not be a good idea to load the data from the root path: it forces Spark to read and scan all underlying data first. If your folders grow and the data underneath gets big, this is the worst way to deal with the problem.
In case someone lands on this answer looking for a partition-pruning solution: it is better to scan the filesystem yourself (as above) if you are not working with watermarks and are only looking for the latest data dumps.
|
I use an Excel spreadsheet with an embedded function to return hyperlinked cells. When a cell isn't hyperlinked, the sheet returns `#VALUE!`, and I'd like it to return an empty string (`""`) instead.
Here's the function, and it works great; I just want to make it return the URL if the cell has a hyperlink, and `""` if not:
```
Function URL(Hyperlink As Range)
    URL = Hyperlink.Hyperlinks(1).Address
End Function
``` |
I use a VB function to extract a hyperlink in Excel, need to return either the Link, or "" if it's not a hyperlink |
|excel|vb.net|hyperlink| |
@Jonathan Nolan's code is good for setting values for numeric filters. Here is a version for a date filter.
```
library(crosstalk)
library(htmltools)

# dummy data
dat <- data.frame(
  date = Sys.Date() + 0:4,
  letter = letters[1:5],
  number = 0:4 + 1
)

# shared data
dat_shared <- crosstalk::SharedData$new(dat)

# custom js
custom_js <- shiny::tags$script(paste0('
$(document).ready(function() {
  var sliderInstance = $("#name_of_slider input").data("ionRangeSlider");
  sliderInstance.update({
    from: ', as.numeric(as.POSIXct(Sys.Date() + 1)) * 1000, ',
    to: ', as.numeric(as.POSIXct(Sys.Date() + 2)) * 1000, '
  });
});
'))

filter_date <- filter_slider("name_of_slider", "Slider label", dat_shared, ~date)

htmltools::browsable(htmltools::tagList(
  custom_js, filter_date
))
``` |
I would like the "left-side" of my project to appear above the "right-side" when the screen is small. The issue I am running into is that, as the screen shrinks, the left side slowly becomes covered by the right side until it briefly, in a very distorted way, jumps to the right side of the screen and then disappears completely. I am guessing it becomes covered by the right side of the screen, although to be honest I have no idea why the left side disappears.
I am using media queries to specify the breakpoints for my website, though I will likely change the current breakpoints depending on what the screen looks like once it is responding correctly.
```
@media (max-width: 600px) {
  .left-side {
    flex-direction: column; /* Change to column layout on smaller screens */
  }

  .right-side {
    width: 100%; /* Occupy full width in column layout */
    order: -1; /* Move right-side below left-side in column layout */
    margin-top: 20px; /* Add some space between left and right sides */
  }
}
```
I am not sure why this results in the left side disappearing. I will attach the rest of my templates and style below, for reference.
```
<template>
<div class="content-wrapper">
<div class="row">
<div class="left-side col-sm-4">
<div class="tele-panel">
<h1 class="header-title Display-4">TeleTrabajo</h1>
<div class="callout calendar-day">
<div class="grid-x">
<div class="shrink cell">
<h3 class="text-primary margin-left">{{ this.momentInstance.format('DD') }}
<h3 class="text-primary text d-inline"></h3>
</h3>
</div>
<div class="auto cell">
<h3>{{ this.momentInstance.format('[de] MMMM [de] ') }}
<h3 class="text-primary text d-inline"> {{ this.momentInstance.format('YYYY') }}
</h3>
</h3>
<h3>{{ this.momentInstance.format('dddd ') }}
<h3 class="text-primary text d-inline"> {{ this.momentInstance.format('HH:mm:ss') }}
</h3>
</h3>
</div>
</div>
</div>
<img loading="lazy"
srcSet="https:..."
class="logo" />
</div>
</div>
<div class="divider-line"></div>
<div class="right-side col-sm-8">
<div class="timbres mt-3 mb-3">
<app-button :label="$t('Ingreso')" class-name="btn-primary" :is-disabled="false"
@submit="btnSubmit" />
<app-button :label="$t('Almuerzo')" class-name="btn-primary" :is-disabled="false"
@submit="btnSubmit" />
<app-button :label="$t('Regreso')" class-name="btn-primary" :is-disabled="false"
@submit="btnSubmit" />
<app-button :label="$t('Salido')" class-name="btn-primary" :is-disabled="false"
@submit="btnSubmit" />
<div class="search d-flex justify-content-end">
<div class="form-group form-group-with-search d-flex">
<span :key="'search'" class="form-control-feedback">
<app-icon name="search" />
</span>
<input type="text" class="form-control" v-model="searchValue" :placeholder="$t('search')"
@keydown.enter.prevent="getSearchValue" />
</div>
</div>
</div>
<app-table :id="'biometricos-table'" :options="options" />
</div>
<!-- <div class="buttons-section col-sm-6">
<div class="horas-square ">horas</div>
</div> -->
</div>
</div>
</template>
<style>
html,
body {
margin: 0;
padding: 0;
}
.left-side {
display: flex;
}
@media (max-width: 600px) {
.left-side {
flex-direction: column; /* Change to column layout on smaller screens */
}
.right-side {
width: 100%; /* Occupy full width in column layout */
order: -1; /* Move right-side below left-side in column layout */
margin-top: 20px; /* Add some space between left and right sides */
}
}
.tele-panel {
display: flex;
flex-direction: column;
position: fixed;
}
.header-title {
font-family: Poppins, sans-serif;
color: #555555;
text-align: center;
}
.callout.calendar-day {
padding: .8rem 1.9rem;
margin-top: 10vh;
text-align: right;
}
.callout.calendar-day h1 {
margin: 0 1rem 0 0;
}
.callout.calendar-day h6 {
margin: 0;
}
.callout.calendar-day h1.light {
color: #555555;
}
.logo {
aspect-ratio: 1.85;
margin-left: -20px;
width: 200px;
position: fixed;
bottom: 0;
}
@media (max-width: 600px) {
.logo {
display: none
}
}
.divider-line {
border-left: .1px solid rgba(0, 0, 0, 0.25);
height: 100%;
position: absolute;
left: 45%;
top: 0;
bottom: 0;
}
@media (max-width: 1300px) {
.divider-line {
display: none;
}
}
.timbres {
display: flex;
flex-direction: row;
justify-content: space-between;
align-items: center;
}
.timbres .btn-primary {
margin-right: 20px;
flex: 1;
}
</style>
``` |
How to orient the right column section underneath the left column section when the screen size is shrunk (responsive design)? |
|html|css|responsive-design|vuejs3|bootstrap-5| |
null |
The JSON parameter never reaches the mail because it is not added to the mail body. The sample code below sends an email with the order details serialized as JSON.
```c#
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Communication.Email;
using System.Text.Json;
namespace SendEmail
{
internal class Program
{
static async Task Main(string[] args)
{
var sender = "DoNotReply@627f8.azurecomm.net";
var recipient = "sampath80@gmail.com";
var order = new Order
{
OrderId = "12345",
CustomerName = "David",
Items = new[]
{
new Item { Name = "product one", Price = "1.99" },
new Item { Name = "product two", Price = "2.99" },
new Item { Name = "product three", Price = "3.99" }
}
};
var jsonContent = JsonSerializer.Serialize(order.Items);
var subject = "Your Order";
try
{
string connectionString = "endpoint=https://sampath8.unitedstates.communication.azure.com/;accesskey=SharedAccessKey ";
EmailClient emailClient = new EmailClient(connectionString);
Console.WriteLine("Sending email...");
EmailSendOperation emailSendOperation = await emailClient.SendAsync(
Azure.WaitUntil.Completed,
sender,
recipient,
subject,
jsonContent);
EmailSendResult statusMonitor = emailSendOperation.Value;
Console.WriteLine($"Email Sent. Status = {emailSendOperation.Value.Status}");
string operationId = emailSendOperation.Id;
Console.WriteLine($"Email operation id = {operationId}");
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Email send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
}
}
}
public class Order
{
public string? OrderId { get; set; }
public string? CustomerName { get; set; }
public Item[] Items { get; set; }
}
public class Item
{
public string? Name { get; set; }
public string? Price { get; set; }
    }
}
```
**Output:**


The code below sends an email containing an HTML order summary and does so asynchronously (note that `RenderOrderHtml` uses LINQ's `Select`, so `using System.Linq;` is required).
```c#
var sender = "DoNotReply@627f8.azurecomm.net";
var recipient = "sampath80@gmail.com";
var order = new Order
{
OrderId = "12345",
CustomerName = "David",
Items = new[]
{
new Item { Name = "product one", Price = "1.99" },
new Item { Name = "product two", Price = "2.99" },
new Item { Name = "product three", Price = "3.99" }
}
};
var htmlContent = RenderOrderHtml(order);
var subject = "Your Order";
try
{
string connectionString = "endpoint=https://sampath8.unitedstates.communication.azure.com/;accesskey=SharedAccessKey ";
EmailClient emailClient = new EmailClient(connectionString);
Console.WriteLine("Sending email...");
EmailSendOperation emailSendOperation = await emailClient.SendAsync(
Azure.WaitUntil.Completed,
sender,
recipient,
subject,
htmlContent);
EmailSendResult statusMonitor = emailSendOperation.Value;
Console.WriteLine($"Email Sent. Status = {emailSendOperation.Value.Status}");
string operationId = emailSendOperation.Id;
Console.WriteLine($"Email operation id = {operationId}");
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Email send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
}
}
static string RenderOrderHtml(Order order)
{
return $@"
<html>
<head>
<title>Your Order</title>
</head>
<body>
<div>
<h1>Order: {order.OrderId}</h1>
<div>
<p>Hi {order.CustomerName},</p>
<p>Here is your order:</p>
<table>
<tr>
<th>Item Name</th>
<th>Price</th>
</tr>
{string.Join("", order.Items.Select(item => $@"
<tr>
<td>{item.Name}</td>
<td>{item.Price}</td>
</tr>"))}
</table>
</div>
</div>
</body>
    </html>";
}
```
**Output:**

**Updated:**
The code sample below encodes the metadata into the email subject or body.
```c#
var sender = "DoNotReply@627f8.azurecomm.net";
var recipient = "sampath80@gmail.com";
var order = new Order
{
OrderId = "12345",
CustomerName = "David",
Items = new[]
{
new Item { Name = "product one", Price = "1.99" },
new Item { Name = "product two", Price = "2.99" },
new Item { Name = "product three", Price = "3.99" }
}
};
var jsonContent = JsonSerializer.Serialize(order);
var subject = "Your Order";
var metadata = jsonContent;
try
{
string connectionString = "endpoint=https://sampath8.unitedstates.communication.azure.com/;accesskey=SharedAccessKey ";
EmailClient emailClient = new EmailClient(connectionString);
Console.WriteLine("Sending email...");
// Add metadata to the subject
subject += $" | Metadata: {metadata}";
EmailSendOperation emailSendOperation = await emailClient.SendAsync(
Azure.WaitUntil.Completed,
sender,
recipient,
subject,
htmlContent: "<p>Your email content here</p>",
plainTextContent: "Your email content here");
EmailSendResult statusMonitor = emailSendOperation.Value;
Console.WriteLine($"Email Sent. Status = {emailSendOperation.Value.Status}");
string operationId = emailSendOperation.Id;
Console.WriteLine($"Email operation id = {operationId}");
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Email send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
}
}
```
**Output:**
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/0xCgZ.png |
null |
|excel|vba|hyperlink| |
My problem is that there is a generic class called `TypedProducer<K, V>` (for key and value) and I have to instantiate it from configuration (say someone tells me at runtime that the key is `Integer` and the value is `String`, but it can really be any `Object` subtype), so I don't know the types beforehand.
How can I instantiate `TypedProducer` by passing parameters? Or even create `SourceTypedClassProvider<keyType, valueType>` in the first place from `.class` objects?
```java
public class SourceTypedClassProvider {
    public static TypedProducer<?, ?> instantiateTypedProducer(
            Class<?> keyType, Class<?> valueType) {
        // should return an instance of TypedProducer<keyType, valueType>
    }
}
```
I know there's something like `TypeToken` in guava, but would it help me at all in this scenario when types have to be first gotten from configuration?
EDIT:
To be honest, the `TypedProducer` implementation shouldn't make a difference (you can treat it as if I were instantiating e.g. a `Map<K, V>`), but if it makes it easier, part of the implementation is below:
```java
public class TypedProducer<K, V> {
    ExternalApiProducer<K, V> externalApiProducer;

    public TypedProducer() {
        externalApiProducer = new ExternalApiProducer<>();
    }

    public Map<K, V> produceRecords() {
        // some code that calls externalApiProducer to produce records
    }
}
```
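For what it's worth, one direction I looked at is a generic factory method that binds `K` and `V` from the `Class` tokens. This is only a hedged sketch with `TypedProducer` stubbed out (the stub body is my assumption, not the real implementation); because generics are erased at runtime, the tokens cannot change the object that gets built, they only give the caller a correctly typed reference:

```java
import java.util.HashMap;
import java.util.Map;

// Stub standing in for the real TypedProducer (assumption for this sketch).
class TypedProducer<K, V> {
    public Map<K, V> produceRecords() {
        return new HashMap<>(); // stand-in for the external API call
    }
}

class SourceTypedClassProvider {
    public static <K, V> TypedProducer<K, V> instantiateTypedProducer(
            Class<K> keyType, Class<V> valueType) {
        // The Class tokens are unused at runtime (erasure: every
        // TypedProducer is the same raw class), but the generic signature
        // binds K and V for the compiler; keep the tokens if you need to
        // cast or validate values later.
        return new TypedProducer<K, V>();
    }
}
```

Whether this helps depends on whether anything downstream truly needs the types at runtime, or only the compiler does.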
|
[enter image description here][1] There is a red line showing in my HTML boilerplate in VS Code. This is what appears when I hover over it: "Unknown word (CssSyntaxError) stylelint(CssSyntaxError)".
Beginner-level stuff, I know, but I will be glad if someone can help.
I tried to Google the CSS syntax error, re-installed VS Code and used its built-in debug option.
UPDATE: I just uninstalled the Stylelint and Stylelint Plus extensions and the error appears to be gone.
[1]: https://i.stack.imgur.com/ozsZC.png |
### Maximum line length
According to [An Introduction to R](https://cran.r-project.org/doc/manuals/R-intro.html#R-commands_003b-case-sensitivity-etc):
> Command lines entered at the console are limited[4] to about 4095 bytes (not characters).
The [4] is a footnote which says:
> some of the consoles will not allow you to enter more, and amongst those which do some will silently discard the excess and some will use it as the start of the next line.
This behaviour seems inconsistent across IDEs. I just created a line of >220k ASCII characters (so >220k bytes) and was able to parse it in VS Code.
```
# real version repeats this string until it is over 220k characters long
nchar(
"very_long_string_very_long_string_very_long_string_very_long_string_"
)
```
VS Code output:
```r
[1] 221760
```
But I tried the same line in RStudio and it could not handle it. RStudio output:
```
tringvery_long_stringvery_long_stringvery_long_string"
+ )
+
+
```
RStudio seems to be exhibiting the behaviour in the footnote and silently discarding the excess, as it appears to be waiting for the quote to be closed.
In any case, despite being in the R manual, the issue seems to be IDE related. Another reason to use VS Code, if anyone needed one.
### Maximum vector length
There is also a maximum length for a vector which depends on whether you're using 32/64 bit R, the R version and the type of vector. It's probably 2^52-1 elements for an integer or numeric vector with a reasonably modern R version on a computer that isn't very old. See [Long vectors](https://cran.r-project.org/doc/manuals/r-release/R-ints.html#Long-vectors) in R Internals. |
Here is the column in the entity:
```
@SequenceGenerator(name = "SLLNO_GENERATOR", sequenceName = Config.DB_SCHEMA_MERCH_PAY+".MERCHANT_PAY_DETAILS_SEQ", allocationSize=1)
@GeneratedValue(strategy = GenerationType.AUTO, generator = "SLLNO_GENERATOR")
@Column(name="SLNO")
private Long slno; //NUMBER
```
In the database this column has a NOT NULL constraint,
but it is not the primary key.
The sequence exists in the database; all the permissions are granted.
At runtime, I need to insert data into this table.
Since the value of this column should be generated by the sequence, it gives the error below:
```
Mar 26, 2024 7:00:38 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
WARN: SQL Error: 1400, SQLState: 23000
Mar 26, 2024 7:00:38 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
ERROR: ORA-01400: cannot insert NULL into ("MMONEY_MERCHPAY"."MERCHANT_PAY_DETAILS"."SLNO")
Mar 26, 2024 7:00:38 PM org.hibernate.engine.jdbc.batch.internal.AbstractBatchImpl release
INFO: HHH000010: On release of batch it still contained JDBC statements
org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [null]
```
Does anyone know the reason for this issue? |
I agree that the documentation is confusing; you can use the `synchronize` option. I know it is not mentioned in the documentation:
```
@ViewEntity({
  name: 'existing_view',
  synchronize: false
})
```
or
```
@ViewEntity('existing_view', {
  synchronize: false
})
```
First off, apologies that this is Win32 code, but please note that the OP
has asked for code on the MS Windows platform. That said, this code CAN be ported to other platforms. Confidence is high that you only need to replace a handful of the basic APIs used in this code:
```none
_findfirst64() -- start the file-finding process
_findnext64() -- continue the file-finding process
_findclose() -- end the file-finding process(usually mandatory)
```
...and the related data structure. Every platform has some form of these functions, yes?
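As a sketch of that porting claim (my assumption, not part of the tested program below): on POSIX systems, `opendir()`/`readdir()`/`closedir()` plus `stat()` stand in for the `_find*()` family. A minimal recursive equivalent, without the diagnostics, might look like:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* POSIX sketch of get_dir_size(): sum the sizes of all files under path. */
long long get_dir_size(const char *path, long long sum)
{
    DIR *dir = opendir(path);
    struct dirent *entry;
    char child[4096];
    struct stat st;

    if (dir == NULL)
        return sum;                 /* unreadable/missing dir: contribute nothing */

    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;               /* skip self/parent references */
        snprintf(child, sizeof child, "%s/%s", path, entry->d_name);
        if (stat(child, &st) != 0)
            continue;               /* unreadable entry: skip it */
        if (S_ISDIR(st.st_mode))
            sum = get_dir_size(child, sum);   /* recurse into subfolder */
        else
            sum += (long long)st.st_size;     /* add the file's size */
    }
    closedir(dir);
    return sum;
}
```

Note this follows symlinked directories via `stat()`; use `lstat()` if that is not wanted.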
Not much more needs to be said about the code. It is copiously commented and leverages printf() heavily so you can always see what's going on. When you go live you'll want to comment-out all those printf()s.
The code will faithfully sum all the files on a drive ( because it will recurse through *everything* ) or it will act upon a single folder. It just depends on how you call get_dir_size() ( please see its header notes ).
This code is compiled and tested by me today. Output is an actual screen scrape of the console window while it ran.
```c
#define _CRT_SECURE_NO_DEPRECATE
#if 0
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#else
#define MAX_PATH 260
typedef int BOOL;
#define FALSE 0
#define TRUE 1
#endif
#include <string.h> /* strcmp() */
#include <stdio.h>  /* printf() */
#include <stdlib.h> /* malloc() */
#include <conio.h>  /* getch() */
#include <io.h>     /* findfirst(), findnext()... */

#define IS_SLASH_TOKEN(x) (((x)=='\\') || ((x)=='/'))
#define LAST_CHAR(s) (((char *)(s))[strlen((char *)(s)) - 1])

/*---------------------------------------------------------------------------------------------
 valid_directory_name()
 - helper for get_dir_size()
 Once upon a time directories beginning with dots were only references to the working or
 parent directories. But now software vendors are creating directories for file
 storage that may also begin with a dot.
 This function will discern whether a directory is for storage or is a system reference.
 2022.12.02 Created
 2023.07.25 Code streamlined but logic unchanged
*--------------------------------------------------------------------------------------------*/
BOOL valid_directory_name(const char *psdir)
{
    int iii = -1;
    if(psdir)
    {
        /* Is this a dot (.) (working directory) or dot dot (..) parent directory entry?
         * Or is it a legit directory that merely begins with a dot or two?
         */
        while(psdir[++iii] == '.');
        return (psdir[iii] != '\0');
    }
    return FALSE;
}

/*----------------------------------------------------------------------------
 dir_strcat()
 - helper for get_dir_size()
 Properly combines two directory names to form a path, caring for the
 slash token ('\') as required. Make this a forward slash for Unix.
*---------------------------------------------------------------------------*/
void dir_strcat(char *dest, const char *upperDir, const char *lowerDir)
{
    if(IS_SLASH_TOKEN(LAST_CHAR(upperDir)))
        sprintf(dest, "%s%s",   upperDir, lowerDir);
    else
        sprintf(dest, "%s\\%s", upperDir, lowerDir);
}

/*------------------------------------------------------------------------------------------
 get_dir_size()
 Given a folder name, recurses into the folder summing all the file sizes
 found. If the folder contains a sub folder, that folder is parsed too.
 This continues until no more sub folders are found and all the file sizes
 have been summed.
 Call with a valid path and the long long sum parameter set to zero, ie:
     size = get_dir_size("myfolder", 0 );
 This is also legit (start at root). This will sum all the file sizes on the drive:
     size = get_dir_size("\\", 0 );
 RETURNS: sum of all file sizes found in the folder.
*-------------------------------------------------------------------------------------------*/
long long get_dir_size(char *path, long long sum)
{
    char *wholepath = malloc(MAX_PATH+1);
    struct _finddatai64_t sd_fdata = {0};
    intptr_t handle;
    if( wholepath )
    {
        printf("ENTRY: get_dir_size(%s) entry, malloc OK.\n", path);
        printf("\tCurrent sum %.3f MB.\n", ((double)sum/(1024.0*1024.0)));
        dir_strcat(wholepath, path, "*.*");
        handle = _findfirst64(wholepath, &sd_fdata);
        if(handle != -1)
        {
            do
            {
                if((sd_fdata.attrib & _A_SUBDIR) == _A_SUBDIR)
                {
                    /* If this is a directory ...*/
                    if(valid_directory_name(sd_fdata.name))
                    {
                        /* ...and this is a valid directory, recurse into it. */
                        dir_strcat(wholepath, path, sd_fdata.name);
                        if(LAST_CHAR(wholepath) != '\\')
                            strcat(wholepath, "\\");
                        sum = get_dir_size( wholepath, sum);
                    }
                }
                else /* This is a file. Add its size to the running sum */
                {
                    printf("get_dir_size() adding size %.3f KB for file %s\n",
                           (double)sd_fdata.size/1024.0, sd_fdata.name);
                    sum += sd_fdata.size;
                }
            } while(!_findnext64(handle, &sd_fdata));
            _findclose(handle);
        }
        else
        {
            printf("get_dir_size(%s) BAD HANDLE!!!!\n", path);
        }
        free(wholepath);
    }
    else
    {
        printf("get_dir_size(%s) MEMORY ALLOCATION ERROR!!!!\n", path);
    }
    printf("get_dir_size(%s) Returning sum: %.3f MB.\n",
           path, (double)sum/(1024.0*1024.0));
    return sum;
}

/*----------------------------------------------------------------------------
 main()
*---------------------------------------------------------------------------*/
int main()
{
    char path_only[MAX_PATH] = "F:\\PROJECTS\\32bit\\_HELP\\C\\H160";
    long long current_space_used;
    printf("\nStarting path will be: [%s]\n", path_only);
    current_space_used = get_dir_size( path_only, 0 );
    printf("\n-----------> This path currently uses %.3lf MB <------------ \n",
           current_space_used/(1024.0*1024.0));
    _getch();
    return 0;
}
```
Output:
```none
Starting path will be: [F:\PROJECTS\32bit\_HELP\C\H160]
ENTRY: get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160) entry, malloc OK.
Current sum 0.000 MB.
get_dir_size() adding size 0.291 KB for file CLEAN.bat
get_dir_size() adding size 147.000 KB for file CODE.bsc
------------ some lines deleted for brevity -------------
get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Debug\) Returning sum: 0.916 MB.
get_dir_size() adding size 0.191 KB for file EDIT_ALL.bat
get_dir_size() adding size 7.249 KB for file MAIN.c
get_dir_size() adding size 0.041 KB for file run.bat
ENTRY: get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Versions\) entry, malloc OK.
Current sum 0.923 MB.
get_dir_size() adding size 13.497 KB for file CODE.001.zip
get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160\Versions\) Returning sum: 0.936 MB.
get_dir_size(F:\PROJECTS\32bit\_HELP\C\H160) Returning sum: 0.936 MB.
-----------> This path currently uses 0.936 MB <------------
```
|
I've been trying to implement CSP on my Next.js application, and I've managed to get it working when running `next build` and `next start`, but for some reason when running `next dev` it doesn't render at all, and the following error is logged on the console:
```
Uncaught EvalError: Refused to evaluate a string as JavaScript because 'unsafe-eval' is not an allowed source of script in the following Content Security Policy directive: "script-src 'self'".
at ./node_modules/next/dist/compiled/@next/react-refresh-utils/dist/runtime.js (react-refresh.js?ts=1710261451116:30:26)
```
I tried following the suggestions in the [Next.js docs][1] as well as in [this GitHub discussion][2] of generating a nonce using middleware, but I was only able to get CSP working by adding the header to `next.config.js`. I'm also using the pages router, if that info is helpful. This is what the relevant parts of my `next.config.js` file looks like:
```js
const cspHeader = `
default-src 'self';
script-src 'self';
style-src 'self' 'unsafe-inline' fonts.googleapis.com;
font-src 'self' https:;
img-src 'self' blob: data:;
object-src 'none';
connect-src 'self' https:;
upgrade-insecure-requests;
`
module.exports = {
...
async headers() {
return [
{
source: '/(.*)',
headers: [
{
key: 'Content-Security-Policy',
value: cspHeader.replace(/\n/g, ''),
},
],
},
]
},
}
```
Any idea on how to resolve this issue (preferably without modifying the CSP header just for local development)?
[1]: https://nextjs.org/docs/pages/building-your-application/configuring/content-security-policy
[2]: https://github.com/vercel/next.js/discussions/49648 |
I have data and I have succeeded in rendering it, but inside the data there is an array that I couldn't render.
I rendered the title, description and imgUrl correctly by using the map function, but I don't know how to render the array (technologies) as a list.
### Component:
```jsx
export const ProjectCard = ({ title, description, imgUrl, technologies }) => {
return (
<Col size={12} sm={6} md={4}>
<div className="project-imgbox">
<img src={imgUrl} alt="" />
<div className="project-text">
<h4>{title}</h4>
<span>{description}</span> <span>{technologies}</span>
</div>
</div>
</Col>
);
};
```
### Data:
```javascript
export const all = [
{
title: "Business Startup",
description: "Design & Development",
imgurl: projImg1,
technologies: ["Html", "Javascript", "Css"],
},
{
title: "Business Startup",
description: "Design & Development",
imgurl: projImg2,
technologies: ["Html", "Javascript", "Css"],
},
{
title: "Business Startup",
description: "Design & Development",
imgurl: projImg3,
technologies: ["Html", "Javascript", "Css"],
},
];
```
I tried many things, including rendering it inside a list, but it did not work.
|
How to enable an MFA "Trusted Device" in Laravel Project so user not prompted every time? |
|php|laravel|multi-factor-authentication| |
null |
{"OriginalQuestionIds":[9635968],"Voters":[{"Id":2943403,"DisplayName":"mickmackusa","BindingReason":{"GoldTagBadge":"arrays"}}]} |
I hope this makes it easy to understand the difference between
Current | Upgradable | Resolvable | Latest
[![enter image description here][1]][1]
Reference from the official Dart documentation:
[https://dart.dev/tools/pub/cmd/pub-outdated#output-columns][2]
[1]: https://i.stack.imgur.com/YELMa.png
[2]: https://dart.dev/tools/pub/cmd/pub-outdated#output-columns |
Include expresses the idea of containment.
Extend expresses the idea of inheritance. |
I was creating a MERN project in which I set a cookie by sending a request from the frontend to a backend route. That part works: the backend generates a token and it is saved in the browser's cookies. But when I send a request to the backend to verify the cookie, it logs that `req.cookies.token` is undefined.
Here is my Frontend code :
```
const handleOperations = async (field) => {
try {
const res = await axios.post(
`${BASE_URL}/cards/operation`,
{
field,
},
{
headers: {
"Content-Type": "application/json",
},
}
);
// console.log(res);
if (res) {
return res;
}
console.log("Not fetched");
} catch (error) {
console.log(error);
}
};
```
Here is my Backend code :
```
export const operations = async (req, res) => {
const token = req.cookies.token;
console.log(token);
const { field } = req.body;
try {
if (!token) {
return res.status(200).json({ message: "missing field token" });
}
const response = await operationModel.findOne({ token });
if (response) {
if (field === "edit" && response.editRemain === true) {
await operationModel.findByIdAndUpdate(
{ _id: response._id },
{ editRemain: false }
);
res.status(200).json({ edit: "Allow Edit" });
} else if (field === "delete" && response.deleteRemain === true) {
await operationModel.findByIdAndUpdate(
{ _id: response._id },
{ deleteRemain: false }
);
res.status(200).json({ delete: "Allow Delete" });
} else {
res.status(200).json({ message: "NO" });
}
} else {
if (field === "edit") {
await operationModel.create({
token,
editRemain: false,
});
res.status(200).json({ message: "Created" });
} else if (field === "delete") {
await operationModel.create({
token,
deleteRemain: false,
});
res.status(200).json({ message: "Created" });
}
}
} catch (error) {
res.status(400).json({ message: error.message });
}
};
```
Here is the error : [enter image description here](https://i.stack.imgur.com/CvsV7.png)
I have also used cookie-parser:
```
const app = express();
app.use(express.json());
app.use(cors({
origin : "http://localhost:3000" ,
credentials : true
}));
app.use(cookieParser());
app.use(bodyParser.json({extended : true}));
app.use(bodyParser.urlencoded({extended : true}));
```
|
org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [null] |
|java|spring-boot|hibernate|jpa| |
Ok, so I am supposed to design a website that will be used to administer courses and their appointments for students. While the admin pages will be accessed from a PC, the page students use to sign up for appointments will most likely be accessed from both phone and PC a lot of the time. I do like my current PC layout, and I've done a redesign for a phone layout. The thing is that I want to be able to detect when a user is accessing the site via phone so I can switch to my phone design. I don't think my current code is important for this question, since it isn't all that code-specific. However, if you think some code examples are required, just tell me and I'll gladly add some; I just want to be a bit careful since it is company-internal stuff.
So to give a rough idea:
The PC layout has the courses stored in cards that appear next to one another. When the user selects a course card, its appointments are displayed as smaller cards below the courses. But when I access this layout with a phone, the cards are pressed together while also overflowing.
The phone layout makes use of MudBlazor's Carousel, meaning it only shows one course card at a time and one gets to the next course by sliding it to the side.
At first I thought "hey, I can just use CSS @media to detect screen size and then adapt my visuals", but I realized that won't work. Specifically, I'm working with MudBlazor, which is kind of difficult to work with when one wants to make extensive styling choices. That led me down the road of the phone layout using different components than the PC layout.
Now I could be lazy and just use the Carousel for the PC layout as well, but I would be really interested in whether there is a way for me to detect the screen size/device of the user and, based on that, select how I'd like it to be rendered.
So maybe something like this
```csharp
@page "/courses"
@if(_isPhoneOrSmallScreen)
{
<CoursePagePhoneLayout/>
}
else
{
<CoursePagePCLayout/>
}
@code
{
private bool _isPhoneOrSmallScreen = false;
protected override void OnInitialized()
{
_isPhoneOrSmallScreen = DetectScreenOrPhone();
}
private bool DetectScreenOrPhone()
{
//some implementation
}
}
```
This is just an idea/example and not actual code of mine |
How to create/detect specific Phone Design or Layout for a Blazor Server-Side App |
|c#|blazor|blazor-server-side|mudblazor| |
null |
# Solution: use Type instead of Interface
### Use case:
```ts
interface GenericMap {
[key: string]: SomeType;
}
export class MyClass<SpecificMap extends GenericMap = GenericMap> {
private readonly map = {} as SpecificMap;
  constructor(...keys: (keyof SpecificMap)[]) {
    for (const key of keys) {
      this.map[key] = /* ... whatever, of type SomeType */ someValue as SpecificMap[keyof SpecificMap];
}
}
}
```
### Does not work:
```
interface MyMap {
myKey1: SomeType;
myKey2: SomeType;
}
const myObject = new MyClass<MyMap>('myKey1', 'myKey2');
// TS2344: Type MyMap does not satisfy the constraint GenericMap
Index signature for type string is missing in type MyMap
```
### Works but it is not type-safe anymore:
```
interface MyMap {
[key: string]: SomeType;
myKey1: SomeType;
myKey2: SomeType;
}
const myObject = new MyClass<MyMap>('myKey1', 'myKey2', 'someRandomKey');
// 'someRandomKey' is now allowed but should not
```
### Solution:
```
type MyMap = {
myKey1: SomeType;
myKey2: SomeType;
}
const myObject = new MyClass<MyMap>('myKey1', 'myKey2');
// works and does not allow random keys
```
Read more about index signatures: https://blog.herodevs.com/typescripts-unsung-hero-index-signatures-ddc3d1e34c9f |
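To see the accepted `type` version end to end, here is a hedged, self-contained sketch; the concrete `SomeType = number` choice and the zero initializer are assumptions of mine for the demo, not from the original class:

```typescript
type SomeType = number; // assumed concrete value type for the demo

interface GenericMap {
  [key: string]: SomeType;
}

class MyClass<SpecificMap extends GenericMap = GenericMap> {
  readonly map = {} as SpecificMap;

  constructor(...keys: (keyof SpecificMap)[]) {
    for (const key of keys) {
      // 0 stands in for "whatever" the real class would compute
      this.map[key] = 0 as SpecificMap[keyof SpecificMap];
    }
  }
}

// A `type` alias satisfies the index-signature constraint, while random
// extra keys are still rejected at compile time.
type MyMap = { myKey1: SomeType; myKey2: SomeType };
const obj = new MyClass<MyMap>("myKey1", "myKey2");
console.log(obj.map.myKey1); // 0
```

The key difference is that object-literal `type` aliases are checked for implicit index-signature compatibility, whereas interfaces must declare the index signature explicitly.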
I have implemented it in my project; in the service class I used:
```
keycloakService.keycloakEvents$.subscribe({
  next(event) {
    if (event.type == KeycloakEventType.OnTokenExpired) {
      keycloakService.updateToken(20).then(updated => {
        if (!updated) {
          keycloakService.logout(environment.baseUrlFrontend + "/logout");
        }
      });
    }
  }
});
```
This uses the keycloak-angular integration npm package, which provides pretty much all you need for this case. I guess it would be nice to export this out of the Angular service, but I didn't dig deeper on that since it worked for me. |
I have researched this and have in mind how to do parts of it, but don't quite understand how to put it all together. When someone is creating a new record in my app, if the results of the `InvestigationLog.CourtType` dropdown is either "Transfer In (F)" or "Transfer In (M)", then I need to fill a text field, `InvestigationLog.TCaseNo` with a Case Number, but ONLY if it is one of those two choices. (There are other choices besides these two.)
The case number needs to be in the format of `T(then F or M depending upon the 2 choices above)-###-YY`. For example, `TF-001-24` would be the first F case of 2024.
My current .cshtml for this field is:
```html
<td style="width: 25%">
<div class="mb-3">
<label asp-for="InvestigationLog.TCaseNo"></label>
<input asp-for="InvestigationLog.TCaseNo" id="TCaseNumber" type="text" class="form-control" readonly="readonly" />
</div>
</td>
```
And the .cshtml for the Investigation.CourtType field is:
```
<td style="width: 25%">
<div class="mb-3">
<label asp-for="InvestigationLog.CourtType"></label>
<select asp-for="InvestigationLog.CourtType" id="SelectCourtType" class="form-select" asp-items="@(new SelectList(Model.DisplayCourtTypeData.OrderBy(x => x.CourtTypeDesc),"CourtTypeDesc", "CourtTypeDesc"))" onchange="assignCaseNo" ><option value="" selected disabled>---Select Court Type---</option></select>
</div>
</td>
```
I would like this `TCaseNo` to populate once someone has tabbed out of the `InvestigationLog.CourtType` dropdown. If they go back (before submitting and binding the record) to change the value of the `CourtType` dropdown, I would like it to change accordingly.
In my research on here, I have come up with this to increment the 'F' and 'M' `TCaseNo`:
```csharp
var year = DateTime.Now.Year;
int incFCaseNo=0;
"F" + String.Format("{0:000}", incNumber, year);
incNumber++;
int incMCaseNo=0;
"M" + String.Format("{0:000}", incNumber, year);
incNumber++;
```
My issue is how do I get that INTO the `<input asp-for="InvestigationLog.TCaseNo"...>`?
Assuming I need an if statement, something like the following, in a JavaScript function for assignCaseNo. I just don't know enough JavaScript to do this correctly. Here is my pitiful attempt:
```
<script type="text/javascript">
function assignData() {
const courtTypeChoice = courtType[$("#SelectCourtType").val()];
$("#CourtType").val(courtTypeChoice).trigger('input');
}
function updateTCaseNumber('#TCaseNumber') {
if ('#SelectCourtType').val() = "Transfer In (F)"
{
var year = DateTime.Now.Year;
int incFCaseNo = 0;
"F" + String.Format("{0:000}", incNumber, year);
incNumber++;
}
else if('#SelectCourtType').val() = "Transfer In (M)"
{
var year = DateTime.Now.Year;
int incMCaseNo = 0;
"M" + String.Format("{0:000}", incNumber, year);
incNumber++;
}
else
{
null
}
$(document).ready(function () {
$('#SelectCourtType').on('input', updateTCaseNumber);
});
}
</script>
```
Please know that I know this is far from correct, but I am trying.
I'm also not sure how to deal with the auto-generated number after the fact. For example, what if someone later wants to edit the `CourtType` and it is no longer a Transfer type? Should that `TCaseNo` be freed up for reuse? What happens to it? Is there a standard practice for this?
Thanks in advance for your guidance on this! I'm afraid much of it is over my head. |
|
My Activity stack order is: MainActivity, SettingActivity, ThemeSettingActivity. ThemeSettingActivity is the top activity.
When I use AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_FOLLOW_SYSTEM) and switch themes through the system settings, only the top activity in the stack receives lifecycle callbacks, while MainActivity and SettingActivity do not receive any callbacks.
However, when I switch themes using AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO) or AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_YES) in code, if my app is in the foreground, all activities (including the top activity, MainActivity, and SettingActivity) receive lifecycle callbacks, and these callbacks are executed after the lifecycle callbacks of the top activity. The lifecycle callback order of my three activities is as follows:
I hope for the lifecycle callback order to be correct, with ThemeSettingActivity's onResume method being called last. Please help! |
Issue with Lifecycle Callback Order when Using AppCompatDelegate.setDefaultNightMode |
|android|androidx| |
I have an interesting problem. I have 3 separate MVC projects in the same solution (named Public_Module, Admin_Module and Reg_Users_Module) that consume an API by mapping DTOs in each MVC project. After a successful login, I try to reroute a user, based on role (Admin, RegisteredUser), to their respective Index page. Here are the MVC controllers:
```
namespace Reg_Users_Module.Controllers
{
public class HomeController : Controller
{
private readonly ILogger<HomeController> _logger;
public HomeController(ILogger<HomeController> logger)
{
_logger = logger;
}
public IActionResult Index()
{
return View();
}
public IActionResult Privacy()
{
return View();
}
}
}

namespace Admin_Module.Controllers
{
public class HomeController : Controller
{
private readonly ILogger<HomeController> _logger;
public HomeController(ILogger<HomeController> logger)
{
_logger = logger;
}
public IActionResult Index()
{
return View();
}
public IActionResult Privacy()
{
return View();
}
}
}

namespace Public_Module.Controllers
{
public class HomeController : Controller
{
private readonly ILogger<HomeController> _logger;
private readonly IAuthorizeService _authorizeService;
private readonly IMapper _mapper;
public HomeController(ILogger<HomeController> logger, IAuthorizeService authorizeService, IMapper mapper)
{
_logger = logger;
_authorizeService = authorizeService;
_mapper = mapper;
}
public IActionResult Index()
{
return View();
}
public IActionResult Privacy()
{
return View();
}
[HttpPost]
public async Task<IActionResult> Login(UserLoginModel model)
{
if (ModelState.IsValid)
{
var loginResult = await _authorizeService.LoginAsync(new LoginDTO
{
UserName = model.UserName,
Password = model.Password
});
if (loginResult.IsSuccess)
{
var roles = await _authorizeService.GetUserRolesAsync(model.UserName);
var redirectPath = GetRedirectPathForRoles(roles);
return Redirect(redirectPath);
}
ModelState.AddModelError(string.Empty, loginResult.Message);
}
return View(model);
}
private string GetRedirectPathForRoles(IEnumerable<string> roles)
{
if (roles.Contains(StaticUserRoles.ADMIN))
{
return "/Admin_Module/Home/Index";
}
else if (roles.Contains(StaticUserRoles.USER))
{
return "/Reg_Users_Module/Home/Index";
}
else
{
return "/Home/Register";
}
}
}
}
```
As you can see, Public_Module is the startup MVC, and after login it should reroute the user, according to role, to either the Admin_Module or the Reg_Users_Module Index page. Here is the routing in each Program.cs:
```
// Reg_Users_Module:
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
// Admin_Module:
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
//Public_Module:
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
name: "admin",
pattern: "Admin_Module/{controller=Home}/{action=Index}/{id?}");
app.MapControllerRoute(
name: "reg_users",
pattern: "Reg_Users_Module/{controller=Home}/{action=Index}/{id?}");
app.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
```
When I run it, I get an AmbiguousMatchException:
```
AmbiguousMatchException: The request matched multiple endpoints. Matches:
Reg_Users_Module.Controllers.HomeController.Index (Reg_Users_Module)
Public_Module.Controllers.HomeController.Index (Public_Module)
Admin_Module.Controllers.HomeController.Index (Admin_Module)
```
Can anyone please tell me how to correctly set up this rerouting? Thanks! |
How to reroute role based user after login |
|c#|model-view-controller|routes| |
1. You cannot use an async method in this case (or at least, you cannot use the promise returned from it). Your `isActive` method should be synchronous.
```
methods: {
isActive(){
return this.editor.isActive(this.title.toLowerCase())
}
}
```
Also, you can call the editor directly in the template:
```
<button
class="menu-item"
:class="{ 'is-active bg-zinc-800 rounded-sm': editor.isActive(title.toLowerCase()) }"
@click="(e) => {
e.preventDefault();
action();
}"
:title="title"
>
<i :class="`ri-${icon} ri-fw`"></i>
</button>
```
2. The error states that the editor is undefined. You have two options:
Either make sure the editor is initialized before mounting the menu-bar:
```
<menu-bar v-if="editor" :editor="editor"></menu-bar>
```
Or, check if the editor is defined before calling it.
```
isActive() {
return !!this.editor && this.editor.isActive(this.title.toLowerCase());
}
```
Working [sandbox](https://codesandbox.io/p/sandbox/vigilant-wave-jzvdsw?file=%2Fsrc%2FMenuItem.vue%3A29%2C22).
|
The data you pass to `predict` is in string format, but your model can only deal with numerical data. You should encode your features before passing them to `predict`. |
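For example, here is a minimal sketch with scikit-learn, using hypothetical string-valued features (`OrdinalEncoder` is just one of several encoders you could use; `OneHotEncoder` is another common choice):

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data with string-valued features
X_train = np.array([["red", "small"], ["blue", "large"], ["red", "large"]])
y_train = [0, 1, 1]

# Fit the encoder on the training features and convert them to numbers
encoder = OrdinalEncoder()
X_train_num = encoder.fit_transform(X_train)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train_num, y_train)

# New data must go through the SAME fitted encoder before predict()
X_new = np.array([["blue", "small"]])
prediction = model.predict(encoder.transform(X_new))
```

The key point is that the same fitted encoder must be applied to the data you pass to `predict`, so that identical strings map to identical numbers.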
I am using different numerical methods to understand the results yielded by different types of integrators at different time steps. I am comparing the performance of each integration method by calculating the Mean Absolute Error (MAE) of the predicted energy against the analytical solution:
$$ MAE = \frac{1}{n} \sum_{i=1}^{n}\left| y_{analytical,i} - y_{numerical,i}\right|. $$
Then for different time-steps I am calculating the resulting MAE and plotting the results in a log vs. log plot as shown below.
[](https://i.stack.imgur.com/ozMkL.png)
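For reference, the MAE above can be computed directly with NumPy. This is a minimal sketch, with made-up energy arrays standing in for the analytical and numerical series:

```python
import numpy as np

# Hypothetical energy series: analytical solution vs. numerical integrator output
E_analytical = np.array([1.00, 0.99, 1.01, 1.00])
E_numerical = np.array([1.00, 0.98, 1.03, 0.99])

# Mean Absolute Error between the two series
mae = np.mean(np.abs(E_analytical - E_numerical))
```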
The relation between MAE and time-step matches my expectations (the Verlet method scales quadratically and the Euler-Cromer method scales linearly), but I notice that the Verlet method has a turning point at about 10^(-4) s. This seems slightly too large; I was expecting the turning point to arise at time-steps closer to 10^(-8) s, as I am using numpy's float64, which gives about 15 to 17 significant decimal digits of precision.
I went on to plot the maximum and minimum errors obtained for each time step (excluding iteration 0, as those are the initial conditions, which are the same for both the numerical and analytical methods), and these are the results: [](https://i.stack.imgur.com/Thyjd.png)
[](https://i.stack.imgur.com/dXxpA.png)
Again, when plotting the maximum error I obtain a minimum of similar value compared to the previous plot, but when plotting the minimum obtained error (which always occurred in the first few iterations after the initial conditions) I find that the errors seem to flatten out at 10^(-4) s and approach errors of about 10^(-15) J in the energy.
Because of this flattening of the minimum errors, it makes sense that going below 10^(-4) s does not increase the precision of the Verlet method, but I can't explain why the maximum errors grow past this point.
An explanation that comes to mind is the round-off error caused by float64, which should appear when values reach about 10^(-15) to 10^(-17). I have manually checked the position, velocity and acceleration that result from running the Verlet method, but their lowest values are of order 10^(-9), very far from 10^(-15).
(1) Is it possible that I am introducing a round-off error when I calculate the residual error between the analytical solution and the Verlet method?
(2) Are there other, more appropriate ways of calculating the error? (I thought MAE was a good fit because the Verlet method oscillates about the true system values.)
(3) Are there tweaks that could expose possible flaws in my analysis? I have looked at my code extensively and am not able to find any bugs; furthermore, the Verlet method I coded does have an error that scales quadratically with the time step, which makes me think the code itself is fine. (Maybe I could use float128 throughout all calculations and see whether the above plots differ?)
Thanks in advance for any help with the above questions |
> `User+App` access allows your application to act on behalf of a signed-in user. It uses **Delegated** permissions, where user interaction is needed to acquire a token.
I registered one Azure AD application and granted `Mail.Read` permission of **Delegated** type like this:

With **Delegated** permissions, you can **only** read mails of signed-in user(/me endpoint) and shared mailbox.
**User+App access in C# (Interactive flow):**
```c#
using Azure.Identity;
using Microsoft.Graph;
using Microsoft.Graph.Models.ODataErrors;
var scopes = new[] { "https://graph.microsoft.com/.default" };
var tenantId = "tenantId";
var clientId = "appId";
var options = new InteractiveBrowserCredentialOptions
{
TenantId = tenantId,
ClientId = clientId,
AuthorityHost = AzureAuthorityHosts.AzurePublicCloud,
RedirectUri = new Uri("http://localhost"),
};
var interactiveCredential = new InteractiveBrowserCredential(options);
var graphClient = new GraphServiceClient(interactiveCredential, scopes);
try
{
var messages = await graphClient.Me.Messages.GetAsync();
foreach (var message in messages.Value)
{
Console.WriteLine($"Subject: {message.Subject}");
}
}
catch (ODataError odataError)
{
Console.WriteLine(odataError.Error.Code);
Console.WriteLine(odataError.Error.Message);
}
```
**Response:**

> The app-only scenario lets the application act without a signed-in user, using permissions of **Application** type, which give you access to read any user's mailbox in the tenant.
For this, I registered one application and granted permissions of **Application** type as below:

With **Application** permissions, you can read the mails of any user present in the tenant, without any login being required to acquire the token.
**App-only access in C# (Client credentials flow)**
```c#
using Azure.Identity;
using Microsoft.Graph;
class Program
{
static async Task Main(string[] args)
{
var scopes = new[] { "https://graph.microsoft.com/.default" };
var tenantId = "tenantID";
var clientId = "appID";
var clientSecret = "secret";
var options = new TokenCredentialOptions
{
AuthorityHost = AzureAuthorityHosts.AzurePublicCloud,
};
var clientSecretCredential = new ClientSecretCredential(tenantId, clientId, clientSecret, options);
var graphClient = new GraphServiceClient(clientSecretCredential, scopes);
try
{
var messages = await graphClient.Users["sri@xxxxxxx.onmicrosoft.com"].Messages.GetAsync();
foreach (var message in messages.Value)
{
Console.WriteLine($"Subject: {message.Subject}");
}
}
catch (ServiceException serviceException)
{
Console.WriteLine(serviceException.Message);
}
}
}
```
**Response:**
 |
You don't have to read the entire file in at once, as the `read_csv()` function has a `col_select` argument. You just need to modify your code to:
```
df1 <- read_csv(
"sample-data.csv",
col_names=c("D","B"),
col_select=c("D","B")
)
``` |
@Devendra I think your examples could be somewhat misleading if someone reads them.
`uri_for` expects a path (not an action). It returns an absolute URI object, so it's useful, for example, for linking to static content, or in cases where you don't expect your paths to change.
So, for example, let's say you've deployed your application on the domain example.com under the subdirectory abc (example.com/abc/): `$c->uri_for('/static/images/catalyst.png')` would return example.com/abc/static/images/catalyst.png, and `$c->uri_for('/contact-us-today')` would return example.com/abc/contact-us-today. If you later decide to deploy your application under another subdirectory, or at /, you'll still end up with correct links.
Now let's say that your contact-us action looks like `sub contact :Path('/contact-us-today') :Args(0) {...}` and you later decide that /contact-us-today should become just /contact-us. If you've used `uri_for('/contact-us-today')`, you'll need to find and change all lines that point to this URL. However, you can use `$c->uri_for_action('/controller/action_name')`, which will return the correct URL. |
If you are using a service, don't forget to declare the service in the manifest, under the `application` tag:
<service android:name=".service.MyForegroundService"/>
Also, make sure the necessary permissions are added and granted:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
<uses-permission android:name="android.permission.POST_NOTIFICATIONS"/> |
If you know that the column exists, you could proceed similarly to pandas:
```
df = pl.DataFrame({'ABC': [1,2,3], 'DEF': [4,5,6],
'XYZ': [7,8,9], 'GHI': [10,11,12]})
out = df[:, :df.columns.index('XYZ')+1]
```
Or, shorter (and more efficient):
```
out = df[:, :'XYZ']
```
Output:
```
shape: (3, 3)
┌─────┬─────┬─────┐
│ ABC │ DEF │ XYZ │
│ --- │ --- │ --- │
│ i64 │ i64 │ i64 │
╞═════╪═════╪═════╡
│ 1   │ 4   │ 7   │
│ 2   │ 5   │ 8   │
│ 3   │ 6   │ 9   │
└─────┴─────┴─────┘
``` |
#### How the Test Suites plugin works
The [documentation][1] on the plugin says:
> This [```implementation project()```] dependency provides access to the project's outputs as well as any dependencies declared on its api and compileOnlyApi configurations.
Therefore, when you depend on `implementation project()` in a new test suite, you will be able to see public types defined in the main source set's code, plus any dependencies declared in the `api` configuration, but not those declared in `implementation`.
The logic behind this, it would seem, is that integration tests should only see the same types that a consuming project would see by default. This makes sense, since they are "integration tests", testing the external, not internal, API of the project. The regular `test` source set can see the regular `implementation` dependencies if such tests are needed.
#### Your case
If you do want to override this convention and access an external dependency (in your case `java-uuid-generator`) in a set of integration tests, I suggest one of the following options:
1. Move the dependency to the `api` configuration:
```groovy
dependencies {
api 'com.fasterxml.uuid:java-uuid-generator:4.1.0'
}
```
2. Make the integration tests' `implementation` dependency configuration extend the main `implementation` configuration, thereby picking up the latter's dependencies and not altering the API a consuming project would see:
```groovy
integrationtest(JvmTestSuite) {
configurations {
named(sources.implementationConfigurationName) {
extendsFrom(getByName(JavaPlugin.IMPLEMENTATION_CONFIGURATION_NAME))
}
}
dependencies {
implementation project()
}
}
```
[1]: https://docs.gradle.org/current/userguide/jvm_test_suite_plugin.html#configure_dependencies_of_a_test_suite_to_reference_project_outputs |
The comments provided by @Brian were correct, in that I was not originally signing enough values, and the values I was signing were not alphabetically sorted by their key. Furthermore, when trying to use the `restletBaseString` function provided by NetSuite, I was generating a new nonce and a new timestamp to create the base string, and then still using the nonce and timestamp I created previously in the `$authorizor` array when creating the headers. This caused a mismatch of values between the signature and the headers. |
How to get the nearest 5 to 8 Wi-Fi records with SSID and signal strength, with or without a plugin |
|objective-c|nsstring| |
You asked how to declare a foreign key from `Section` to `TimeSlot`.
That would explicitly state that each row in `Section` refers to ***one*** row in `TimeSlot`. Your comment confirms that is ***not*** what you want.
What you're describing requires a third table to act as a link / association.
```
CREATE TABLE DimTimeSlot (
id INT IDENTITY(1,1),
time_slot_id NVARCHAR(4) PRIMARY KEY
);
create table Section
(
...
time_slot_id nvarchar(4),
...
--Foreign key =>(Section to DimTimeSlot)
constraint FK_Section_to_DimTimeSlot
foreign key(time_slot_id)
references DimTimeSlot(time_slot_id)
on delete set null
);
create table TimeSlot
(
time_slot_id nvarchar(4),
...
--Foreign key =>(TimeSlot to DimTimeSlot)
constraint FK_TimeSlot_to_DimTimeSlot
foreign key(time_slot_id)
references DimTimeSlot(time_slot_id)
on delete cascade
);
```
This way...
If a row in DimTimeSlot is deleted, the rows in Section get set to NULL and the rows in TimeSlot get deleted.
A row in Section references one row in DimTimeSlot, but that references N rows in TimeSlot.
|
I had the same issue with a brand new project and made it work using PHP version 8.2.16-fpm-alpine. |
```
class Company {
final String name;
final List<Customer> customers;
const Company({required this.name, required this.customers});
Company copyWith({
String? name,
List<Customer>? customers,
}) {
return Company(
name: name ?? this.name,
customers: List.from(customers ?? this.customers),
);
}
}
``` |
Why would my React app website in development look different after building it? When running `npm start`, my code runs fine. However, after creating a build with `npm run build`, some, but not all, of my site's text is off. What gives?
I tried updating the media queries in my app.css file.
I tried running `npm install` to make sure I have all my dependencies. |
Difference between my development app and production app |
|reactjs|npm|build|production| |