| text | meta |
|---|---|
Q: Print To Zebra Printer TLP 3842 How can I print to the Zebra Printer TLP 3842 using EPL programming? The printer doesn't support the Zebra Programming Language (ZPL) and I am using PrintDocument(). I have been stuck on this problem for two weeks and can't figure it out. This is what I have so far in my code, which actually runs the printer:
private System.ComponentModel.Container Components;
private System.Windows.Forms.Button PrintButton;
private Font PrintFont;
private StreamReader StreamToPrint;
private void PrintButton_Click(Object Sender, EventArgs e)
{
    try
    {
        StreamToPrint = new StreamReader("C:\\Users\\jcabrera\\Desktop\\MyFile.txt");
        //string ZPL_STRING = "^XA^LL440,^FO50,50^A0N,50,50^FDTesting Zebra Printer^FS^XZ";
        // ZPL Command(s)
        try
        {
            PrintFont = new Font("Arial", 10);
            PrintDocument PD = new PrintDocument();
            PD.PrintPage += new PrintPageEventHandler(this.PD_PrintPage);
            PD.Print();
        }
        finally
        {
            StreamToPrint.Close();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
private void PD_PrintPage(object Sender, PrintPageEventArgs ev)
{
    float linesPerPage = 0;
    float yPos = 0;
    int count = 0;
    float leftMargin = ev.MarginBounds.Left;
    float topMargin = ev.MarginBounds.Top;
    string line = null;

    // Calculate the number of lines per page.
    linesPerPage = ev.MarginBounds.Height / PrintFont.GetHeight(ev.Graphics);

    // Print each line of the file.
    while (count < linesPerPage &&
           ((line = StreamToPrint.ReadLine()) != null))
    {
        yPos = topMargin + (count * PrintFont.GetHeight(ev.Graphics));
        ev.Graphics.DrawString(line, PrintFont, Brushes.Black,
            leftMargin, yPos, new StringFormat());
        count++;
    }

    // If more lines exist, print another page.
    if (line != null)
        ev.HasMorePages = true;
    else
        ev.HasMorePages = false;
}
private void InitializeComponent()
{
    this.Components = new System.ComponentModel.Container();
    this.PrintButton = new System.Windows.Forms.Button();
    this.ClientSize = new System.Drawing.Size(504, 381);
    this.Text = "Print Example";
    PrintButton.ImageAlign = System.Drawing.ContentAlignment.MiddleLeft;
    PrintButton.Location = new System.Drawing.Point(32, 110);
    PrintButton.FlatStyle = System.Windows.Forms.FlatStyle.Flat;
    PrintButton.TabIndex = 0;
    PrintButton.Text = "Print the file.";
    PrintButton.Size = new System.Drawing.Size(136, 40);
    PrintButton.Click += new System.EventHandler(PrintButton_Click);
    this.Controls.Add(PrintButton);
}
I'm using a Windows application project.
A: You need to add the RawPrinterHelper class to your project and then print like this:
string ZPL_STRING = "^XA^LL440,^FO50,50^A0N,50,50^FDTesting Zebra Printer^FS^XZ";
RawPrinterHelper.SendStringToPrinter("PrinterName", ZPL_STRING);
C# class:
https://github.com/andyyou/SendToPrinter/blob/master/Printer/RawPrinterHelper.cs
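Since the TLP 3842 speaks EPL rather than ZPL, the string passed to SendStringToPrinter would carry EPL form data instead of the ZPL example above. A minimal sketch of such a form (the coordinates, font number, and text are guesses to adapt to your label):

```
N
A50,50,0,3,1,1,N,"Testing Zebra Printer"
P1
```

Here N clears the image buffer, A draws a line of ASCII text at the given x/y position, and P1 prints one label; each command ends with a newline.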
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51754966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Creating a hard link to partial file contents in linux I have fileA, which has a size of 200 MB. Now I want to create a hardlink to fileA, named fileB, but I only want this file point to the first 100 mb of fileA. So basically I need fileB to point to the same data blocks, but with a different length. It doesn't necessarily have to be a real hardlink it could be a virtual file proxying contents.
I was thinking about duplicating the Inode somehow and changing the length, but I presume this could cause filesystem coherency issues (when datablocks move around etc.). Is there any linux tool or user-level system call that could let me do this?
A: You cannot do this directly in the manner you're describing. There are filesystems that sort of support it; LessFS, for example, can do this. I also believe the underlying structure of btrfs supports it, so if someone put the hooks in, it would be accessible at user level.
But there is no way to do this on any of the ext filesystems, and I believe it happens implicitly in LessFS.
There is a really ugly way of doing it on btrfs: if you make a snapshot and then truncate the snapshot file to 100 MB, you have effectively achieved your goal.
This would also work on btrfs with a sufficiently recent version of cp: just copy the file with cp, then truncate one copy. The version of cp has to have the --reflink option, and if you want to be extra sure about it, give the --reflink=always option.
A: Adding to @Omnifarious's answer:
What you're describing is not a hard link. A hard link is essentially a reference to an inode, by a path name. (A soft link is a reference to a path name, by a path name.) There is no mechanism to say, "I want this inode, but slightly different, with only the first k blocks". A copy-on-write filesystem could do this for you under the covers. If you were using such a filesystem, you would simply say
cp fileA fileB && truncate -s 100M fileB
Of course, this also works on a non-copy-on-write filesystem, but there it takes up an extra 100 MB instead of just the filesystem overhead.
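The copy-then-truncate approach can be sketched as follows, scaled down to 2 MB so it runs anywhere (on btrfs you would add --reflink=always to cp so the copy shares fileA's extents instead of duplicating data; the paths are scratch placeholders):

```shell
# Scaled-down sketch: 2 MB stands in for the 200 MB file.
dir=$(mktemp -d)
head -c 2M /dev/zero > "$dir/fileA"

cp "$dir/fileA" "$dir/fileB"      # use cp --reflink=always on btrfs
truncate -s 1M "$dir/fileB"       # keep only the first half

stat -c '%n %s' "$dir/fileA" "$dir/fileB"
```

On a reflink-capable filesystem the truncated copy costs almost nothing beyond metadata; on ext4 it is a real copy of the retained half.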
Now, that said, you could still implement something like this easily on Linux with FUSE. You could implement a filesystem that mirrors some target directory but simply artificially sets a maximum length to files (at say 200 MB).
FUSE
FUSE Hello World
A: Maybe you can check ChunkFS. I think this is what you need (I didn't try it).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5054709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Deserialize JSON without property names in .NET I'm using JSON.NET to deserialize a response from HTTP, but I'm stuck with an issue: the response sends results without property names.
I have something like this:
{"people":{"jack":{"condition":"good","version":"1.0.5"},"jim":{"condition":"bad","version":"1.0.5"}},"hede":14,"hodo":"apple"}
How can I put this in a class?
Note: the JSON result can contain more than jack and jim; when I use "Paste as Classes" it creates jim and jack classes, but bob and mike will come soon.
Sorry, I was just writing up my attempt when my daughter woke up and I went to put her back to bed. What I mean is: after searching for "good" in the dictionaries, I want to go back and find "jack".
var values = from value in dictionaries.Values
             where value.condition == "good"
             select new { value.condition, value.version }
Here I searched the dictionaries; there are two of them (jack and jim), and I want to know in which dictionary I found "good".
A: You don't have a class so much as a dictionary of strings to objects. If you define the following, you should be able to deserialize properly:
public class PeopleResponse
{
    public Dictionary<string, Info> people { get; set; }
    public string hede { get; set; }
    public string hodo { get; set; }
}

public class Info
{
    public string condition { get; set; }
    public string version { get; set; }
}
From there, you should be able to do:
var results = JsonConvert.DeserializeObject<PeopleResponse>("{myJSONGoesHere}");
Edit: Based on your updated question, if you would like to get the names of all whose condition is "good", you can do the following (assuming your deserialized object is called results):
var goodStuff = from p in results.people
                where p.Value.condition.ToLower() == "good"
                select p.Key;
You could, of course, just get p instead of p.Key, which would return the entire key/value pair.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37242856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
}
|
Q: Manually upload symbols to Azure DevOps We are running an Azure DevOps Server on premises.
We use the git repos and the Artifacts NuGet feed for our project.
Currently we do our builds (for the NuGets) manually, without the pipelines, and upload the NuGets later.
If we used the pipelines, the "Index sources and publish symbols" task would place the symbols on a file share.
So I have two questions:
Is there a way to upload the symbols manually (e.g. via CLI, as we can with the NuGets)?
Is there any documentation on what the URL of the file share would look like on an on-premises server? All examples I found so far only show dev.azure.com.
A:
Is there a way to upload the symbols manually (e.g. via CLI, as we can with the NuGets)?
The answer is yes.
You can push both the primary and symbol packages at the same time using the command below. Both the .nupkg and .snupkg files need to be present in the current folder:
nuget push MyPackage.nupkg -source
To publish to a different symbol repository, or to push a legacy symbol package that doesn't follow the naming convention, use the -Source option:
nuget push MyPackage.symbols.nupkg -source https://nuget.smbsrc.net/
Please refer to these documents for some more details:
Publishing a symbol package
Publishing a legacy symbol package
Is there any documentation on what the URL of the file share would look like on an on-premises server?
You could check the documentation of the Index Sources & Publish Symbols task:
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72398369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to export a SSHA-256 password in LDAP I want to export a password from a database where the scheme for saving passwords is SSHA-256. The password in the database looks like {"salt", crypt("pass"+"salt")}; here is an example: WbwTWc,BFWTku+j6Up2XovqmpNFATe4g9aEWWW1shFysmzx/QY{SHA-256}.
Now I want to copy this password to my OpenLDAP server. I have already copied SHA-256 passwords into LDAP using {SHA-256}+"Encrypted Password" and that worked fine.
Could anyone please help me out with how I can import this SSHA-256 password into my LDAP server? Any help will be highly appreciated; thanks in advance.
A: What LDAP server is this? If it supports SSHA-256, then the same style i.e. {SSHA-256}+"Encrypted Password" should work.
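For reference, OpenLDAP's salted schemes conventionally store base64(digest || salt), where digest = hash(password || salt); I'm assuming the same layout applies to {SSHA-256}. A sketch of building and checking such a value (the helper names are mine):

```python
import base64
import hashlib
import os

def make_ssha256(password, salt=None):
    """Build an OpenLDAP-style {SSHA-256} value: base64(sha256(password + salt) + salt)."""
    if salt is None:
        salt = os.urandom(8)
    digest = hashlib.sha256(password.encode("utf-8") + salt).digest()
    return "{SSHA-256}" + base64.b64encode(digest + salt).decode("ascii")

def check_ssha256(password, stored):
    """Verify a password against a stored {SSHA-256} value."""
    raw = base64.b64decode(stored[len("{SSHA-256}"):])
    digest, salt = raw[:32], raw[32:]   # a SHA-256 digest is 32 bytes
    return hashlib.sha256(password.encode("utf-8") + salt).digest() == digest
```

So to migrate, you would need to re-pack your database's salt and hash into that digest-then-salt order before base64-encoding; if the database concatenated them differently (as your example suggests), the bytes have to be reordered accordingly.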
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37696347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: jquery-autocomplete does not work with my django app I have a problem with the jquery-autocomplete plugin and my django script. I want an easy-to-use autocomplete plugin, and this one (http://code.google.com/p/jquery-autocomplete/) seems very useful and easy. For the django part I use this (http://code.google.com/p/django-ajax-selects/). I modified it a little, because the output looked a bit weird to me: it had 2 '\n' for each new line, and there was no Content-Length header in the response. At first I thought this could be the problem, because all the online examples I found had them, but that was not it.
I have a very small test.html with the following body:
<body>
<form action="" method="post">
<p><label for="id_tag_list">Tag list:</label>
<input id="id_tag_list" name="tag_list" maxlength="200" type="text" /> </p>
<input type="submit" value="Submit" />
</form>
</body>
And this is the JQuery call to add autocomplete to the input.
function formatItem_tag_list(row) {
    return row[2];
}
function formatResult_tag_list(row) {
    return row[1];
}
$(document).ready(function(){
    $("input[id='id_tag_list']").autocomplete({
        url: 'http://gladis.org/ajax/tag',
        formatItem: formatItem_tag_list,
        formatResult: formatResult_tag_list,
        dataType: 'text'
    });
});
When I type something into the text field, Firefox (Firebug) and chromium-browser indicate that there is an ajax call, but with no response. If I just copy the URL into my browser, I can see the response. (This issue is solved; it was a safety feature of ajax not to fetch data from another domain.)
For example, when I type Bi in the text field, the url "http://gladis.org/ajax/tag?q=Bi&max... is generated. When you enter this in your browser you get this response:
4|Bier|Bier
43|Kolumbien|Kolumbien
33|Namibia|Namibia
Now my ajax call gets the correct response, but there is still no list showing up with the possible entries. I also tried to format the output, but that doesn't work either. I set breakpoints in the functions and realized that they aren't called at all.
Here is a link to my minimum HTML file http://gladis.org/media/input.html
Has anybody an idea what i did wrong. I also uploaded all the files as a small zip at http://gladis.org/media/example.zip.
Thank you for your help!
[Edit]
here is the urls conf:
(r'^ajax/(?P<channel>[a-z]+)$', 'ajax_select.views.ajax_lookup'),
and the ajax lookup channel configuration
AJAX_LOOKUP_CHANNELS = {
    # the simplest case, pass a DICT with the model and field to search against:
    'tag': dict(model='htags.Tag', search_field='text'),
}
and the view:
def ajax_lookup(request, channel):
    """ this view supplies results for both foreign keys and many to many fields """
    # it should come in as GET unless global $.ajaxSetup({type:"POST"}) has been set
    # in which case we'll support POST
    if request.method == "GET":
        # we could also insist on an ajax request
        if 'q' not in request.GET:
            return HttpResponse('')
        query = request.GET['q']
    else:
        if 'q' not in request.POST:
            return HttpResponse('')  # suspicious
        query = request.POST['q']
    lookup_channel = get_lookup(channel)
    if query:
        instances = lookup_channel.get_query(query, request)
    else:
        instances = []
    results = []
    for item in instances:
        results.append(u"%s|%s|%s" % (item.pk, lookup_channel.format_item(item), lookup_channel.format_result(item)))
    ret_string = "\n".join(results)
    resp = HttpResponse(ret_string, mimetype="text/html")
    resp['Content-Length'] = len(ret_string)
    return resp
A: You probably need a trailing slash at the end of the URL.
Also, your jQuery selector is wrong. You don't need quotes within the square brackets. However, that selector is better written like this anyway:
$("input#id_tag_list")
or just
$("#id_tag_list")
A: Separate answer because I've just thought of another possibility: is your static page being served from the same domain as the Ajax call (gladis.org)? If not, the same-domain policy will prevent Ajax from being loaded.
A: As an aside, assuming your document.ready is in your Django template, it would be a good idea to utilize the {% url %} tag rather than hardcoding your URL.
$(document).ready(function(){
    $("input[id='id_tag_list']").autocomplete({
        url: '{% url my_tag_lookup %}',
        dataType: 'text'
    });
});
This way the JS snippet will be rendered with the computed URL and your code will remain portable.
A: I found a solution, though I still don't know why the first approach didn't work out. I just switched to a different library: http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/. This one is actually promoted by jQuery, and it works ;)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2336666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: update gmail contacts C# exception I am trying to update gmail contacts info
Contact updatedContact = contact.Contact;
updatedContact.Content = "Contact information for " + contact.Contact.Name.FullName;
Uri feedUri = new Uri(ContactsQuery.CreateContactsUri("default"));
RequestSettings rs2 = new RequestSettings("CreateContacts", username, password);
ContactsRequest cr = new ContactsRequest(rs2);
Contact createdContact = cr.Update(updatedContact);
but I get this exception:
"execution of request failed http://www.google.com/m8/feeds/contacts/"mail"/full/..."
Any ideas?
A: It may be an issue with the machine running the code; it may work on other machines.
If you are behind a proxy, here is an article on how to set things up properly with proxies:
http://code.google.com/apis/gdata/articles/proxy_setup.html
A: I found the reason for the exception.
No problem appears when names are updated like this:
contact.Name.FullName = value;
but when phone numbers are updated like this, the above exception appears:
contact.Phonenumbers.Add(new Google.GData.Extensions.PhoneNumber(value));
It appears that gmail returns the same exception regardless of what error actually happened. How am I supposed to understand that from just "execution of request failed"? That is quite annoying.
I hope they add some details, though I do not know what is wrong with updating phone numbers like that.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3650879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Git push/clone not working using HTTPS without any sort of firewall/proxy only in Ubuntu Git push when using HTTP gives this
fatal: unable to access 'https://github.com/something.git/':
Failed to connect to github.com port 443: Network is unreachable
While git clone gives this
fatal: unable to access 'http://github.com/something.git/':
Failed to connect to github.com port 80: Network is unreachable
The same works on Windows using HTTP on the same machine, and on Ubuntu using SSH.
I tried looking at the git config files but couldn't achieve anything.
I can access github.com via the browser, and an HTTP remote for Heroku doesn't work either.
I tried purging and reinstalling git, but it made no difference.
ping github.com works, if that makes a difference.
This is a personal computer at home, directly connected to the router, and I have not consciously enabled any firewall.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34584688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Getting notified about iframe loading I am developing an extension where I need to get notified whenever an iframe is loaded and ready. I used page-mod but I don't get the expected output. This is my code:
var data = require("sdk/self").data;
var pageMod = require("sdk/page-mod");
pageMod.PageMod({
    include: ['*'],
    contentScriptFile: data.url("pageNavData.js"),
    contentScriptWhen: "ready",
    attachTo: ["frame"],
    onAttach: function(worker) {
        worker.port.on("gotElement", function(elementContent) {
            console.log(elementContent);
        });
    }
});
And pageNavData.js is:
self.port.emit("gotElement", document.location.href);
Can anybody see what is wrong with this?
A: The problem here is that the "gotElement" message is emitted before the listener is attached.
You can fix it with:
setTimeout(_ => self.port.emit("gotElement", document.location.href));
As far as I can tell you don't need the content script at all; just do what you wanted to do in the onAttach handler.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23358771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What should I do to get "telegram private group ids" (new way)? I want to create a new bot that prints users' first names in telegram groups.
This is easily done in public groups, but for private groups I need the group's id, so I have to add the bot to the private group; and then what should I do?
I want to do it in Python or Pyrogram. I am also not interested in using the telegram API like this:
https://api.telegram.org/bot<token>/getUpdates
because I cannot use it when I am using the python-telegram-bot library.
The main problem is: how am I supposed to get private group ids after I add the bot to them?
A: Simply make the bot an admin in the targeted group; then the bot will be able to read messages from the group, and thus you'll get the id.
Or disable privacy mode from BotFather; check the link below:
https://core.telegram.org/bots#privacy-mode
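Once the bot can see group messages (admin, or privacy mode off), every update it receives carries the chat id; private group/supergroup ids show up as negative numbers. A minimal sketch (the helper name is mine, and the wiring comment assumes the python-telegram-bot v13-style API):

```python
def describe_update(update):
    """Pull the group's chat id and the sender's first name out of an
    incoming update (works on python-telegram-bot Update objects;
    private group ids are negative numbers)."""
    chat_id = update.message.chat.id
    first_name = update.message.from_user.first_name
    return chat_id, first_name

# Wiring this into python-telegram-bot (v13 style) would look roughly like:
#
#   from telegram.ext import Updater, MessageHandler, Filters
#
#   def on_message(update, context):
#       chat_id, name = describe_update(update)
#       print(chat_id, name)          # log the private group id once
#
#   updater = Updater("BOT_TOKEN")    # placeholder token
#   updater.dispatcher.add_handler(MessageHandler(Filters.all, on_message))
#   updater.start_polling()
```

In other words, you don't query for the id; you log it from the first message the bot receives after being added.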
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61734989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Recursively move folders/files within subfolders to its parent I have a high level folder (call it 'level 0'), which contains a hundred or so subfolders ('level 1'), and within those subfolders consist of lower level subfolders and files ('level 2'). I need to dynamically move every file and folder from 'level 2' to its level 1 subfolder (or relatively speaking - its parent).
To better visualize:
Master folder
├─ Parent folder 1
│  ├─ Subfolder A
│  │  ├─ File A
│  │  └─ File B
│  ├─ Subfolder B
│  │  ├─ File C
│  │  └─ File D
│  ├─ File E
│  └─ File F
.
. ... many folders more ...
.
└─ Parent folder 134
   ├─ Subfolder CS
   │  ├─ File AGF
   │  └─ File ARH
   ├─ File ROQ
   └─ File JGL
I need to move everything from any subfolders' content within its parent folder. As above, you can see that there may be some files already in the parent folder (e.g. Files E, F) and they should stay put.
Objective:
Master folder
├─ Parent folder 1
│  ├─ File A
│  ├─ File B
│  ├─ File C
│  ├─ File D
│  ├─ File E
│  └─ File F
├─ Parent folder 119
│  ├─ File AZA
│  ├─ File AZB
│  ├─ File AZC
│  └─ File AZD
... and so on
The challenge here is that there are over a hundred of these parent folders beneath the master folder and all have different names. The subfolders within the parent folders also have different names as well.
I've tried approaching this using Get-ChildItem, then with a ForEach attempting to assign the child item (i.e. folder) to a variable and then running another recursive Get-ChildItem and moving all content to the parent, but I'm getting nowhere; if anything moves at all, it seems to end up referring back to the C:\Windows\ folder.
Not literal code, but the approach I'm thinking of:
$master = "\\data\Master"
gci $master | foreach {
    $parent = $_
    gci ... | Move-Item -Destination $parent
}
A: I tried to make something for you. This will move everything from the level-2 folders into the level-1 folder. Please try it and let me know how it works.
$master = "C:\Temp\Level0\"
Get-ChildItem $master | ForEach-Object {
    $dest = ($_.fullname) + "\"
    $Loc = (Get-ChildItem $_.FullName | Select-Object -ExpandProperty fullname) + "\*"
    Move-item -Path $Loc -Destination $dest
}
A: This should do what you want. It will, however, not move a file up to the parent folder if that filename is already in use there.
$FilePath = Read-Host "Enter FilePath"
$FileSource = Get-ChildItem -Path $FilePath -Directory
foreach ($Directory in $FileSource)
{
    $Files = Get-ChildItem -Path $Directory -File
    foreach ($File in $Files) {
        Move-item -Path $File.Fullname -Destination $FilePath
    }
}
Hope that is of some use.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50193121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Docker NodeJS Azure ServiceBus Service Lock Timeout Apart from changing the "Lock Duration" setting in the Azure Portal, I want to know how to set the timeout value / renew the lock (the better to handle unknown long-running tasks).
var azure = require('azure-sb'),
    serviceBusService = azure.createServiceBusService("Endpoint=XXX");
serviceBusService.receiveQueueMessage(MESSAGE_QUEUE_NAME, { isPeekLock: true }, function(error, lockedMessage){
    // ... task running longer than "Lock Duration" ...
});
When the task finishes, there is an error and the message has been moved to the dead-letter queue:
Error: 404 - The lock supplied is invalid. Either the lock expired, or
the message has already been removed from the queue.
A: You can try and use renewLockForMessage to extend the lock.
Hope it helps!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53390968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why does an R/W transition mid-file in rb+ mode fail unless I use fseek(fp,0,SEEK_CUR)? Why does it work at end of file? I had never realized this; I may well have subconsciously assumed as a hard fact that I can transition between reading and writing on an existing file opened in update mode, just like that. But two questions on SO (1, 2) made me skeptical and I decided to try it out. Here's what I find:
In the first program, prog1, I work on a file source.txt which only has the line Lethal weapon. I read the first word Lethal using fscanf() and intend to write " musket" right after it, expecting to get Lethal musket. But it simply fails, and I still get the original content Lethal weapon at the end of the write operation. However, if I insert the line fseek(fp,0,SEEK_CUR), the write works fine and I get Lethal musket. I notice that fseek(fp,0,SEEK_CUR) serves no purpose other than calling fseek() for its own sake, as there is no net seeking at all.
But the second program, prog2, in the same scenario doesn't need that fseek(fp,0,SEEK_CUR) statement. To be exact, in the second program, in contrast to reading to the middle of the file as in the first program, I read to the end of the file and then start writing there. Here the write succeeds even without fseek(fp,0,SEEK_CUR) and I get the desired content Lethal weapon sale.
Question: Why can't we transition from reading to writing in the middle of the file, and what difference does fseek(fp,0,SEEK_CUR) make to get it working? And why does the same transition work without any fuss if we read to the end of the file and write there? Why is fseek(fp,0,SEEK_CUR) not needed in the second case? How advisable is my use of fseek(fp,0,SEEK_CUR) to make the write succeed in the first program? Any better alternative?
Two questions on SO seem to have addressed the same issue to some extent, but since they were based more on seeking explanations of excerpts from texts/books, the answers oriented towards them don't seem to address what I want to know in precise terms.
//PROG1
#include <stdio.h>
int main()
{
    FILE *fp = fopen("D:\\source.txt", "r+");
    char arr[20], brr[50];
    fscanf(fp, "%s", arr);
    printf("String is %s\n", arr);
    //fseek(fp,0,SEEK_CUR); //Doesn't work without it
    fprintf(fp, " musket");
    rewind(fp);
    fgets(brr, 50, fp);
    printf("New string is %s", brr);
    fclose(fp);
}
Output:
1) Without fseek() -- Lethal weapon
2) With fseek() -- Lethal musket
//PROG2
#include <stdio.h>
int main()
{
    FILE *fp = fopen("D:\\source.txt", "r+");
    char arr[20], brr[50];
    fgets(arr, 20, fp);
    printf("Initial line is %s\n", arr);
    fprintf(fp, " sale"); //writes to end of file
    rewind(fp);
    printf("New string is %s", fgets(brr, 50, fp));
    fclose(fp);
}
Output without fseek(): -- Lethal weapon sale
A: Actually, it looks like a bug in your libc implementation.
File I/O streams are usually a libc abstraction over the file descriptor based binary I/O implemented by the OS kernel. So any strange behaviour shall be attributed to your specific libc quirks.
Since you're apparently using Windows, that may be the source of your problems. What is the compiler you're using?
There is no such problem with GCC 4.6.1 and glibc-2.13 on Ubuntu 11.10.
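Worth noting: the intervening fseek is actually sanctioned by the ISO C standard, not just a libc quirk. C11 §7.21.5.3p7 says that on a stream opened for update, input shall not be directly followed by output without an intervening call to fflush or a file-positioning function (fseek, fsetpos, rewind), unless the input operation encountered end-of-file. That last clause is why prog2 works without the fseek: fgets has already reached end-of-file. A compilable sketch of the portable pattern, using tmpfile() instead of D:\source.txt (the function name is mine):

```c
#include <stdio.h>
#include <string.h>

/* Read the first word, then switch to writing, with the
   standard-mandated positioning call in between. */
const char *read_then_write(void)
{
    static char line[50];
    char word[20];

    FILE *fp = tmpfile();         /* binary update stream ("wb+") */
    fputs("Lethal weapon", fp);
    rewind(fp);

    fscanf(fp, "%19s", word);     /* reads "Lethal"; position is now 6 */

    fseek(fp, 0, SEEK_CUR);       /* required before switching to output */
    fputs(" musket", fp);         /* overwrites " weapon" */

    rewind(fp);
    fgets(line, sizeof line, fp); /* "Lethal musket" */
    fclose(fp);
    return line;
}
```

So fseek(fp,0,SEEK_CUR) is not a hack; it is the portable way to mark the read-to-write transition mid-file.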
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/16728240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: how to disable zoom slider through excel vba? I am looking to disable the zoom slider in an excel worksheet. I have looked around but couldn't find a solution. There are some solutions pointing to setting the zoom to a defined value, but none of them covers disabling the zoom option itself.
Any help would be much appreciated.
I am adding the code I have put in ThisWorkbook to disable most of the visible options on the worksheet.
Private Sub Workbook_Activate()
    Application.ScreenUpdating = False
    Application.ExecuteExcel4Macro "SHOW.TOOLBAR(""Ribbon"",False)"
    Application.DisplayFormulaBar = False
    Application.DisplayStatusBar = Not Application.DisplayStatusBar
    ActiveWindow.DisplayWorkbookTabs = False
    Application.ScreenUpdating = True
End Sub

Private Sub Workbook_Deactivate()
    Application.ScreenUpdating = False
    Application.ExecuteExcel4Macro "SHOW.TOOLBAR(""Ribbon"",True)"
    Application.DisplayFormulaBar = True
    Application.DisplayStatusBar = True
    Application.ScreenUpdating = True
    ActiveWindow.DisplayWorkbookTabs = True
End Sub

Private Sub Workbook_Open()
    Application.ScreenUpdating = False
    'Hide list of sheets
    Call hide_sheets
    Windows(1).WindowState = xlMaximized
    ActiveWindow.DisplayGridlines = False
    Application.ExecuteExcel4Macro "SHOW.TOOLBAR(""Ribbon"",False)"
    Application.DisplayFormulaBar = False
    Application.DisplayStatusBar = Not Application.DisplayStatusBar
    ActiveWindow.DisplayWorkbookTabs = False
    ActiveWindow.Zoom = 100
    'Lock cells in the UI sheet
    ThisWorkbook.Sheets("UI").ScrollArea = "A1:t46"
    'Hide scroll bar
    With ActiveWindow
        .DisplayHorizontalScrollBar = False
        .DisplayVerticalScrollBar = False
    End With
    Application.ScreenUpdating = True
    welcomeScreen.Show 0
End Sub
Some screenshots of the problem I am facing. I am currently using a makeshift arrangement of loading a maximized userform when the workbook opens, so the slider is hidden.
A: After looking around, there doesn't appear to be a way to disable zoom completely, or the slider specifically. If your main mission is to stop someone clicking the zoom slider, I would go with hiding the status bar altogether:
Application.DisplayStatusBar = False
A: To hide the zoom slider alone you can edit the registry key
HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Excel\StatusBar
by setting its ZoomSlider value to 0.
Anyway, I don't think this can be achieved using VBA, even with SaveSetting.
You could try writing a .reg file that changes the target key and loading it from VBA. I'm not sure this can be done, but even if it works, the user will still have to click Yes in a system prompt to allow the key to be loaded into the registry.
And even when the user clicks Yes to allow the .reg file to change the registry key, the Excel status bar doesn't refresh to show/hide the ZoomSlider until Excel is restarted.
In short, hiding the zoom slider alone using VBA doesn't seem to be achievable.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33980663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Use of regex to validate home address I'm new to regex and I'm trying to create a validation process using it. I'm checking whether a given home address or location address is valid. I validate other values with regex too and they all work just fine, but this one only checks the first character entered. If you enter special characters in any other part of the value, the function still returns true.
I'm only allowing letters, numbers, periods, commas, and whitespace.
Here's the code [Excluded the working code and regex]:
document.querySelector(".review-input").addEventListener("click", reviewSubmittedForm);
function runValidationForAddress(stringToBeChecked) {
    var regex = /^[a-zA-Z0-9\\,\\.\s]/g;
    if (regex.test(stringToBeChecked) === true) {
        //Valid
        console.log("String is valid.");
        return true;
    }
    else {
        //Invalid
        console.log("String is invalid! Please re-enter");
        return false;
    }
}

function reviewSubmittedForm() {
    var addressInput = document.querySelector(".address-input");
    if (runValidationForAddress(addressInput.value) === true) {
        console.log("Pass");
    }
    else {
        console.log("Denied");
    }
}
<input type="text" class="address-input" value="1600 Amphitheatre Parkway">
<input type="button" class="review-input" value="Submit">
<p>
After submitting, try to copy and paste this other location addresses:<br><br>
@1600 Amphitheater Parkway<br>
1600 @Amphitheatre $Parkway<br>
1600 #Amphitheatre Parkway<br>
</p>
A: Use ^[\w\s ,.]+$ for your validation.
You can check it online at https://regex101.com/r/q6LoSE/4.
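For context, the original pattern only checked the first character because it had no quantifier and no end anchor (the doubled backslashes also put a literal backslash in the class, and the g flag makes test() stateful across calls). An anchored version closer to the stated rules — note that the \w in the answer's pattern additionally admits underscores — could look like this:

```javascript
// Anchored whole-string validation: letters, digits, periods, commas,
// and whitespace only. The + quantifier and ^...$ anchors make the test
// cover every character, not just the first one.
const ADDRESS_RE = /^[A-Za-z0-9.,\s]+$/;

function runValidationForAddress(s) {
  return ADDRESS_RE.test(s);
}

console.log(runValidationForAddress("1600 Amphitheatre Parkway"));   // true
console.log(runValidationForAddress("1600 @Amphitheatre $Parkway")); // false
```

Dropping the g flag also avoids the lastIndex side effect that makes a reused regex alternate between matching and failing on identical input.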
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62369830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unique comparable points in a Java TreeSet Here is a very simple example of Java code:
import java.util.*;
class Point implements Comparable<Point> {
    float x, y;
    Point(float x_, float y_) { x = x_; y = y_; }
    public int compareTo(Point p_) {
        System.out.println(dist(x, y, p_.x, p_.y));
        return dist(x, y, p_.x, p_.y) < 1E-4 ? -1 : 1;
    }
    float dist(float x0_, float y0_, float x1_, float y1_) {
        return (float) Math.sqrt(Math.pow(x1_ - x0_, 2) + Math.pow(y1_ - y0_, 2));
    }
}
public static void main(String[] args) {
    TreeSet<Point> tp = new TreeSet<Point>();
    tp.add(new Point(0.125f, 0.5f));
    tp.add(new Point(-0.125f, 0.25f));
    tp.add(new Point(0.15f, -0.75f));
    tp.add(new Point(0.125f, 0.5f));
    System.out.println(tp);
}
I thought every point would be unique: there is a distance check, and if the distance is below epsilon the point should be treated as a duplicate and removed.
But it doesn't work right: the fourth entry is kept despite being a duplicate of the first.
My intention is to get a set of unique points (vectors).
A:
This is so because the Set interface is defined in terms of the equals operation, but a TreeSet instance performs all element comparisons using its compareTo (or compare) method, so two elements that are deemed equal by this method are, from the standpoint of the set, equal. The behavior of a set is well-defined even if its ordering is inconsistent with equals; it just fails to obey the general contract of the Set interface.
https://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html
In other words, the TreeSet class does not maintain the uniqueness of its elements via the equals method but via the compareTo method of the element class (if it implements Comparable), or via the compare method of the given Comparator. Note also that your compareTo never returns 0, so the TreeSet can never consider two points equal in the first place.
As others have pointed out in the comments, if you want to maintain uniqueness among your Points by their coordinates, you should override the equals and hashCode methods, as the general hashcode contract states and utilize a HashSet to store your Points.
In the following snippet, I've updated your class by implementing the equals, hashCode and toString methods to show how only three elements are printed instead of four.
import java.util.HashSet;
import java.util.Objects;

public class Point implements Comparable<Point> {
double x, y;
public Point(double x, double y) {
this.x = x;
this.y = y;
}
// ... your implementation ...
@Override
public boolean equals(Object obj) {
if (obj == null) return false;
if (obj == this) return true;
if (obj.getClass() != getClass()) return false;
Point other = (Point) obj;
return x == other.x && y == other.y;
}
@Override
public int hashCode() {
return Objects.hash(x, y);
}
@Override
public String toString() {
return String.format("(%g;%g)", x, y);
}
public static void main(String[] args) {
HashSet<Point> tp = new HashSet<>();
tp.add(new Point(0.125, 0.5));
tp.add(new Point(0. - 125, 0.25));
tp.add(new Point(0.15, -0.75));
tp.add(new Point(0.125, 0.5));
System.out.println(tp);
}
}
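If sorted iteration is also needed, a TreeSet can still be used as long as its comparator is consistent with equals, i.e. it returns 0 exactly when two points have the same coordinates. A minimal sketch (using a plain double[] pair instead of the Point class above, purely for illustration):

```java
import java.util.Comparator;
import java.util.TreeSet;

public class ConsistentOrderingDemo {
    static TreeSet<double[]> buildSet() {
        // Compare by x, then by y; identical coordinates compare as 0,
        // so the TreeSet rejects them as duplicates.
        Comparator<double[]> byCoords = Comparator
                .<double[]>comparingDouble(p -> p[0])
                .thenComparingDouble(p -> p[1]);

        TreeSet<double[]> points = new TreeSet<>(byCoords);
        points.add(new double[]{0.125, 0.5});
        points.add(new double[]{-0.125, 0.25});
        points.add(new double[]{0.15, -0.75});
        points.add(new double[]{0.125, 0.5}); // duplicate of the first entry
        return points;
    }

    public static void main(String[] args) {
        System.out.println(buildSet().size()); // prints 3
    }
}
```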
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72239343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: AOT Compilation fails. Angular2 RC6 - Function call not supported - RouterModule.forChild(ROUTES) Environment:
Windows 10, IntelliJ 2016.2, node
*Angular version: 2.0.0-rc.6
*Language: TypeScript / ES6
*Node (for AoT issues): Node 4.4.7, NPM 3.10.6
The AOT compiler fails, complaining about a function call or lambda reference. The only such call is RouterModule.forChild(ROUTES), but it was previously able to compile with this reference. I don't see how the app can work without the imported component.
// /**
// * Angular 2 decorators and services
// */
// // import { BrowserModule } from '@angular/platform-browser'
//
import { CommonModule } from '@angular/common';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule } from '@angular/router';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
//
import { ROUTES } from './detail.routes';
/*
* Shared Utilities & Other Services
*/
import { Logging } from '../services/utility.service';
/**
* Imported Components
*/
import { DetailComponent } from './detail.component';
@NgModule({
declarations: [// Components / Directives/ Pipes
DetailComponent],
imports: [CommonModule, BrowserModule, FormsModule, RouterModule.forChild(ROUTES),]
})
export class DetailModule {
constructor() {
if (Logging.isEnabled.light) { console.log('%c Hello \"Detail\" component!', Logging.normal.lime); }
}
}
The error is
Error: Error encountered resolving symbol values statically. Function calls are not supported. Consider replacing the function or lambda with a reference to an exported function, resolving symbol DetailModule in C:/Source/POC/Microservice.Poc/src/app-components/+detail/index.ts, resolving symbol DetailModule in C:/Source/POC/Microservice.Poc/src/app-components/+detail/index.ts
at simplifyInContext (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\static_reflector.js:473:23)
at StaticReflector.simplify (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\static_reflector.js:476:22)
at StaticReflector.annotations (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\static_reflector.js:61:36)
at _loop_1 (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\codegen.js:53:54)
at CodeGenerator.readFileMetadata (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\codegen.js:66:13)
at C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\codegen.js:100:74
at Array.map (native)
at CodeGenerator.codegen (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\codegen.js:100:35)
at codegen (C:\Source\POC\Microservice.Poc\node_modules\@angular\compiler-cli\src\main.js:7:81)
at Object.main (C:\Source\POC\Microservice.Poc\node_modules\@angular\tsc-wrapped\src\main.js:30:16)
Compilation failed
Why is RouterModule.forChild(ROUTES) making AOT compilation fail, when over here at this repo, it works fine with AOT compilation?
A: The problem was solved by switching from RC6 builds to the github builds:
This:
"@angular/compiler-cli": "github:angular/compiler-cli-builds",
"@angular/common": "2.0.0-rc.6",
"@angular/compiler": "2.0.0-rc.6",
"@angular/core": "2.0.0-rc.6",
"@angular/forms": "^2.0.0-rc.6",
"@angular/http": "2.0.0-rc.6",
"@angular/platform-browser": "2.0.0-rc.6",
"@angular/platform-browser-dynamic": "2.0.0-rc.6",
"@angular/platform-server": "2.0.0-rc.6",
"@angular/router": "3.0.0-rc.2",
Was changed to
"@angular/common": "github:angular/common-builds",
"@angular/compiler": "github:angular/compiler-builds",
"@angular/compiler-cli": "github:angular/compiler-cli-builds",
"@angular/core": "github:angular/core-builds",
"@angular/forms": "github:angular/forms-builds",
"@angular/http": "github:angular/http-builds",
"@angular/platform-browser": "github:angular/platform-browser-builds",
"@angular/platform-browser-dynamic": "github:angular/platform-browser-dynamic-builds",
"@angular/platform-server": "github:angular/platform-server-builds",
"@angular/router": "github:angular/router-builds",
"@angular/tsc-wrapped": "github:angular/tsc-wrapped-builds",
And it compiles fine now.
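For reference, when upgrading to newer builds is not an option, this class of error is usually worked around by giving the static analyzer a reference it can resolve: an exported named function instead of an inline call or lambda. A hedged sketch (SomeToken, SomeService, and someServiceFactory are made-up names; this fragment will not compile outside an Angular project):

```typescript
// Instead of an inline arrow function, which the AOT collector
// cannot evaluate statically:
//   providers: [{ provide: SomeToken, useFactory: () => new SomeService() }]
// export a named factory function and reference it by name:
export function someServiceFactory() {
  return new SomeService();
}

@NgModule({
  providers: [{ provide: SomeToken, useFactory: someServiceFactory }]
})
export class DetailModule {}
```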
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39475046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Adding an ID or class to Marker using markerwithlabel class var markers = new MarkerWithLabel({
position: point,
icon: imagePath1,
map: map,
draggable: false,
labelContent: image_count,
labelAnchor: new google.maps.Point(10, 30),
labelClass: "labels", // the CSS class for the label
labelInBackground: false,
id: "markerId",
title : contentVal,
});
alert("Id is::"+$("#"+markers.id).val());
When I try to get Id, it is undefined
A: I too hit this roadblock. I reviewed the source code for MarkerWithLabel, and it does not support setting the ID or Class.
But you can add support by changing just 3 lines of code. Below is the modified code, with my added "ID" support. It works as normal, but if you add a .id property when creating a new MarkerWithLabel (as you did above), it will properly set the ID.
New markerwithlabel.js code - added ID support
/**
* @name MarkerWithLabel for V3
* @version 1.1.10 [April 8, 2014]
* @author Gary Little (inspired by code from Marc Ridey of Google).
* @copyright Copyright 2012 Gary Little [gary at luxcentral.com]
* @fileoverview MarkerWithLabel extends the Google Maps JavaScript API V3
* <code>google.maps.Marker</code> class.
* <p>
* MarkerWithLabel allows you to define markers with associated labels. As you would expect,
* if the marker is draggable, so too will be the label. In addition, a marker with a label
* responds to all mouse events in the same manner as a regular marker. It also fires mouse
* events and "property changed" events just as a regular marker would. Version 1.1 adds
* support for the raiseOnDrag feature introduced in API V3.3.
* <p>
* If you drag a marker by its label, you can cancel the drag and return the marker to its
* original position by pressing the <code>Esc</code> key. This doesn't work if you drag the marker
* itself because this feature is not (yet) supported in the <code>google.maps.Marker</code> class.
*/
/*!
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*jslint browser:true */
/*global document,google */
/**
* @param {Function} childCtor Child class.
* @param {Function} parentCtor Parent class.
* @private
*/
function inherits(childCtor, parentCtor) {
/* @constructor */
function tempCtor() {}
tempCtor.prototype = parentCtor.prototype;
childCtor.superClass_ = parentCtor.prototype;
childCtor.prototype = new tempCtor();
/* @override */
childCtor.prototype.constructor = childCtor;
}
/**
* This constructor creates a label and associates it with a marker.
* It is for the private use of the MarkerWithLabel class.
* @constructor
* @param {string} id The optional DOM id to assign to the label's div (added for ID support).
* @param {Marker} marker The marker with which the label is to be associated.
* @param {string} crossURL The URL of the cross image.
* @param {string} handCursorURL The URL of the hand cursor.
* @private
*/
function MarkerLabel_(id, marker, crossURL, handCursorURL) {
this.marker_ = marker;
this.handCursorURL_ = marker.handCursorURL;
this.labelDiv_ = document.createElement("div");
//////////////////////////////////////////////////////////
// New - ID support
//////////////////////////////////////////////////////////
if (typeof id !== 'undefined') this.labelDiv_.id = id;
this.labelDiv_.style.cssText = "position: absolute; overflow: hidden;";
// Get the DIV for the "X" to be displayed when the marker is raised.
this.crossDiv_ = MarkerLabel_.getSharedCross(crossURL);
}
inherits(MarkerLabel_, google.maps.OverlayView);
/**
* Returns the DIV for the cross used when dragging a marker when the
* raiseOnDrag parameter set to true. One cross is shared with all markers.
* @param {string} crossURL The URL of the cross image.
* @private
*/
MarkerLabel_.getSharedCross = function (crossURL) {
var div;
if (typeof MarkerLabel_.getSharedCross.crossDiv === "undefined") {
div = document.createElement("img");
div.style.cssText = "position: absolute; z-index: 1000002; display: none;";
// Hopefully Google never changes the standard "X" attributes:
div.style.marginLeft = "-8px";
div.style.marginTop = "-9px";
div.src = crossURL;
MarkerLabel_.getSharedCross.crossDiv = div;
}
return MarkerLabel_.getSharedCross.crossDiv;
};
/**
* Adds the DIV representing the label to the DOM. This method is called
* automatically when the marker's <code>setMap</code> method is called.
* @private
*/
MarkerLabel_.prototype.onAdd = function () {
var me = this;
var cMouseIsDown = false;
var cDraggingLabel = false;
var cSavedZIndex;
var cLatOffset, cLngOffset;
var cIgnoreClick;
var cRaiseEnabled;
var cStartPosition;
var cStartCenter;
// Constants:
var cRaiseOffset = 20;
var cDraggingCursor = "url(" + this.handCursorURL_ + ")";
// Stops all processing of an event.
//
var cAbortEvent = function (e) {
if (e.preventDefault) {
e.preventDefault();
}
e.cancelBubble = true;
if (e.stopPropagation) {
e.stopPropagation();
}
};
var cStopBounce = function () {
me.marker_.setAnimation(null);
};
this.getPanes().overlayMouseTarget.appendChild(this.labelDiv_);
// One cross is shared with all markers, so only add it once:
if (typeof MarkerLabel_.getSharedCross.processed === "undefined") {
this.getPanes().overlayMouseTarget.appendChild(this.crossDiv_);
MarkerLabel_.getSharedCross.processed = true;
}
this.listeners_ = [
google.maps.event.addDomListener(this.labelDiv_, "mouseover", function (e) {
if (me.marker_.getDraggable() || me.marker_.getClickable()) {
this.style.cursor = "pointer";
google.maps.event.trigger(me.marker_, "mouseover", e);
}
}),
google.maps.event.addDomListener(this.labelDiv_, "mouseout", function (e) {
if ((me.marker_.getDraggable() || me.marker_.getClickable()) && !cDraggingLabel) {
this.style.cursor = me.marker_.getCursor();
google.maps.event.trigger(me.marker_, "mouseout", e);
}
}),
google.maps.event.addDomListener(this.labelDiv_, "mousedown", function (e) {
cDraggingLabel = false;
if (me.marker_.getDraggable()) {
cMouseIsDown = true;
this.style.cursor = cDraggingCursor;
}
if (me.marker_.getDraggable() || me.marker_.getClickable()) {
google.maps.event.trigger(me.marker_, "mousedown", e);
cAbortEvent(e); // Prevent map pan when starting a drag on a label
}
}),
google.maps.event.addDomListener(document, "mouseup", function (mEvent) {
var position;
if (cMouseIsDown) {
cMouseIsDown = false;
me.eventDiv_.style.cursor = "pointer";
google.maps.event.trigger(me.marker_, "mouseup", mEvent);
}
if (cDraggingLabel) {
if (cRaiseEnabled) { // Lower the marker & label
position = me.getProjection().fromLatLngToDivPixel(me.marker_.getPosition());
position.y += cRaiseOffset;
me.marker_.setPosition(me.getProjection().fromDivPixelToLatLng(position));
// This is not the same bouncing style as when the marker portion is dragged,
// but it will have to do:
try { // Will fail if running Google Maps API earlier than V3.3
me.marker_.setAnimation(google.maps.Animation.BOUNCE);
setTimeout(cStopBounce, 1406);
} catch (e) {}
}
me.crossDiv_.style.display = "none";
me.marker_.setZIndex(cSavedZIndex);
cIgnoreClick = true; // Set flag to ignore the click event reported after a label drag
cDraggingLabel = false;
mEvent.latLng = me.marker_.getPosition();
google.maps.event.trigger(me.marker_, "dragend", mEvent);
}
}),
google.maps.event.addListener(me.marker_.getMap(), "mousemove", function (mEvent) {
var position;
if (cMouseIsDown) {
if (cDraggingLabel) {
// Change the reported location from the mouse position to the marker position:
mEvent.latLng = new google.maps.LatLng(mEvent.latLng.lat() - cLatOffset, mEvent.latLng.lng() - cLngOffset);
position = me.getProjection().fromLatLngToDivPixel(mEvent.latLng);
if (cRaiseEnabled) {
me.crossDiv_.style.left = position.x + "px";
me.crossDiv_.style.top = position.y + "px";
me.crossDiv_.style.display = "";
position.y -= cRaiseOffset;
}
me.marker_.setPosition(me.getProjection().fromDivPixelToLatLng(position));
if (cRaiseEnabled) { // Don't raise the veil; this hack needed to make MSIE act properly
me.eventDiv_.style.top = (position.y + cRaiseOffset) + "px";
}
google.maps.event.trigger(me.marker_, "drag", mEvent);
} else {
// Calculate offsets from the click point to the marker position:
cLatOffset = mEvent.latLng.lat() - me.marker_.getPosition().lat();
cLngOffset = mEvent.latLng.lng() - me.marker_.getPosition().lng();
cSavedZIndex = me.marker_.getZIndex();
cStartPosition = me.marker_.getPosition();
cStartCenter = me.marker_.getMap().getCenter();
cRaiseEnabled = me.marker_.get("raiseOnDrag");
cDraggingLabel = true;
me.marker_.setZIndex(1000000); // Moves the marker & label to the foreground during a drag
mEvent.latLng = me.marker_.getPosition();
google.maps.event.trigger(me.marker_, "dragstart", mEvent);
}
}
}),
google.maps.event.addDomListener(document, "keydown", function (e) {
if (cDraggingLabel) {
if (e.keyCode === 27) { // Esc key
cRaiseEnabled = false;
me.marker_.setPosition(cStartPosition);
me.marker_.getMap().setCenter(cStartCenter);
google.maps.event.trigger(document, "mouseup", e);
}
}
}),
google.maps.event.addDomListener(this.labelDiv_, "click", function (e) {
if (me.marker_.getDraggable() || me.marker_.getClickable()) {
if (cIgnoreClick) { // Ignore the click reported when a label drag ends
cIgnoreClick = false;
} else {
google.maps.event.trigger(me.marker_, "click", e);
cAbortEvent(e); // Prevent click from being passed on to map
}
}
}),
google.maps.event.addDomListener(this.labelDiv_, "dblclick", function (e) {
if (me.marker_.getDraggable() || me.marker_.getClickable()) {
google.maps.event.trigger(me.marker_, "dblclick", e);
cAbortEvent(e); // Prevent map zoom when double-clicking on a label
}
}),
google.maps.event.addListener(this.marker_, "dragstart", function (mEvent) {
if (!cDraggingLabel) {
cRaiseEnabled = this.get("raiseOnDrag");
}
}),
google.maps.event.addListener(this.marker_, "drag", function (mEvent) {
if (!cDraggingLabel) {
if (cRaiseEnabled) {
me.setPosition(cRaiseOffset);
// During a drag, the marker's z-index is temporarily set to 1000000 to
// ensure it appears above all other markers. Also set the label's z-index
// to 1000000 (plus or minus 1 depending on whether the label is supposed
// to be above or below the marker).
me.labelDiv_.style.zIndex = 1000000 + (this.get("labelInBackground") ? -1 : +1);
}
}
}),
google.maps.event.addListener(this.marker_, "dragend", function (mEvent) {
if (!cDraggingLabel) {
if (cRaiseEnabled) {
me.setPosition(0); // Also restores z-index of label
}
}
}),
google.maps.event.addListener(this.marker_, "position_changed", function () {
me.setPosition();
}),
google.maps.event.addListener(this.marker_, "zindex_changed", function () {
me.setZIndex();
}),
google.maps.event.addListener(this.marker_, "visible_changed", function () {
me.setVisible();
}),
google.maps.event.addListener(this.marker_, "labelvisible_changed", function () {
me.setVisible();
}),
google.maps.event.addListener(this.marker_, "title_changed", function () {
me.setTitle();
}),
google.maps.event.addListener(this.marker_, "labelcontent_changed", function () {
me.setContent();
}),
google.maps.event.addListener(this.marker_, "labelanchor_changed", function () {
me.setAnchor();
}),
google.maps.event.addListener(this.marker_, "labelclass_changed", function () {
me.setStyles();
}),
google.maps.event.addListener(this.marker_, "labelstyle_changed", function () {
me.setStyles();
})
];
};
/**
* Removes the DIV for the label from the DOM. It also removes all event handlers.
* This method is called automatically when the marker's <code>setMap(null)</code>
* method is called.
* @private
*/
MarkerLabel_.prototype.onRemove = function () {
var i;
this.labelDiv_.parentNode.removeChild(this.labelDiv_);
// Remove event listeners:
for (i = 0; i < this.listeners_.length; i++) {
google.maps.event.removeListener(this.listeners_[i]);
}
};
/**
* Draws the label on the map.
* @private
*/
MarkerLabel_.prototype.draw = function () {
this.setContent();
this.setTitle();
this.setStyles();
};
/**
* Sets the content of the label.
* The content can be plain text or an HTML DOM node.
* @private
*/
MarkerLabel_.prototype.setContent = function () {
var content = this.marker_.get("labelContent");
if (typeof content.nodeType === "undefined") {
this.labelDiv_.innerHTML = content;
} else {
this.labelDiv_.innerHTML = ""; // Remove current content
this.labelDiv_.appendChild(content);
}
};
/**
* Sets the content of the tool tip for the label. It is
* always set to be the same as for the marker itself.
* @private
*/
MarkerLabel_.prototype.setTitle = function () {
this.labelDiv_.title = this.marker_.getTitle() || "";
};
/**
* Sets the style of the label by setting the style sheet and applying
* other specific styles requested.
* @private
*/
MarkerLabel_.prototype.setStyles = function () {
var i, labelStyle;
// Apply style values from the style sheet defined in the labelClass parameter:
this.labelDiv_.className = this.marker_.get("labelClass");
// Clear existing inline style values:
this.labelDiv_.style.cssText = "";
// Apply style values defined in the labelStyle parameter:
labelStyle = this.marker_.get("labelStyle");
for (i in labelStyle) {
if (labelStyle.hasOwnProperty(i)) {
this.labelDiv_.style[i] = labelStyle[i];
}
}
this.setMandatoryStyles();
};
/**
* Sets the mandatory styles to the DIV representing the label as well as to the
* associated event DIV. This includes setting the DIV position, z-index, and visibility.
* @private
*/
MarkerLabel_.prototype.setMandatoryStyles = function () {
this.labelDiv_.style.position = "absolute";
this.labelDiv_.style.overflow = "hidden";
// Make sure the opacity setting causes the desired effect on MSIE:
if (typeof this.labelDiv_.style.opacity !== "undefined" && this.labelDiv_.style.opacity !== "") {
this.labelDiv_.style.MsFilter = "\"progid:DXImageTransform.Microsoft.Alpha(opacity=" + (this.labelDiv_.style.opacity * 100) + ")\"";
this.labelDiv_.style.filter = "alpha(opacity=" + (this.labelDiv_.style.opacity * 100) + ")";
}
this.setAnchor();
this.setPosition(); // This also updates z-index, if necessary.
this.setVisible();
};
/**
* Sets the anchor point of the label.
* @private
*/
MarkerLabel_.prototype.setAnchor = function () {
var anchor = this.marker_.get("labelAnchor");
this.labelDiv_.style.marginLeft = -anchor.x + "px";
this.labelDiv_.style.marginTop = -anchor.y + "px";
};
/**
* Sets the position of the label. The z-index is also updated, if necessary.
* @private
*/
MarkerLabel_.prototype.setPosition = function (yOffset) {
var position = this.getProjection().fromLatLngToDivPixel(this.marker_.getPosition());
if (typeof yOffset === "undefined") {
yOffset = 0;
}
this.labelDiv_.style.left = Math.round(position.x) + "px";
this.labelDiv_.style.top = Math.round(position.y - yOffset) + "px";
this.setZIndex();
};
/**
* Sets the z-index of the label. If the marker's z-index property has not been defined, the z-index
* of the label is set to the vertical coordinate of the label. This is in keeping with the default
* stacking order for Google Maps: markers to the south are in front of markers to the north.
* @private
*/
MarkerLabel_.prototype.setZIndex = function () {
var zAdjust = (this.marker_.get("labelInBackground") ? -1 : +1);
if (typeof this.marker_.getZIndex() === "undefined") {
this.labelDiv_.style.zIndex = parseInt(this.labelDiv_.style.top, 10) + zAdjust;
} else {
this.labelDiv_.style.zIndex = this.marker_.getZIndex() + zAdjust;
}
};
/**
* Sets the visibility of the label. The label is visible only if the marker itself is
* visible (i.e., its visible property is true) and the labelVisible property is true.
* @private
*/
MarkerLabel_.prototype.setVisible = function () {
if (this.marker_.get("labelVisible")) {
this.labelDiv_.style.display = this.marker_.getVisible() ? "block" : "none";
} else {
this.labelDiv_.style.display = "none";
}
};
/**
* @name MarkerWithLabelOptions
* @class This class represents the optional parameter passed to the {@link MarkerWithLabel} constructor.
* The properties available are the same as for <code>google.maps.Marker</code> with the addition
* of the properties listed below. To change any of these additional properties after the labeled
* marker has been created, call <code>google.maps.Marker.set(propertyName, propertyValue)</code>.
* <p>
* When any of these properties changes, a property changed event is fired. The names of these
* events are derived from the name of the property and are of the form <code>propertyname_changed</code>.
* For example, if the content of the label changes, a <code>labelcontent_changed</code> event
* is fired.
* <p>
* @property {string|Node} [labelContent] The content of the label (plain text or an HTML DOM node).
* @property {Point} [labelAnchor] By default, a label is drawn with its anchor point at (0,0) so
* that its top left corner is positioned at the anchor point of the associated marker. Use this
* property to change the anchor point of the label. For example, to center a 50px-wide label
* beneath a marker, specify a <code>labelAnchor</code> of <code>google.maps.Point(25, 0)</code>.
* (Note: x-values increase to the right and y-values increase to the top.)
* @property {string} [labelClass] The name of the CSS class defining the styles for the label.
* Note that style values for <code>position</code>, <code>overflow</code>, <code>top</code>,
* <code>left</code>, <code>zIndex</code>, <code>display</code>, <code>marginLeft</code>, and
* <code>marginTop</code> are ignored; these styles are for internal use only.
* @property {Object} [labelStyle] An object literal whose properties define specific CSS
* style values to be applied to the label. Style values defined here override those that may
* be defined in the <code>labelClass</code> style sheet. If this property is changed after the
* label has been created, all previously set styles (except those defined in the style sheet)
* are removed from the label before the new style values are applied.
* Note that style values for <code>position</code>, <code>overflow</code>, <code>top</code>,
* <code>left</code>, <code>zIndex</code>, <code>display</code>, <code>marginLeft</code>, and
* <code>marginTop</code> are ignored; these styles are for internal use only.
* @property {boolean} [labelInBackground] A flag indicating whether a label that overlaps its
* associated marker should appear in the background (i.e., in a plane below the marker).
* The default is <code>false</code>, which causes the label to appear in the foreground.
* @property {boolean} [labelVisible] A flag indicating whether the label is to be visible.
* The default is <code>true</code>. Note that even if <code>labelVisible</code> is
* <code>true</code>, the label will <i>not</i> be visible unless the associated marker is also
* visible (i.e., unless the marker's <code>visible</code> property is <code>true</code>).
* @property {boolean} [raiseOnDrag] A flag indicating whether the label and marker are to be
* raised when the marker is dragged. The default is <code>true</code>. If a draggable marker is
* being created and a version of Google Maps API earlier than V3.3 is being used, this property
* must be set to <code>false</code>.
* @property {boolean} [optimized] A flag indicating whether rendering is to be optimized for the
* marker. <b>Important: The optimized rendering technique is not supported by MarkerWithLabel,
* so the value of this parameter is always forced to <code>false</code>.
* @property {string} [crossImage="http://maps.gstatic.com/intl/en_us/mapfiles/drag_cross_67_16.png"]
* The URL of the cross image to be displayed while dragging a marker.
* @property {string} [handCursor="http://maps.gstatic.com/intl/en_us/mapfiles/closedhand_8_8.cur"]
* The URL of the cursor to be displayed while dragging a marker.
*/
/**
* Creates a MarkerWithLabel with the options specified in {@link MarkerWithLabelOptions}.
* @constructor
* @param {MarkerWithLabelOptions} [opt_options] The optional parameters.
*/
function MarkerWithLabel(opt_options) {
opt_options = opt_options || {};
opt_options.labelContent = opt_options.labelContent || "";
opt_options.labelAnchor = opt_options.labelAnchor || new google.maps.Point(0, 0);
opt_options.labelClass = opt_options.labelClass || "markerLabels";
opt_options.labelStyle = opt_options.labelStyle || {};
opt_options.labelInBackground = opt_options.labelInBackground || false;
if (typeof opt_options.labelVisible === "undefined") {
opt_options.labelVisible = true;
}
if (typeof opt_options.raiseOnDrag === "undefined") {
opt_options.raiseOnDrag = true;
}
if (typeof opt_options.clickable === "undefined") {
opt_options.clickable = true;
}
if (typeof opt_options.draggable === "undefined") {
opt_options.draggable = false;
}
if (typeof opt_options.optimized === "undefined") {
opt_options.optimized = false;
}
opt_options.crossImage = opt_options.crossImage || "http" + (document.location.protocol === "https:" ? "s" : "") + "://maps.gstatic.com/intl/en_us/mapfiles/drag_cross_67_16.png";
opt_options.handCursor = opt_options.handCursor || "http" + (document.location.protocol === "https:" ? "s" : "") + "://maps.gstatic.com/intl/en_us/mapfiles/closedhand_8_8.cur";
opt_options.optimized = false; // Optimized rendering is not supported
//////////////////////////////////////////////////////////
// New
//////////////////////////////////////////////////////////
this.label = new MarkerLabel_(opt_options.id, this, opt_options.crossImage, opt_options.handCursor); // Bind the label to the marker
// Call the parent constructor. It calls Marker.setValues to initialize, so all
// the new parameters are conveniently saved and can be accessed with get/set.
// Marker.set triggers a property changed event (called "propertyname_changed")
// that the marker label listens for in order to react to state changes.
google.maps.Marker.apply(this, arguments);
}
inherits(MarkerWithLabel, google.maps.Marker);
/**
* Overrides the standard Marker setMap function.
* @param {Map} theMap The map to which the marker is to be added.
* @private
*/
MarkerWithLabel.prototype.setMap = function (theMap) {
// Call the inherited function...
google.maps.Marker.prototype.setMap.apply(this, arguments);
// ... then deal with the label:
this.label.setMap(theMap);
};
If you would like to compress/minify/uglify the code, I recommend https://javascript-minifier.com/
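With the patched file loaded, usage looks just like the question's original call; the id now ends up on the label's div, so it can be targeted from CSS, the DOM, or jQuery. A sketch (assumes the Google Maps API and the patched markerwithlabel.js are already loaded; point, map, and image_count come from the question's code):

```javascript
var marker = new MarkerWithLabel({
  position: point,
  map: map,
  labelContent: image_count,
  labelClass: "labels",
  id: "markerId"  // with the patch, this becomes the label div's id
});

// The label is a plain div, so normal DOM selection now works:
var labelDiv = document.getElementById("markerId");
labelDiv.style.border = "1px solid red";
```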
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20853678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Calling the contents of a cell inside a SUMIF function I have a budgeting spreadsheet where I am trying to get the sum of cells in col A if the string in Col B is a certain keyword. For example, one of my formulas is =SUMIF(D13:D36,"Restaurant",C13:C36), so if any of cells in Col D are Restaurant, it takes the amount in Col C and adds them up. I have about 10 of the same formula, each with a unique string/keyword. This function works fine.
My problem is when I add a lot of entries and my columns get longer than the code specifies, I need to go into each formula separately and update the cells (eg, change C13:C36 to C13:C45, but 10 times).
If I enter "C13" into a cell, for this example T1, is it possible to call the contents in cell T1 within the SUMIF function? So the function would look something like =SUMIF(D13:D36,"Restaurant",CELL(T1):C36). I know the CELL function doesn't work here but is there something that could?
What I am trying to do is write the start and ending cells somewhere in my sheet then call them inside the SUMIF function, so if I need to change them later I only need to update 4 cells rather than 10+.
A: First, as stated in the comments, there is no cost to using full column references:
=SUMIF(D:D,"Restaurant",C:C)
Now it does not matter how large the data set gets.
But if one wants to limit the range using other cells, I would use INDEX instead of INDIRECT, since INDIRECT is volatile (this only works in Excel):
=SUMIF(INDEX(D:D,T1):INDEX(D:D,T2),"Restaurant",INDEX(C:C,T1):INDEX(C:C,T2))
Where Cell T1 holds the start row and T2 holds the end row.
A: you could use simple QUERY like:
=QUERY({C13:D}, "select Col2, sum(Col1)
where Col2 matches 'Restaurant'
group by Col2
label sum(Col1)''", 0)
or for the whole group:
=QUERY({C13:D}, "select Col2, sum(Col1)
where Col2 is not null
group by Col2
label sum(Col1)''", 0)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56138997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Vue: what is the difference of having a root component vs. just put components in a div root? I don't understand the difference in approaches of loading up an initial Vue instance. Right now, I do the following in my app:
new Vue({
el: '#app',
components: {
// component names
},
data() {
return {
}
}
});
index.html
<div id="app">
<div>
<my-component1></my-component1>
<my-component2></my-component2>
</div>
</div>
but then how does this approach differ? I know it would load the App component within the html <div id="app"></div> tag, but what is the practical difference?
new Vue({
render: h => h(App)
}).$mount('#app')
A: One of the primary benefits of offloading the base component to the App component (your 2nd example, with the render function) is to clearly separate the processes of app instantiation / entry from the details of the base component.
For smaller projects, or when using the CDN, this might not seem necessary. But in larger projects with Vue CLI, main.js can become lengthy and it becomes increasingly difficult to combine both the app instantiation and the root component into one file.
Without that separation, main.js would serve a double purpose of both loading the app and creating a component.
Generally speaking, it's good practice to separate unrelated functionality in projects into separate files for easier maintenance and collaboration, and better clarity.
A: If neither render function nor template option is present, the in-DOM HTML of the mounting DOM element will be extracted as the template. In this case, Runtime + Compiler build of Vue should be used.
---- https://v2.vuejs.org/v2/api/index.html#el
Your 1st example needs a compiler to compile the HTML at runtime.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66346236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Do both a CloseableHttpClient and a CloseableHttpResponse need to be closed explicitly? I have the following method, which issues an HTTP POST request.
It returns a CloseableHttpResponse so any code calling it can obtain status code, response body etc.
I'm trying to understand: do both the
*
*CloseableHttpClient client
*CloseableHttpResponse closeableHttpResponse
need to be closed? Or does closing one close the other?
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.impl.client.CloseableHttpClient;
public static CloseableHttpResponse post(final String restEndpoint, final String data, final Header[] headers) throws Exception {
final URIBuilder uriBuilder = new URIBuilder(restEndpoint);
final HttpPost httpPost = new HttpPost(uriBuilder.build());
httpPost.setHeader("Accept", "application/json");
httpPost.setHeader("Cache-Control", "no-cache, no-store");
if (data != null) {
httpPost.setEntity(new StringEntity(data));
}
if (headers != null) {
for (Header header : headers) {
httpPost.setHeader(header);
}
}
final CloseableHttpClient client = HttpClients.custom()
.setSSLSocketFactory(createSSLFactory())
.setSSLHostnameVerifier(new NoopHostnameVerifier())
.build();
final CloseableHttpResponse closeableHttpResponse = client.execute(httpPost);
final int statusCode = closeableHttpResponse.getStatusLine().getStatusCode();
logger.debug(Optional.empty(), statusCode, httpPost.toString());
return closeableHttpResponse;
}
A: I think the CloseableHttpResponse will need to be closed manually for every individual request.
There are a couple ways of doing that.
With a try/catch/finally block:
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import java.io.IOException;
public class CloseableHttpClientWithTryCatchFinally {
public static void main(String... args) throws Exception {
URIBuilder uriBuilder = new URIBuilder("https://www.google.com/");
HttpGet httpGet = new HttpGet(uriBuilder.build());
CloseableHttpClient client = HttpClients.custom().build();
CloseableHttpResponse response = null;
try {
response = client.execute(httpGet);
response.getEntity().writeTo(System.out);
} catch (IOException e) {
System.out.println("Exception: " + e);
e.printStackTrace();
} finally {
if (response != null) {
response.close();
}
}
}
}
I think a better answer is to use the try-with-resources statement:
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import java.io.IOException;
public class CloseableHttpClientTryWithResources {
public static void main(String... args) throws Exception {
URIBuilder uriBuilder = new URIBuilder("https://www.google.com/");
HttpGet httpGet = new HttpGet(uriBuilder.build());
CloseableHttpClient client = HttpClients.custom().build();
try (CloseableHttpResponse response = client.execute(httpGet)) {
response.getEntity().writeTo(System.out);
} catch (IOException e) {
System.out.println("Exception: " + e);
e.printStackTrace();
}
}
}
In general, I have seen people just create one CloseableHttpClient for their application and then just reuse that instance throughout their app. If you are only going to be using one, or a few instances, over and over, then no, I don't think you should have to close them. However, if you are going to be creating new instances of CloseableHttpClient over and over, then yes, you will need to close them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58070374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: add space to top and bottom of ListView so top and bottom item can be centered My ListView items need to be centered when they are selected, but the top and bottom items stop at the top or bottom of the list. How do I center the top and bottom items?
Here's my test layout:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/parent"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:clipChildren="false"
android:clipToPadding="false"
android:gravity="top" >
<ListView
android:id="@+id/listview"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerVertical="true"
android:clipChildren="false"
android:clipToPadding="false" >
</ListView>
</RelativeLayout>
A: I couldn't find a solution to the clipping issue so I went with adding a header and footer to the ListView. I changed my ListView height back to match_parent and then, using the post method on my ListView's parent view, I was able to calculate the height of my header and footer like so:
// this call should be included in the onCreate method
// the post method is called after parentView is ready to be added
// so the parentView's height and width can be used
parentView.post(addSpace(parentView, myListView));
private Runnable addSpace(final RelativeLayout parent, final ListView list) {
return new Runnable(){
public void run() {
// create list adapter here or in the onCreate method
// R.layout.list_item refers to my custom xml layout file
adapter = new ArrayAdapter<String>(MainActivity.this, R.layout.list_item, values);
// calculate size and add header and footer BEFORE applying adapter
// listItemHeight was calculated previously
spacerSize = (parent.getHeight()/2) - (listItemHeight/2);
list.addHeaderView(getSpacer(), null, false);
list.addFooterView(getSpacer(), null, false);
list.setAdapter(adapter);
}
};
}
// I used a function to create my spacers
private LinearLayout getSpacer(){
// I used padding instead of LayoutParams thinking it would be easier to
// change in the future. It wasn't, but still made resizing my spacer easy
LinearLayout spacer = new LinearLayout(MainActivity.this);
spacer.setOrientation(LinearLayout.HORIZONTAL);
spacer.setPadding(parentView.getWidth(), spacerSize, 0, 0);
return spacer;
}
Using this method I was able to add the proper spacing to the top and bottom of my ListView so when I scrolled all the way to the top the top item appeared in the center of my view.
PROBLEM: My app allows the user to hide the bottom interface element which should make my parent much taller. I am working out a way to make my header and footer taller dynamically but I may need to simply remove the ListView and recreate it with a larger header and footer.
I hope this helps someone from hours of frustration :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13812695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to pass the dynamic column names in a stored procedure --Drop the procedure if it exists
drop proc if exists test_sp
use testdb
go
create procedure test_sp
(@metric varchar(50) = NULL,
@from_date date = NULL,
@to_date date = NULL)
as
begin
set nocount on;
--declaring the column name type
declare @column_name decimal(25,2)
--specifying the column name based on the metric provided
if @metric = 'Sales'
begin
@column_name = 'sales_value'
end
else
begin
@column_name = 'revenue_value'
end
--sum the value based on the metric
select sum(@column_name)
from <dataset>
where metric = @metric
end
-- execute the procedure
exec test_sp @metric = 'sales'
A: As an alternative to dynamic sql you can use a case expression. This will make the entirety of your procedure this simple.
create procedure test_sp
(
@metric varchar(50) = NULL
,@from_date date =NULL
,@to_date date =Null
)
AS
BEGIN
SET NOCOUNT ON;
select sum(case when @metric = 'Sales' then sales_value else revenue_value end) from <dataset> where metric = @metric
END
A: You need dynamic SQL here. You can't use a variable for an object (table name, column name, etc) without it.
...
declare @sql varchar(max)
--sum the value based on the metric
set @sql = 'select sum(' + @column_name + ') from <dataset> where metric = ''' + @metric + ''''
print(@sql) --this is what will be executed when you uncomment the command below
--exec (@sql)
end
--execute the procedure
exec test_sp @metric ='sales'
But, you could eliminate it all together... and shorten your steps
use testdb
go
create procedure test_sp
(
@metric varchar(50) = NULL
,@from_date date =NULL
,@to_date date =Null
)
AS
BEGIN
SET NOCOUNT ON;
--specifying the column name based on the metric provided
if @metric = 'Sales'
begin
select sum(sales_value)
from yourTable
where metric = @metric --is this really needed?
end
else
begin
select sum(revenue_value)
from yourTable
where metric = @metric --is this really needed?
end
END
Also, not sure what @from_date and @to_date are for since you don't use them in your procedure. I imagine they are going in the WHERE clause eventually.
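The CASE-based approach from the first answer can be sanity-checked with SQLite through Python's standard library. This is only an illustrative sketch: the table, column, and metric names stand in for the question's `<dataset>`.

```python
import sqlite3

# In-memory table standing in for <dataset>; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dataset (metric TEXT, sales_value REAL, revenue_value REAL)")
conn.executemany(
    "INSERT INTO dataset VALUES (?, ?, ?)",
    [("Sales", 10.0, 99.0), ("Sales", 20.0, 99.0), ("Revenue", 99.0, 5.0)],
)

def sum_metric(metric):
    # CASE picks the column based on the parameter, so no dynamic SQL is needed.
    row = conn.execute(
        """SELECT SUM(CASE WHEN ? = 'Sales' THEN sales_value
                           ELSE revenue_value END)
           FROM dataset WHERE metric = ?""",
        (metric, metric),
    ).fetchone()
    return row[0]

print(sum_metric("Sales"))    # 30.0
print(sum_metric("Revenue"))  # 5.0
```

The same CASE expression works in T-SQL; only the parameter placeholder syntax differs.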
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50992077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Effective way to generate random strings up to 100MB for test in PHP? For testing purposes I need a function like this:
/**
* @param int $sizeInBytes
*
* @returns string with random data
*/
function randomData($sizeInBytes)
{
...
}
Any ideas for an effective implementation? There is a need for speed but not for real randomness (more a kind of "lorem ipsum"). My simplest idea would be to use a really large file in the filesystem and fetch the required size by stream. But this needs at least a 100MB file. Is there a better way?
A: If you are in a Unix environment, you can use the file /dev/random to pull as many megabytes as you want from it.
A: How about just creating a very long string if you have the memory available anyways?
That should not take all that long :)
$x = str_repeat(
'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque sollicitudin turpis ut augue lacinia at ullamcorper dolor condimentum. Nunc elementum suscipit laoreet. Phasellus vel sem justo, a vulputate arcu. Sed rutrum elit nec elit lobortis ultrices. Quisque elit nulla, rutrum et varius sit amet, pulvinar eget purus. Aliquam erat volutpat. Fusce turpis lectus, vestibulum sed ornare sed, facilisis sit amet lacus. Nunc lobortis posuere ultricies. Phasellus aliquet cursus gravida. Curabitur eu erat ac augue rutrum mattis. Suspendisse sit amet urna nec velit commodo feugiat. Maecenas vulputate dictum diam, eu tempor erat volutpat in. Donec id nulla tortor, nec iaculis nibh. Pellentesque scelerisque nisl sit amet ligula dictum commodo. Donec porta mi in lorem porttitor id suscipit lacus auctor.',
125000
);
You could of course just write that to a file one but creating it in memory doesn't really take all that long.
The code above produces a 98MB string in about 100ms, and creating a 200MB string takes about 170ms on my box. That should be good enough for most cases.
As noted in the comment below: you might have to change your php.ini settings if you limit the amount of memory your script is allowed to consume (or change it via ini_set('memory_limit', ...)). Also, strings > 1.5GB might cause issues, but that's not a concern here I'd say.
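The repeat-and-truncate idea translates directly to other languages. A minimal sketch in Python (the function name is just illustrative; exact-size truncation is added, since plain repetition only gives whole multiples of the chunk):

```python
def filler_data(size_in_bytes):
    # Repeat a fixed chunk and truncate to the exact requested size;
    # fast, deterministic "lorem ipsum"-style data, not real randomness.
    chunk = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
    reps = -(-size_in_bytes // len(chunk))  # ceiling division
    return (chunk * reps)[:size_in_bytes]

data = filler_data(10_000_000)  # ~10MB, built in a fraction of a second
print(len(data))  # 10000000
```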
A: Well, if you want random text, then you can use a dictionary with the words and another dictionary with the punctuation, and then generate the return string with random elements from the word dictionary, with a certain probability of random elements from the punctuation dictionary.
This way you only need the dictionaries in memory, but it will be heavier on the server's CPU.
You can also use this method in conjunction with what you proposed, having a small dictionary of sentences, and randomly selecting sentences or even paragraphs.
A: Why not use an actual lipsum generator script, such as this one?
A: On Linux you could read from /dev/random. I'm not fully sure, but you could use http://php.net/manual/en/function.fseek.php to read a certain amount, or use http://php.net/manual/en/function.exec.php to create the file and then read it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8702100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Get data from stdclass Object - Count from mysql I searched the internet for over 3 hours for how to get data from a stdClass object, and none of the solutions I found worked.
So what I have is simple mysql query
$park = $wpdb->get_row("SELECT COUNT(1) FROM wp_richreviews WHERE review_status='1'");
And then print it
if($park)
{
print_r($park);
}
Then it will show this
stdClass Object ( [COUNT(1)] => 2 )
But I want to receive just "2" and not the stdClass object wrapper.
Can anyone help me please? Thank you!
A: Change your query to:
$park = $wpdb->get_row("SELECT COUNT(1) as count
FROM wp_richreviews WHERE review_status='1'");
and get count value with something like $park['count'];
A: You have made life a little difficult for yourself by not giving the result column a nice, easily accessible name.
If you change your query so the column has a known name, like this:
$park = $wpdb->get_row("SELECT COUNT(1) as count
FROM wp_richreviews
WHERE review_status='1'");
then you have a nice easily accessible property called count
echo $park->count;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36337453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Need Help Generating NULL Entries for Months with No Orders I have to create output that shows all fields from a table (Tbl) and create an additional column that calculates Cumulative Sum for each customer by month, (eg if a customer has two sales in April, the new column will have the Sum of those sales and any prior sales on both rows). That much I can do.
My issue is generating rows every month for every client even when they have no sales, and still having the cumulative column show the previous month's cumulative sum correctly.
Desired output:
Picture Link
Customer_ID Order_ID Order_Date Order_Amt_Total_USD Month_ID Cum_Total_By_Month
John 123 4/4/2019 30 Jun-19 120
John 124 4/12/2019 90 Jun-19 120
Mark null null null Jun-19 0
Sally 150 4/20/2019 50 Jun-19 50
John null null null Jul-19 120
Mark 165 7/7/2019 80 Jul-19 170
Mark 166 7/7/2019 90 Jul-19 170
Sally 160 7/5/2019 75 Jul-19 125
John null null null Aug-19 120
Mark null null null Aug-19 170
Sally null null null Aug-19 125
I'll list the code below, but this is a link to a SQL fiddle with sample data and the two queries of the pieces that i have working (with help from you wonderful people on this site). http://sqlfiddle.com/#!15/1d86b/11
I can generate the desired Cumulative running sum by customer and month using the first query.
I can also generate a base table that gives me a month_id for every customer for every month in the second query.
I need help making the combination of those two that will generate the desired output with the null rows for when months/Customers don't have any sales.
Any ideas? Thanks!
-- Generates cumulative total by month by Customer, but only shows when they have a sale
SELECT
Customer_ID, Order_Date, order_id, Order_Amt_Total_USD,
to_char(date_trunc('month', Order_Date), 'Mon YYYY') AS mon_text,
(Select
sum(Order_Amt_Total_USD)
FROM tbl t2
WHERE t2.Customer_ID = t.Customer_ID
AND date_trunc('month', t2.Order_Date) <= t.Order_Date ) AS Cumulative
FROM tbl t
GROUP BY mon_text, Customer_ID, Order_Date, order_id, Order_Amt_Total_USD
ORDER BY date_trunc('month', Order_Date), Customer_ID, Order_Date
;
-- Generates Proper List of All Month IDs for each Customer from entered date through today
WITH temp AS (
SELECT date_trunc('month', Order_Date) AS mon_id
FROM tbl
)
Select
Customer_ID,
to_char(mon_id, 'Mon YYYY') AS mon_text
From tbl,
generate_series('2015-01-01'::date, now(), interval '1 month') mon_id
LEFT JOIN temp USING (mon_id)
GROUP BY mon_id,Customer_ID
;
A: Based on your description, you can combine window functions with generate_series():
SELECT c.Customer_ID, mon,
SUM(Order_Amt_Total_USD) as month_total,
SUM(SUM(Order_Amt_Total_USD)) OVER (PARTITION BY c.Customer_ID ORDER BY mon) as running_total
FROM (SELECT DISTINCT Customer_Id FROM tbl) c CROSS JOIN
generate_series('2015-01-01'::date, now(), interval '1 month') mon LEFT JOIN
tbl t
ON t.Customer_Id = c.customer_id and
date_trunc('month', t.Order_Date) = mon
GROUP BY c.Customer_ID, mon
ORDER BY 1, 2;
Here is a SQL Fiddle.
A: Sample below shows how you can use partition by to achieve the output-
Schema
CREATE TABLE tbl (Order_ID varchar(10), Customer_ID varchar(10), Order_Date date, Order_Amt_Total_USD bigint);
INSERT INTO tbl (Order_ID, Customer_ID, Order_Date, Order_Amt_Total_USD)
VALUES
('100', 'qwe', '2015-08-04', 6),
('101', 'qwe', '2015-05-20', 7),
('102', 'qwe', '2015-04-08', 8),
('103', 'qwe', '2015-04-07', 9),
('109', 'aaa', '2015-04-28', 1),
('110', 'aaa', '2015-04-28', 2),
('111', 'aaa', '2015-05-19', 3),
('112', 'aaa', '2015-08-06', 4),
('113', 'aaa', '2015-08-27', 5),
('114', 'aaa', '2015-08-07', 6)
Query
select Order_Date , Customer_ID, Order_Amt_Total_USD,
sum(Order_Amt_Total_USD) over (partition by Customer_ID order by Order_Date) as cumulative
from tbl;
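The window-function pattern from this answer can be tried out with SQLite through Python's standard library (SQLite needs 3.25+ for window functions; Python 3.8+ bundles a new enough version). The data below is a subset of the sample schema above:

```python
import sqlite3

# Tiny in-memory copy of the sample schema from the answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (Order_ID TEXT, Customer_ID TEXT, "
             "Order_Date TEXT, Order_Amt_Total_USD INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?)", [
    ("109", "aaa", "2015-04-28", 1),
    ("110", "aaa", "2015-04-28", 2),
    ("111", "aaa", "2015-05-19", 3),
    ("100", "qwe", "2015-08-04", 6),
])

rows = conn.execute("""
    SELECT Order_Date, Customer_ID, Order_Amt_Total_USD,
           SUM(Order_Amt_Total_USD) OVER (
               PARTITION BY Customer_ID ORDER BY Order_Date
           ) AS cumulative
    FROM tbl
    ORDER BY Customer_ID, Order_Date, Order_ID
""").fetchall()

# Note: with the default RANGE frame, rows sharing an Order_Date (peers)
# get the same running total -- both 2015-04-28 rows show cumulative 3.
for row in rows:
    print(row)
```

The peer-rows behavior is the same in PostgreSQL, since SUM() OVER (ORDER BY ...) defaults to a RANGE frame there too.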
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62602359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Service Fabric - Main Method not called I am building a Service and I want to deploy it to my local 1-Node-Cluster. As soon as I publish my Service to the local cluster, the cluster fails with the following error:
Partitions - Error - Unhealthy partitions: 100% (1/1), MaxPercentUnhealthyPartitionsPerService=0%.
Partition - Error - Unhealthy partition: PartitionId='...', AggregatedHealthState='Error'.
Event - Error - Error event: SourceId='System.FM', Property='State'. Partition is below target replica or instance count. fabric:/MyFabric/MyFabricService 1 1 ...
(Showing 0 out of 0 replicas. Total up replicas: 0.)
The strange thing is this: I set a breakpoint at the very beginning of my Main-Method, and it never triggers, i.e. the Main Method of my Service is never called.
I read that comparable issues might show up when you have too little memory or disk space available. The system I am running on has 10 GB RAM and 80 GB of free disk space, so I think this cannot be the issue here.
Any idea what might be wrong? Ideas how to fix it?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40419227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Does someone know the difference between xavier_normal_ and kaiming_normal_? Like the title says: does someone know the difference between xavier_normal_ and kaiming_normal_? Is the only difference that Xavier has a 'gain' argument that Kaiming lacks?
A: Read the documentation:
xavier_normal_
Fills the input Tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. & Bengio, Y. (2010), using a normal distribution.
kaiming_normal_
Fills the input Tensor with values according to the method described in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. (2015), using a normal distribution.
The given equations are completely different. For more details you'll have to read those papers.
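Concretely, the difference shows up in the standard deviation each scheme gives the normal distribution. A sketch of the formulas as documented for PyTorch's init functions (fan_in/fan_out are the layer's input/output sizes; the function names here are just illustrative):

```python
import math

def xavier_normal_std(fan_in, fan_out, gain=1.0):
    # Glorot & Bengio (2010): std = gain * sqrt(2 / (fan_in + fan_out))
    return gain * math.sqrt(2.0 / (fan_in + fan_out))

def kaiming_normal_std(fan_in, a=0.0):
    # He et al. (2015), fan_in mode: std = gain / sqrt(fan_in),
    # where gain = sqrt(2 / (1 + a^2)) for leaky_relu (a=0 gives plain ReLU).
    gain = math.sqrt(2.0 / (1.0 + a * a))
    return gain / math.sqrt(fan_in)

print(xavier_normal_std(100, 100))  # 0.1
print(kaiming_normal_std(100))      # 0.1414...
```

So Xavier balances fan_in and fan_out with an explicit gain parameter, while Kaiming uses only fan_in (or fan_out, depending on mode) and derives its gain from the nonlinearity.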
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53881908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to define a piecewise map in sage I want to define in Sage the following
map. I've attempted to use the following code in Sage:
gamma=function('gamma')
gamma(t)=piecewise([(t>0,(t,0,e^(-1/t^2))),(t==0,(0,0,0)),(t<0,(t,e^(-1/t^2),0))])
This, however, gives me the error TypeError: unable to convert (t, 0, e^(-1/t^2)) to a symbolic expression. How could I change it to create such a type of map?
A: It seems piecewise does not support vector-valued functions.
Possible workaround: define each coordinate as a piecewise function.
sage: gamma(t) = (t,
....: piecewise([(t <= 0, 0), (t > 0, exp(-t^-2))]),
....: piecewise([(t < 0, exp(-t^-2)), (t > 0, 0)]))
....:
sage: gamma
t |--> (t,
piecewise(t|-->0 on (-oo, 0], t|-->e^(-1/t^2) on (0, +oo); t),
piecewise(t|-->e^(-1/t^2) on (-oo, 0), t|-->0 on (0, +oo); t))
sage: parametric_plot(gamma, (t, -1, 1))
Launched html viewer for Graphics3d Object
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71728771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: YouTube API Search v3 - Start index? I'm using the YouTube Search API to grab 5 videos at a time under a specific keyword. But I've tried and tried and couldn't find the parameter for the start index. Does anyone know how to add it, so it gets the next 5 videos, etc.?
Current URL I have:
https://www.googleapis.com/youtube/v3/search?part=snippet&q=wiz+khalifa&type=video&key=AIzaSyB9UW36sMDA9rja_J0ynSYVcNY4G25
A: In the results of your first query, you should get back a nextPageToken field.
When you make the request for the next page, you must send this value as the pageToken.
So you need to add pageToken=XXXXX to your query, where XXXXX is the value you received in nextPageToken.
Hope this helps
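Putting that together, pagination is just a matter of appending the token to the next request. A minimal sketch in Python (the API key and query are placeholders, and the pageToken value would come from the previous response's nextPageToken field):

```python
from urllib.parse import urlencode

BASE = "https://www.googleapis.com/youtube/v3/search"

def search_url(query, api_key, max_results=5, page_token=None):
    # Build the search URL; pass page_token from the previous
    # response's nextPageToken to fetch the next page of results.
    params = {
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": api_key,
    }
    if page_token:
        params["pageToken"] = page_token
    return BASE + "?" + urlencode(params)

first = search_url("wiz khalifa", "YOUR_API_KEY")
next_page = search_url("wiz khalifa", "YOUR_API_KEY", page_token="CAUQAA")
```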
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23449577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Angular 8: "could not find implementation for builder" when running ng serve I just installed Angular 8. When I try to run a project with "ng serve --open" it gives me the following error:
Could not find the implementation for builder @angular-devkit/build-angular:dev-server
Error: Could not find the implementation for builder @angular-devkit/build-angular:dev-server
at WorkspaceNodeModulesArchitectHost.resolveBuilder (C:\Users\manus\project\space-estate\webshop\frontend\client\node_modules\@angular\cli\node_modules\@angular-devkit\architect\node\node-modules-architect-host.js:49:19)
at ServeCommand.initialize (C:\Users\manus\project\space-estate\webshop\frontend\client\node_modules\@angular\cli\models\architect-command.js:72:63)
at process._tickCallback (internal/process/next_tick.js:68:7)
at Function.Module.runMain (internal/modules/cjs/loader.js:744:11)
at startup (internal/bootstrap/node.js:285:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:739:3)
when I check the versions it gives me the following:
Angular CLI: 8.0.1
Node: 10.13.0
OS: win32 x64
Angular: 8.0.0
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.13.9
@angular-devkit/build-angular 0.13.9
@angular-devkit/build-optimizer 0.13.9
@angular-devkit/build-webpack 0.13.9
@angular-devkit/core 8.0.1
@angular-devkit/schematics 8.0.1 (cli-only)
@ngtools/webpack 7.3.9
@schematics/angular 8.0.1 (cli-only)
@schematics/update 0.800.1 (cli-only)
rxjs 6.5.2
typescript 3.4.5
webpack 4.29.0
A: use this command
npm install --save-dev @angular-devkit/build-angular
A: If you are using Angular 8, you should ensure your Angular packages are safely updated to the current stable version by running the following command
ng update
Otherwise, you can try to manually update the @angular/cli and core framework package manually.
ng update @angular/cli @angular/core
A: It seems to be an issue with @angular-devkit/build-angular, try downgrading to a specific version
npm i @angular-devkit/build-angular@0.803.24
This version worked for me.
A: I solved that problem with the command below. Hopefully, it will work for you too:
npm i --save-dev @angular-devkit/build-angular@latest
A: I followed these steps and it worked for me; give it a try.
Delete the current project if it's a new one, or back up your project.
Update the Angular CLI globally:
npm i -g @angular/cli@8.0.2
Then create a new project:
ng new name
ng serve
That's it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56357007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do I calculate the height of the visible part? I have an html page like the following model;
when the user is scrolling the page, how do I calculate the height of the visible part of #page1?
-------------------------------------------------------
#heder (position: fixed; height: 100px, z-index: 10)
-------------------------------------------------------
visible zone
____________________________________________________
| |
| #page1 (visible) |
| |
| (position: static; height: 1000px, z-Index: 0) |
| |
-------------------------------------------------------
#footer (position: fixed; height: 50px, z-index: 10)
-------------------------------------------------------
| |
| |
| |
| |
| #page1 (invisible) |
| |
| |
| |
| |
____________________________________________________
| |
| |
| |
| #page2 (invisible) |
| |
| (position: static; height: 700px, z-Index: 0) |
| |
| |
| |
| |
| |
____________________________________________________
A: You can get the height of the outermost window by using window.top in jQuery. The height of window.top gives the height of the browser window, or of the frame containing the page.
$(window.top).height();
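For the actual question (the visible height of #page1 between the fixed header and footer), the overlap can be computed from the element's viewport position and the bar heights. A sketch in plain JavaScript; in a real page, elemTop/elemBottom would come from el.getBoundingClientRect(), and viewportH from window.innerHeight — here they are passed in as plain numbers:

```javascript
// Visible height of an element between a fixed header and footer.
// elemTop/elemBottom are viewport coordinates (e.g. from
// el.getBoundingClientRect()); headerH/footerH are the fixed bars' heights.
function visibleHeight(elemTop, elemBottom, viewportH, headerH, footerH) {
  const zoneTop = headerH;                 // visible zone starts below the header
  const zoneBottom = viewportH - footerH;  // and ends above the footer
  const overlap = Math.min(elemBottom, zoneBottom) - Math.max(elemTop, zoneTop);
  return Math.max(0, overlap);             // clamp: no overlap means 0, not negative
}

// With the layout in the question: 100px header, 50px footer, a 600px-tall
// viewport, and #page1 spanning 100..1100 in viewport coordinates:
console.log(visibleHeight(100, 1100, 600, 100, 50)); // 450
```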
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15846610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Divide list items by height when list items surpass UL height I'm trying to create slides from dynamic content. Basically, let's say I have a <ul> which is 300x100. I float the list items, and when the list items surpass the height of the <ul>, I want to wrap those list items in a div so that I can have x number of list items divided by height.
I'm trying something like this but I don't know if I am going in the right direction or where to go from here.
CSS:
<style>
ul {
height: 100px;
width: 300px;
}
ul li {
float: left;
}
</style>
HTML:
<ul>
<li>first</li>
<li>ffgggfs</li>
<li>sffsfsf</li>
<li>jgjghjgfj</li>
<li>trtretert</li>
<li>ghhfhfhgf</li>
<li>sdfsdfsdf</li>
<li>fghjjh</li>
<li>iuyuiy</li>
<li>cvcvc</li>
<li>hgjhjg</li>
<li>tryryre</li>
<li>kkhjkhjk</li>
<li>sdfsdfsdf</li>
<li>khjkhjk</li>
<li>adfsfafsaf</li>
<li>syuuyuyu</li>
<li>sweeerre</li>
<li>last</li>
</ul>
jQuery:
<script>
var sumHeight = 0;
$('ul li').filter(function() {
var $this = $(this),
pHeight = $this.parent().height(); // parent inner height
sumHeight += $this.outerHeight(true); // + block outer height
return sumHeight < pHeight;
}).insertAfter('ul').wrapAll('<div></div>');
</script>
A: Ok, I've not given you exactly what you asked for but it can be modified as you see fit.
In this FIDDLE I use divs instead of ul/li, and I think they can be styled in any way you want.
There is a holder div, into which are added holders for each of your individual divs.
By changing maxheight, you can control how many lines go into each of your "slides".
var maxheight = 100;
var divlistnum = 0;
$('.holder').append("<div class='" + "slidelist" + divlistnum + "'></div>");
$('#clickme').click(function(){
$(".slidelist" + divlistnum ).append("<div class='littlelist'>Stuff to Add</div>");
if( $(".slidelist" + divlistnum).height() > maxheight )
{
divlistnum = divlistnum + 1;
$('.holder').append("<div class='" + "slidelist" + divlistnum + "'></div>");
}
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22016373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: python code to reverse an array without using the function My task is to create an empty array and take input from the user, but I have to print the input elements in reversed order without using the built-in reverse function, storing the result in an array itself.
x=int(input('how many cars do you have'))
a=[]
for i in range(x):
car=(input('enter your car name'))
a.append(car)
print(a)
y=[]
for i in range(length(a)-1,-1,-1):
y.append(a[i])
print (y)
Why am I getting a repeated reversed-array output with this code? Can anyone please tell me what's wrong with it?
A: Use slicing on any sequence to reverse it:
print(a[::-1])
A: If you're looking for a method from scratch without using any built-in function, this one would do:
arr = [1,2,3,5,6,8]
for i in range(len(arr)-1):
arr.append(arr[-2-i])
del(arr[-3-i])
print(arr)
# [8, 6, 5, 3, 2, 1]
A: Do you want to reverse your list?
a = ['car1', 'car2', 'car3']
a.reverse()
print(a)
['car3', 'car2', 'car1']
A: You should use slicing. For example,
a = ['1','2','3']
print(a[::-1])
will print ['3','2','1']. I hope this will work for you.
Happy Learning!
A: Your code works once length() is replaced with len(), which is the built-in used to get the length of a list. (The repeated printing of a comes from the print(a) call inside your input loop; move it outside the loop if you only want the final list.)
x=int(input('how many cars do you have'))
a=[]
for i in range(x):
car=(input('enter your car name'))
a.append(car)
print(a)
y=[]
for i in range(len(a)-1,-1,-1):
y.append(a[i])
print(y)
A: Simple Custom way (without using built-in functions) to reverse a list:
def rev_list(mylist):
max_index = len(mylist) - 1
return [ mylist[max_index - index] for index in range(len(mylist)) ]
rev_list([1,2,3,4,5])
#Outputs: [5,4,3,2,1]
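For completeness, the usual from-scratch approach is an in-place two-pointer swap, which avoids both the built-in reverse helpers and building a second list:

```python
def reverse_in_place(arr):
    # Swap symmetric pairs, walking inward from both ends.
    i, j = 0, len(arr) - 1
    while i < j:
        arr[i], arr[j] = arr[j], arr[i]
        i += 1
        j -= 1
    return arr

print(reverse_in_place([1, 2, 3, 5, 6, 8]))  # [8, 6, 5, 3, 2, 1]
```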
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68554930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Reading a JSON file which contains enumeration and class separately both using c# I have a json file.
{
"Enumerations" : {
"IndianCities" : [
{ "key" : "Delhi", "val" : 001},
{ "key" : "Mumbai", "val" : 002},
],
.
.
},
"Users":[
{
"AccessType" : "Admin",
"Male" : [
{
"Name" : "Xyz",
"ID" : 459,
},
{
"Name" : "Abc",
"ID" : 542,
}
],
"Female" : [
{
"Name" : "Abc",
"ID" : 543,
}
]
},
{
"Location" : "NewYork",
"Male" : [
{
"Name" : "Xyz",
"ID" : 460,
},
{
"Name" : "Abc",
"ID" : 642,
}
],
"Female" : [
{
"Name" : "Abc",
"ID" : 643,
}
]
},
]
Here, I want to store all enum data (both key and val info) in a list. Also, from the Users class, I want to store info of all male users from all nested objects. I wrote two entries here ("Admin" and "NewYork"), but in my JSON I have around 100 of them. Is there any generic solution available for reading male info all classes and put in a list.
A: Welcome to Stackoverflow!
In the future, please make sure the code you post with a question has no unrelated errors; if it does have errors, the question should be about fixing them.
Your JSON is not valid JSON.
Step 1
Fix your JSON. Here is the corrected version of your JSON:
{
"Enumerations" : {
"IndianCities" : [
{ "key" : "Delhi", "val" : 001},
{ "key" : "Mumbai", "val" : 002}
]
},
"Users":[
{
"AccessType" : "Admin",
"Male" : [
{
"Name" : "Xyz",
"ID" : 459
},
{
"Name" : "Abc",
"ID" : 542
}
],
"Female" : [
{
"Name" : "Abc",
"ID" : 543
}
]
},
{
"Location" : "NewYork",
"Male" : [
{
"Name" : "Xyz",
"ID" : 460
},
{
"Name" : "Abc",
"ID" : 642
}
],
"Female" : [
{
"Name" : "Abc",
"ID" : 643
}
]
}
]
}
Step 2
As I mentioned in my comment, refer to the answer here to create C# classes to represent your JSON. I did it, and it generated this:
public class Rootobject
{
public Enumerations Enumerations { get; set; }
public User[] Users { get; set; }
}
public class Enumerations
{
public Indiancity[] IndianCities { get; set; }
}
public class Indiancity
{
public string key { get; set; }
public int val { get; set; }
}
public class User
{
public string AccessType { get; set; }
public Male[] Male { get; set; }
public Female[] Female { get; set; }
public string Location { get; set; }
}
public class Male
{
public string Name { get; set; }
public int ID { get; set; }
}
public class Female
{
public string Name { get; set; }
public int ID { get; set; }
}
Step 3
Is there any generic solution available for reading male info all classes and put in a list.
Yes! But what you really mean is:
Is there any generic solution available for reading all males into a list?
Use the Newtonsoft.Json library to deserialize the JSON content. Note that each User instance has a Male array, so you need to use SelectMany to flatten the data, as shown below:
Rootobject ro = JsonConvert.DeserializeObject<Rootobject>("YourJSON");
List<Male> males = ro.Users.SelectMany(x => x.Male).ToList();
You can apply further filtering if you require it.
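For a quick sanity check of the data handling without a C# toolchain, the same enumeration extraction and SelectMany-style flattening can be sketched in Python (the json_text below is an abridged, strictly valid copy of the Step 1 document):

```python
import json

# Abridged copy of the corrected JSON from Step 1.
json_text = """
{
  "Enumerations": {
    "IndianCities": [
      {"key": "Delhi", "val": 1},
      {"key": "Mumbai", "val": 2}
    ]
  },
  "Users": [
    {"AccessType": "Admin",
     "Male": [{"Name": "Xyz", "ID": 459}, {"Name": "Abc", "ID": 542}],
     "Female": [{"Name": "Abc", "ID": 543}]},
    {"Location": "NewYork",
     "Male": [{"Name": "Xyz", "ID": 460}, {"Name": "Abc", "ID": 642}],
     "Female": [{"Name": "Abc", "ID": 643}]}
  ]
}
"""

root = json.loads(json_text)

# All enumeration entries (key and val info) in one list.
cities = root["Enumerations"]["IndianCities"]

# Flatten every user's Male array -- the step SelectMany performs in C#.
males = [m for user in root["Users"] for m in user["Male"]]

print([m["ID"] for m in males])  # [459, 542, 460, 642]
```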
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46263861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Temp JSON data - Swift So I am trying to create a temp JSON depending on whether a listener is tuned into a podcast or a radio station.
Because we already have all the podcast info, we don't need to ask a remote JSON for it. But the View still needs to show the cover, artist and title while the MusicPlayer is playing.
So my thinking was to create a simple JSON struct inside the MusicPlayer class, populated when the media player sends the data to the following:
func getArtBoard(artist: String, song: String, cover: String) {
guard let url = URL(string: cover) else { return }
getData(from: url) { [weak self] image in
guard let self = self,
let downloadedImage = image else {
return
}
let artwork = MPMediaItemArtwork.init(boundsSize: downloadedImage.size, requestHandler: { _ -> UIImage in
return downloadedImage
})
self.nowplaying(with: artwork, artist: artist, song: song)
}
}
It would also be saved temporarily to the following struct, which is inside the
class MusicPlayer {
struct NowPlaying: Codable, Identifiable {
var id: ObjectIdentifier
var artist : String
var song : String
var cover : String
}
//more code here
}
However I am getting two errors
*
*Type 'MusicPlayer.NowPlaying' does not conform to protocol 'Decodable'
*Type 'MusicPlayer.NowPlaying' does not conform to protocol 'Encodable'
Questions:
*
*How do I make it conform, while still being able to call it via Music.NowPlaying?
*Is this the best way to do it, or can I access the MPMediaItemPropertyTitle in View?
The complete code.
import Foundation
import AVFoundation
import MediaPlayer
import AVKit
class MusicPlayer {
static let shared = MusicPlayer()
static var mediatype = ""
struct NowPlaying: Codable, Identifiable {
var id: ObjectIdentifier
var artist : String
var song : String
var cover : String
}
var player: AVPlayer?
let playerViewController = AVPlayerViewController()
func gettype(completion: @escaping (String) -> Void){
completion(MusicPlayer.mediatype)
}
func getNowPlayingView(completion: @escaping (String) -> Void){
completion(MusicPlayer.mediatype)
}
func startBackgroundMusic(url: String, type:String) {
MusicPlayer.mediatype = String(type)
//let urlString = "http://stream.radiomedia.com.au:8003/stream"
let urlString = url
guard let url = URL.init(string: urlString) else { return }
let playerItem = AVPlayerItem.init(url: url)
player = AVPlayer.init(playerItem: playerItem)
do {
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.duckOthers, .defaultToSpeaker, .mixWithOthers, .allowAirPlay])
print("Playback OK")
// let defaults = UserDefaults.standard
// defaults.set("1", forKey: defaultsKeys.musicplayer_connected)
try AVAudioSession.sharedInstance().setActive(true)
print("Session is Active")
} catch {
// let defaults = UserDefaults.standard
// defaults.set("0", forKey: defaultsKeys.musicplayer_connected)
print(error)
}
#if targetEnvironment(simulator)
self.playerViewController.player = player
self.playerViewController.player?.play()
print("SIMULATOR")
#else
self.setupRemoteTransportControls()
player?.play()
#endif
}
func startBackgroundMusicTwo() {
let urlString = "http://stream.radiomedia.com.au:8003/stream"
//let urlString = url
guard let url = URL.init(string: urlString) else { return }
let playerItem = AVPlayerItem.init(url: url)
player = AVPlayer.init(playerItem: playerItem)
do {
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.duckOthers, .defaultToSpeaker, .mixWithOthers, .allowAirPlay])
print("Playback OK")
// let defaults = UserDefaults.standard
// defaults.set("1", forKey: defaultsKeys.musicplayer_connected)
try AVAudioSession.sharedInstance().setActive(true)
print("Session is Active")
} catch {
// let defaults = UserDefaults.standard
// defaults.set("0", forKey: defaultsKeys.musicplayer_connected)
print(error)
}
#if targetEnvironment(simulator)
self.playerViewController.player = player
self.playerViewController.player?.play()
print("SIMULATOR")
#else
self.setupRemoteTransportControls()
player?.play()
#endif
}
func setupRemoteTransportControls() {
// Get the shared MPRemoteCommandCenter
let commandCenter = MPRemoteCommandCenter.shared()
// Add handler for Play Command
commandCenter.playCommand.addTarget { [unowned self] event in
if self.player?.rate == 0.0 {
self.player?.play()
return .success
}
return .commandFailed
}
// Add handler for Pause Command
commandCenter.pauseCommand.addTarget { [unowned self] event in
if self.player?.rate == 1.0 {
self.player?.pause()
return .success
}
return .commandFailed
}
// self.nowplaying(artist: "Anna", song: "test")
}
func nowplaying(with artwork: MPMediaItemArtwork, artist: String, song: String){
MPNowPlayingInfoCenter.default().nowPlayingInfo = [
MPMediaItemPropertyTitle:song,
MPMediaItemPropertyArtist:artist,
MPMediaItemPropertyArtwork: artwork,
MPNowPlayingInfoPropertyIsLiveStream: true
]
// self.getArtBoard();
}
func setupNowPlayingInfo(with artwork: MPMediaItemArtwork) {
MPNowPlayingInfoCenter.default().nowPlayingInfo = [
// MPMediaItemPropertyTitle: "Some name",
// MPMediaItemPropertyArtist: "Some name",
MPMediaItemPropertyArtwork: artwork,
//MPMediaItemPropertyPlaybackDuration: CMTimeGetSeconds(currentItem.duration),
//MPNowPlayingInfoPropertyPlaybackRate: 1,
//MPNowPlayingInfoPropertyElapsedPlaybackTime: CMTimeGetSeconds(currentItem.currentTime())
]
}
func getData(from url: URL, completion: @escaping (UIImage?) -> Void) {
URLSession.shared.dataTask(with: url, completionHandler: {(data, response, error) in
if let data = data {
completion(UIImage(data:data))
}
})
.resume()
}
func getArtBoard(artist: String, song: String, cover: String) {
guard let url = URL(string: cover) else { return }
getData(from: url) { [weak self] image in
guard let self = self,
let downloadedImage = image else {
return
}
let artwork = MPMediaItemArtwork.init(boundsSize: downloadedImage.size, requestHandler: { _ -> UIImage in
return downloadedImage
})
self.nowplaying(with: artwork, artist: artist, song: song)
}
}
func stopBackgroundMusic() {
guard let player = player else { return }
player.pause()
}
}
UPDATE
Got the above error solved thanks to the comment below.
However, I am now having issues encoding the data.
Type 'MusicPlayer.NowPlayingData.Type' cannot conform to 'Encodable'; only struct/enum/class types can conform to protocols
The code I used. - https://developer.apple.com/documentation/foundation/jsonencoder
let encoder = JSONEncoder()
encoder.outputFormatting = .prettyPrinted
let data = try encoder.encode(NowPlayingData)
print(String(data: data, encoding: .utf8)!)
To write it I placed
func getArtBoard(artist: String, song: String, cover: String) {
MusicPlayer.NowPlayingData(artist: artist, song: song, cover: cover)
}
The data I am trying to encode:
func getArtBoard(artist: String, song: String, cover: String) {
//MusicPlayer.NowPlayingData(artist: artist, song: song, cover: cover)
let pear = "{'artist':\(artist), 'song': \(song), 'cover':\(cover)}"
let encoder = JSONEncoder()
encoder.outputFormatting = .prettyPrinted
MusicPlayer.JN = try encoder.encode(pear)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63542607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: python send blockchain module with socket programming I am a beginner in Python. I created a blockchain, made three transactions, and generated hashes. I want to transfer a hash through socket programming, but on the client side I receive nothing. How can I receive those hashes on the client side? Can anyone please tell me what I am doing wrong?
Server Side
import socket
import pandas as pd
import pickle
import hashlib
df = pd.read_csv(r"C:\Users\DELL\OneDrive\Desktop\mythesisdataset.csv",
names=[ 'SBP', 'DBP', 'HEARTRATE', "Temperature" ])
normal_df = (df [ (df.SBP > 120) & (df.DBP > 90) & (df.HEARTRATE < 100) & (df [ 'Temperature' ] < 100) ])
class Block:
def __init__( self, previous_block_hash, transaction_list ):
self.previous_block_hash = previous_block_hash
self.transaction_list = transaction_list
self.block_data = f"{' - '.join(transaction_list)} - {previous_block_hash}"
self.block_hash = hashlib.sha256(self.block_data.encode()).hexdigest()
class Blockchain:
def __init__( self ):
self.chain = [ ]
self.generate_genesis_block()
def generate_genesis_block( self ):
self.chain.append(Block("0", [ 'Genesis Block' ]))
def create_block_from_transaction( self, transaction_list ):
previous_block_hash = self.last_block.block_hash
self.chain.append(Block(previous_block_hash, transaction_list))
def display_chain( self ):
for i in range(len(self.chain)):
print(f"Hash {i + 1}: {self.chain [ i ].block_hash}\n")
@property
def last_block( self ):
return self.chain [ -1 ]
t1 = "abcdergfhiklm"
t2 = "abcdefghijklmnopqrstuvwxyz"
t3 = normal_df
myblockchain = Blockchain()
myblockchain.create_block_from_transaction(t1)
myblockchain.create_block_from_transaction(t2)
myblockchain.create_block_from_transaction(t3)
myblockchain.display_chain()
while True:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostname()
port = 5007
s.bind((host, port))
print("host name:", host, " socket name:", socket)
s.listen()
c, addr = s.accept()
# print('Got connection from', addr, '...')
df2 = pickle.dumps(print(myblockchain.display_chain()))
c.send(df2)
c.close() # Close the connection
Client side
import socket
import pickle
# Create a # socket (SOCK_STREAM means a TCP socket)
HOST = 'localhost'
PORT = 50007
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
# Connect to server and send data
sock.connect((socket.gethostname(), 5007))
# Receive data from the server and shut down
data = (sock.recv(4096))
df3 = pickle.loads(data)
print(df3)
finally:
sock.close()
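Two details stand out in the code above: the client defines PORT = 50007 but actually connects to 5007, and the server pickles the return value of print(), which is always None. The second point is easy to verify; a minimal sketch with a stand-in for display_chain():

```python
import pickle

def display_chain():
    """Stand-in for myblockchain.display_chain(): it prints and returns None."""
    print("Hash 1: 9f86d08...")

# What the server actually sends: the return value of the call, which is None --
# not the text that was printed to the server's own console.
payload = pickle.dumps(display_chain())
assert pickle.loads(payload) is None

# To transfer the hashes, pickle the data itself instead, for example:
hashes = ["9f86d08...", "e3b0c44..."]   # stand-in for [b.block_hash for b in chain]
payload = pickle.dumps(hashes)
assert pickle.loads(payload) == hashes
```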
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72279663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Firebase dynamic link (console created) support custom query parameters using ActionCodeSettings and Password Reset? Is there a way to pass custom query parameters in Firebase Dynamic Links that are created using the console?
My workflow is as follows:
*
*Reset Password screen that takes email and executes sendPasswordResetEmail(email, settings)
*Go to email and select link which opens app to screen to reset password: https://xxxxx.page.link?link=https://xxxx-00000.firebaseapp.com/__/auth/action?apiKey%3DAIzaSyDxTJUhYNwbMpoRhRWde74tAqV0CMKHh_o%26mode%3DresetPassword%26oobCode%3DccgIWg7D-FPtRTp2OXon8UaIB1AL0_qpktnAL--P-eMAAAFsgjDmkw%26continueUrl%3Dhttps://example.com/%26lang%3Den&apn=com.example&amv
*App is launched and we go to the screen to enter a new password.
*Call FirebaseDynamicLinks.getInstance().getDynamicLink(getIntent())
*Use pendingDynamicLinkData.getLink().getQueryParameter("oobCode") to get query parameters from the dynamic link (in this case our password code that we use to reset the password).
*Use the oobCode from the previous step and call FirebaseAuth.getInstance().confirmPasswordReset(actionCode, password) on button click (grabbing the password from a user-inputted EditText field) to reset the password.
Ideally I would like to login my user after the password reset. In order to do this I need to have the email address used in the password reset (I need to get firestore document information from the user trying to login).
So I need to be able to pass the email address to the screen to reset the password.
Here are my relevant code snippets:
Initial "Forgot password screen": User enters email here.
First build my action code settings, then execute the sendPasswordResetEmail() method.
String url = "https://example.com"; //my deep link set in Firebase console
ActionCodeSettings settings = ActionCodeSettings.newBuilder()
.setAndroidPackageName(
getPackageName(),
true, /* install if not available? */
null /* minimum app version */)
.setHandleCodeInApp(true)
.setUrl(url)
.build();
mAuth.sendPasswordResetEmail(email, settings)
.addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if (task.isSuccessful()) {
Log.d(TAG, "Email sent.");
}
else {
Exception e = task.getException();
Log.w(TAG, "passwordResetRequest:failure " + e.getMessage(), task.getException());
if (e instanceof FirebaseAuthInvalidCredentialsException) {
}
}
}
});
Then, after selecting the link in the email, we go to the reset password screen.
In here we call the following to get parameters from the dynamic link:
FirebaseDynamicLinks.getInstance()
.getDynamicLink(getIntent())
.addOnSuccessListener(this, new OnSuccessListener<PendingDynamicLinkData>() {
@Override
public void onSuccess(PendingDynamicLinkData pendingDynamicLinkData) {
// Get deep link from result (may be null if no link is found)
Uri deepLink = null;
if (pendingDynamicLinkData != null && pendingDynamicLinkData.getLink() != null) {
deepLink = pendingDynamicLinkData.getLink();
actionCode = deepLink.getQueryParameter("oobCode");
actionMode = deepLink.getQueryParameter("mode");
}
}
})
.addOnFailureListener(this, new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
Log.w(TAG, "getDynamicLink:onFailure", e);
}
});
Finally we use the oobCode (action code to permit a password reset), and on button click and user input for a new password, we reset the password using:
resetPass.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (actionCode != null && !actionCode.equals(""))
FirebaseAuth.getInstance().confirmPasswordReset(actionCode, newPassword.getText().toString()).addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if (task.isSuccessful() && task.getResult() != null)
Log.d(TAG, "Deep Link confirmPassReset: " + task.getResult().toString());
}
});
}
});
Here's where I'm struggling. I'm trying to pass an email address when building the ActionCodeSettings. For example:
String url = "https://example.com/?email=jsmith@gmail.com"; And then try to get them in the reset password screen using:
deepLink.getQueryParameter("email");
But every time I try this I keep getting null. What am I missing? Is it even possible to pass custom query parameters with a dynamic link that was created in the Firebase console? If not, what is the best way to accomplish this?
Any help would be greatly appreciated. Thanks in advance!
A: You need to get the continueUrl link first:
val actionUrl = deepLink.getQueryParameter("continueUrl")
And get the email value with substring:
actionUrl.substring(actionUrl.lastIndexOf("=") + 1, actionUrl.length)
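The substring approach breaks as soon as the continue URL gains a second parameter; parsing the query strings properly is more robust. The same extraction, sketched in Python with urllib.parse for illustration (the deep_link value is a hypothetical stand-in for the link the app receives):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical stand-in for the received deep link; continueUrl carries the
# custom email parameter set via setUrl() in ActionCodeSettings.
deep_link = ("https://xxxx-00000.firebaseapp.com/__/auth/action"
             "?mode=resetPassword&oobCode=SOMECODE"
             "&continueUrl=https%3A%2F%2Fexample.com%2F%3Femail%3Djsmith%40gmail.com")

# First pull continueUrl out of the outer link (parse_qs percent-decodes it)...
outer = parse_qs(urlparse(deep_link).query)
continue_url = outer["continueUrl"][0]

# ...then parse the continue URL's own query string for the email parameter.
inner = parse_qs(urlparse(continue_url).query)
email = inner["email"][0]

print(email)  # jsmith@gmail.com
```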
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57454123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Change select option on scroll I am using Waypoints (github.com/imakewebthings/waypoints) to change the selected nav option as you scroll down the page. However, I am switching to a select dropdown for the nav on mobile. I'm wondering if it's possible to use Waypoints to change the selected option of my select as I scroll past these sections.
Here is the function changing my nav on desktop
function getRelatedNavigation(el){
return $('.desktopnav ul li a[href=#'+$(el).attr('id')+']');
}
A: What you want to do is check each element's y position in the document and return the last one you have scrolled past:
function getRelatedNavigation() {
    var elements = $('.anchor');
    var currentScroll = $(document).scrollTop();
    var current = null;
    // .each() does not propagate return values, so track the match in a variable
    elements.each(function(index, value) {
        // position() is a method; .top gives the vertical offset
        if (currentScroll > $(value).position().top) current = $(value);
    });
    return current;
}
$('.desktopnav ul li a[href="#'+getRelatedNavigation().attr('id')+'"]').addClass('selected').siblings().removeClass('selected');
$('#mobileselect').children().removeAttr('selected');
$('#mobileselect').children('[value="'+getRelatedNavigation().attr('id')+'"]').attr('selected','selected');
I don't know what labeling you are using for your waypoints or anchors. In the example above I marked them with class='anchor',
so
// Navigation
<ul class='desktopnav'>
<li><a href="#something">something</a></li>
<li><a href="#other">other</a></li>
</ul>
//// Ways down
<div id='something' class='anchor'>some content</div>
<div id='other' class='anchor'>other content</div>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30265242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Move/copy folders by groups I have a folder containing subfolders named *_1, *_2, *_3, *_4 ... *_1000.
Then there is another set of folders named: Destination_folder1, Destination_folder2, Destination_folder3, ....Destination_folder10.
I would like to move (or copy) the subfolders in groups of 100 into the Destination_folders, so that Destination_folder1 contains subfolders *_1 to *_100, Destination_folder2 contains *_101 to *_200, and so on. I tried to use:
for i in {1..100}
do
cp -r *_$((i)) Destination_folder$i/
done
but unfortunately the folders are not copied in groups; instead they are copied
individually. Can anyone help me, please?
Best regards
A: Use a second loop (remove the word echo when you are happy):
for i in {1..10}; do
for j in {1..100}; do
(( dir = 100 * (i - 1) + j ))
echo cp -r *_$((dir)) Destination_folder${i}/
done
done
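The arithmetic (( dir = 100 * (i - 1) + j )) maps destination i to source folders (i-1)*100+1 through i*100. Inverting it gives the destination for any folder number, which is a handy way to sanity-check the grouping (sketched in Python):

```python
def destination_for(folder_number, group_size=100):
    """Destination_folder index that subfolder *_<folder_number> belongs to."""
    return (folder_number - 1) // group_size + 1

# Folders 1..100 go to Destination_folder1, 101..200 to Destination_folder2, ...
assert destination_for(1) == 1
assert destination_for(100) == 1
assert destination_for(101) == 2
assert destination_for(1000) == 10
```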
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31114443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how to show images in grid view? I have a database which contains a list of image paths. My problem is that I have to show all the images in a grid view.
I have the list of paths from the database, but I am not able to show all the images in a grid layout.
Please help me.
Thanks
A: Set the GridView's adapter with setAdapter(), as you normally would for a custom ListView, by extending BaseAdapter and passing the ArrayList of paths to the adapter class.
Now in the getView() method of the adapter add the following lines:
Bitmap mBitmap = BitmapFactory.decodeFile(path);
mView.setImageBitmapReset(mBitmap, 0, true);
where mView is an ImageView and path is fetched from the ArrayList that you pass to the adapter.
A: public class MainActivity extends Activity {
String[] prgmNameList = {"Grid_1", "Grid_1", "Grid_1"};
int[] prgmImages = new int[] {R.drawable.ic_launcher, R.drawable.ic_launcher, R.drawable.ic_launcher};
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Gridimageadapter gpa = new Gridimageadapter(MainActivity.this, prgmNameList, prgmImages);
GridView gridView = (GridView) findViewById(R.id.gridView1);
gridView.setAdapter(gpa);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5977334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Why am I able to reference an outer queries columns from within a subquery? I would expect that if I write a subquery then the columns available in the
main query would be unavailable in the subquery. This doesn't seem to be
the case and I don't understand why.
Take the following example tables and query:
CREATE TABLE test_1 (
col_a TEXT
);
CREATE TABLE test_2 (
col_b TEXT
);
INSERT INTO test_1
VALUES
('Bob'),
('Tim')
;
INSERT INTO test_2
VALUES
('Sam'),
('Tim')
;
SELECT
col_a
FROM
test_1
WHERE col_a IN (SELECT col_a FROM test_2);
In the subquery SELECT col_a FROM test_2 I would expect to get an error because col_a doesn't exist in the table test_2. Instead the subquery returns the contents of col_a from test_1.
The output I get is:
col_a
-------
Bob
Tim
I'm running the following version of PostgreSQL:
PostgreSQL 9.5.13 on x86_64-pc-linux-gnu
A: When you have multiple tables in a query, always use qualified column references. You think the query is doing:
SELECT t1.col_a
FROM test_1 t1
WHERE t1.col_a IN (SELECT t2.col_a FROM test_2 t2);
This would generate an error, because t2.col_a does not exist.
However, the scoping rules for subqueries say that if the column is not in the subquery, look in the outer query. So, if t2.col_a does not exist, then the query turns into:
SELECT t1.col_a
FROM test_1 t1
WHERE t1.col_a IN (SELECT t1.col_a FROM test_2 t2);
The solution is to qualify all column references so there is no ambiguity.
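SQLite applies the same scoping rule, so the behaviour is easy to reproduce without a PostgreSQL instance; a sketch using Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE test_1 (col_a TEXT);
    CREATE TABLE test_2 (col_b TEXT);
    INSERT INTO test_1 VALUES ('Bob'), ('Tim');
    INSERT INTO test_2 VALUES ('Sam'), ('Tim');
""")

# test_2 has no col_a, so the unqualified col_a resolves to the outer test_1:
# the subquery becomes correlated and IN is true for every row of test_1.
rows = con.execute(
    "SELECT col_a FROM test_1 WHERE col_a IN (SELECT col_a FROM test_2)"
).fetchall()
print(rows)  # [('Bob',), ('Tim',)]

# Qualifying every column reference surfaces the mistake immediately.
try:
    con.execute("SELECT t1.col_a FROM test_1 t1 "
                "WHERE t1.col_a IN (SELECT t2.col_a FROM test_2 t2)")
except sqlite3.OperationalError as exc:
    print(exc)  # e.g. "no such column: t2.col_a"
```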
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51794116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: OPC UA Server failed to start due to exception: Cannot access certificate private key My OPC UA Server fails to start due to a "Cannot access certificate private key" exception. This really confuses me, because I'm using AnonymousUserTokenPolicy. The last time I ran the server, everything was OK. I didn't change the config file or the relevant code. I've spent hours searching on OPCFoundation with no success. Could anyone help me? Thanks in advance.
This is my config file:
<?xml version="1.0" encoding="utf-8"?>
<ApplicationConfiguration
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ua="http://opcfoundation.org/UA/2008/02/Types.xsd"
xmlns="http://opcfoundation.org/UA/SDK/Configuration.xsd">
<ApplicationName>RoboBar.TestServer</ApplicationName>
<ApplicationUri>urn:localhost:RoboBar.TestServer</ApplicationUri>
<ProductUri>https://github.com/testbedCIIRC/RoboBar-Server</ProductUri>
<ApplicationType>Server_0</ApplicationType>
<SecurityConfiguration>
<!-- Where the application instance certificate is stored (MachineDefault) -->
<ApplicationCertificate>
<StoreType>Directory</StoreType>
<StorePath>%CommonApplicationData%\OPC Foundation\CertificateStores\MachineDefault</StorePath>
<SubjectName>RoboBar.TestServer</SubjectName>
</ApplicationCertificate>
<!-- Where the issuer certificate are stored (certificate authorities) -->
<TrustedIssuerCertificates>
<StoreType>Directory</StoreType>
<StorePath>%CommonApplicationData%\OPC Foundation\CertificateStores\UA Certificate Authorities</StorePath>
</TrustedIssuerCertificates>
<!-- Where the trust list is stored (UA Applications) -->
<TrustedPeerCertificates>
<StoreType>Directory</StoreType>
<StorePath>%CommonApplicationData%\OPC Foundation\CertificateStores\UA Applications</StorePath>
</TrustedPeerCertificates>
<!-- The directory used to store invalid certificates for later review by the administrator. -->
<RejectedCertificateStore>
<StoreType>Directory</StoreType>
<StorePath>%CommonApplicationData%\OPC Foundation\CertificateStores\RejectedCertificates</StorePath>
</RejectedCertificateStore>
<!-- WARNING: The following setting (to automatically accept untrusted certificates) should be used
for easy debugging purposes ONLY and turned off for production deployments! -->
<AutoAcceptUntrustedCertificates>true</AutoAcceptUntrustedCertificates>
</SecurityConfiguration>
<TransportConfigurations></TransportConfigurations>
<TransportQuotas>
<OperationTimeout>600000</OperationTimeout>
<MaxStringLength>1048576</MaxStringLength>
<MaxByteStringLength>1048576</MaxByteStringLength>
<MaxArrayLength>65535</MaxArrayLength>
<MaxMessageSize>4194304</MaxMessageSize>
<MaxBufferSize>65535</MaxBufferSize>
<ChannelLifetime>300000</ChannelLifetime>
<SecurityTokenLifetime>3600000</SecurityTokenLifetime>
</TransportQuotas>
<ServerConfiguration>
<BaseAddresses>
<ua:String>opc.tcp://localhost:26543/RoboBar.TestServer</ua:String>
</BaseAddresses>
<SecurityPolicies>
<ServerSecurityPolicy>
<SecurityMode>None_1</SecurityMode>
<SecurityPolicyUri>http://opcfoundation.org/UA/SecurityPolicy#None</SecurityPolicyUri>
<SecurityLevel>0</SecurityLevel>
</ServerSecurityPolicy>
<ServerSecurityPolicy>
<SecurityMode>SignAndEncrypt_3</SecurityMode>
<SecurityPolicyUri>http://opcfoundation.org/UA/SecurityPolicy#Basic256</SecurityPolicyUri>
<SecurityLevel>5</SecurityLevel>
</ServerSecurityPolicy>
<ServerSecurityPolicy>
<SecurityMode>SignAndEncrypt_3</SecurityMode>
<SecurityPolicyUri>http://opcfoundation.org/UA/SecurityPolicy#Basic256Sha256</SecurityPolicyUri>
<SecurityLevel>6</SecurityLevel>
</ServerSecurityPolicy>
</SecurityPolicies>
<MinRequestThreadCount>5</MinRequestThreadCount>
<MaxRequestThreadCount>100</MaxRequestThreadCount>
<MaxQueuedRequestCount>2000</MaxQueuedRequestCount>
<!-- The SDK expects the server to support the same set of user tokens for every endpoint. -->
<UserTokenPolicies>
<!-- Allows anonymous users -->
<ua:UserTokenPolicy>
<ua:TokenType>Anonymous_0</ua:TokenType>
<ua:SecurityPolicyUri>http://opcfoundation.org/UA/SecurityPolicy#None</ua:SecurityPolicyUri>
</ua:UserTokenPolicy>
<!-- Allows username/password -->
<!-- passwords must be encrypted - this specifies what algorithm to use -->
<!--<ua:UserTokenPolicy>
<ua:TokenType>UserName_1</ua:TokenType>
<ua:SecurityPolicyUri>http://opcfoundation.org/UA/SecurityPolicy#Basic256</ua:SecurityPolicyUri>
</ua:UserTokenPolicy>-->
</UserTokenPolicies>
<DiagnosticsEnabled>true</DiagnosticsEnabled>
<MaxSessionCount>100</MaxSessionCount>
<MinSessionTimeout>10000</MinSessionTimeout>
<MaxSessionTimeout>3600000</MaxSessionTimeout>
<MaxBrowseContinuationPoints>10</MaxBrowseContinuationPoints>
<MaxQueryContinuationPoints>10</MaxQueryContinuationPoints>
<MaxHistoryContinuationPoints>100</MaxHistoryContinuationPoints>
<MaxRequestAge>600000</MaxRequestAge>
<MinPublishingInterval>100</MinPublishingInterval>
<MaxPublishingInterval>3600000</MaxPublishingInterval>
<PublishingResolution>50</PublishingResolution>
<MaxSubscriptionLifetime>3600000</MaxSubscriptionLifetime>
<MaxMessageQueueSize>100</MaxMessageQueueSize>
<MaxNotificationQueueSize>100</MaxNotificationQueueSize>
<MaxNotificationsPerPublish>1000</MaxNotificationsPerPublish>
<MinMetadataSamplingInterval>1000</MinMetadataSamplingInterval>
<AvailableSamplingRates>
<SamplingRateGroup>
<Start>5</Start>
<Increment>5</Increment>
<Count>20</Count>
</SamplingRateGroup>
<SamplingRateGroup>
<Start>100</Start>
<Increment>100</Increment>
<Count>4</Count>
</SamplingRateGroup>
<SamplingRateGroup>
<Start>500</Start>
<Increment>250</Increment>
<Count>2</Count>
</SamplingRateGroup>
<SamplingRateGroup>
<Start>1000</Start>
<Increment>500</Increment>
<Count>20</Count>
</SamplingRateGroup>
</AvailableSamplingRates>
<MaxRegistrationInterval>0</MaxRegistrationInterval>
<NodeManagerSaveFile>RoboBar.TestServer.nodes.xml</NodeManagerSaveFile>
</ServerConfiguration>
<TraceConfiguration>
<OutputFilePath>Logs\RoboBar.TestServer.log.txt</OutputFilePath>
<DeleteOnLoad>true</DeleteOnLoad>
<!-- Show Only Errors -->
<!-- <TraceMasks>1</TraceMasks> -->
<!-- Show Only Security and Errors -->
<!-- <TraceMasks>513</TraceMasks> -->
<!-- Show Only Security, Errors and Trace -->
<!-- <TraceMasks>515</TraceMasks> -->
<!-- Show Only Security, COM Calls, Errors and Trace -->
<!-- <TraceMasks>771</TraceMasks> -->
<!-- Show Only Security, Service Calls, Errors and Trace -->
<!-- <TraceMasks>523</TraceMasks> -->
<!-- Show Only Security, ServiceResultExceptions, Errors and Trace -->
<!-- <TraceMasks>519</TraceMasks> -->
</TraceConfiguration>
</ApplicationConfiguration>
The method I use to start the server (it runs in a BackgroundWorker):
private void InitServer()
{
ApplicationInstance app = new ApplicationInstance
{
ApplicationName = "RoboBar.TestServer",
ApplicationType = ApplicationType.Server,
ConfigSectionName = "RoboBar.TestServer"
};
try
{
var configuration = app.LoadApplicationConfiguration(false);
var certificate = app.CheckApplicationInstanceCertificate(false, 0);
certificate.Wait();
var certOK = certificate.Result;
if (!certOK)
{
throw new Exception("Application instance certificate invalid!");
}
var startServer = app.Start(new RoboBarServer());
startServer.Wait();
}
catch (Exception ex)
{
string text = "Exception: " + ex.Message;
if (ex.InnerException != null)
{
text += "\r\nInner exception: ";
text += ex.InnerException.Message;
}
MessageBox.Show(text, app.ApplicationName);
}
Server = app.Server as StandardServer;
Configuration = app.ApplicationConfiguration;
}
Any help would be appreciated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64496924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Explain analyze: Total time spent executing a task. Documentation error or my error? I think I found a mistake, and a possible correction, in the Postgres documentation regarding explain plans.
From: https://www.postgresql.org/docs/current/using-explain.html
Index Scan using tenk2_unique2 on tenk2 t2 (cost=0.29..7.91 rows=1 width=244) (actual time=0.021..0.022 rows=1 loops=10)
"In the above example, we spent a total of 0.220 milliseconds executing the index scans on tenk2."
The docs would seem to indicate Actual Total Time * Actual Loops = total time spent on an operation.
However, from a JSON plan I produced:
"Plans": [
{
"Node Type": "Hash Join",
"Parent Relationship": "Outer",
"Parallel Aware": false,
"Join Type": "Inner",
"Startup Cost": 66575.34,
"Total Cost": 76861.82,
"Plan Rows": 407,
"Plan Width": 290,
"Actual Startup Time": 49962.789,
"Actual Total Time": 51206.643,
"Actual Rows": 127117,
"Actual Loops": 3,
"Output": [ ... ],
...
"Execution Time": 52677.398
(The complete plan is here.)
Actual Total Time * Actual Loops = 51 sec * 3 = 2 min 33 sec clearly exceeds the Execution Time of 52.7 seconds.
Am I understanding the documentation correctly?
If so, shouldn't it say, "we spent a total of 0.01 milliseconds executing the index scans on tenk2"?
A: Your Hash Join is underneath a Gather node:
Gather (cost=67,575.34..77,959.52 rows=977 width=290) (actual time=51,264.085..52,595.474 rows=381,352 loops=1)
Buffers: shared hit=611279 read=99386
-> Hash Join (cost=66,575.34..76,861.82 rows=407 width=290) (actual time=49,962.789..51,206.643 rows=127,117 loops=3)
Buffers: shared hit=611279 read=99386
That means that the query started two background workers, which ran in parallel with the main backend to complete the hash join (see "Workers Launched": 2 in the execution plan).
Now it is evident that if three processes work on a task concurrently, the elapsed time will not be the sum of the individual execution times.
In other words, the rule about multiplying execution time by the number of loops holds for a nested loop join (which is single-threaded), but not for parallel execution of a query.
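In numbers: Actual Total Time times loops gives the time summed across the cooperating processes, not elapsed wall-clock time, which is why it can exceed Execution Time. A quick check with the figures from the plan above:

```python
# Figures taken from the Hash Join node and the plan footer above.
actual_total_time_ms = 51206.643   # per-process time for the Hash Join node
loops = 3                          # leader process + 2 parallel workers
execution_time_ms = 52677.398      # wall-clock time for the whole query

# Multiplying by loops sums the time spent by all three processes...
summed_ms = actual_total_time_ms * loops   # ~153.6 seconds of combined work

# ...which can legitimately exceed the wall clock when the work ran in parallel.
assert summed_ms > execution_time_ms

# Each individual process, however, fits inside the wall-clock time.
assert actual_total_time_ms < execution_time_ms
```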
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55329262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Load data from CSV file (iPhone SDK) Does anyone know how to load data from a CSV file?
For the code example,
CPTestAppScatterPlotController.m
that I downloaded from core-plot Google website, a line graph can be plotted based on randomly
generated initial data x, y.
// Add some initial data
NSMutableArray *contentArray = [NSMutableArray arrayWithCapacity:100];
NSUInteger i;
for ( i = 0; i < 60; i++ ) {
id x = [NSNumber numberWithFloat:1+i*0.05];
id y = [NSNumber numberWithFloat:1.2*rand()/(float)RAND_MAX + 1.2];
[contentArray addObject:[NSMutableDictionary
dictionaryWithObjectsAndKeys:x, @"x", y, @"y", nil]];
}
self.dataForPlot = contentArray;
Then I modified the code:
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"ECG_Data" ofType:@"csv"];
NSString *myText = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:nil ];
NSScanner *scanner = [NSScanner scannerWithString:myText];
[scanner setCharactersToBeSkipped:[NSCharacterSet characterSetWithCharactersInString:@"\n, "]];
NSMutableArray *newPoints = [NSMutableArray array];
float time, data;
while ( [scanner scanFloat:&time] && [scanner scanFloat:&data] ) {
[newPoints addObject:
[NSMutableDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:time], @"time",
[NSNumber numberWithFloat:data], @"data",
nil]];
}
self.dataForPlot = newPoints;
It seems that my code could not read the data from the CSV file.
(there are two cols of the data in ECG_Data.csv, one for time and one for data)
Can anyone give me some suggestion???
Thanks!!!!
A: http://cocoawithlove.com/2009/11/writing-parser-using-nsscanner-csv.html
Here is a good place to start for creating a CSV parser. It's complete with sample code and user comments.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2426374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Preserve the original payload when calling a web service I am struggling to figure out how to preserve a payload so that it is available after calling a web service within a sequence.
For example, in the following sequence, after the "call" mediator fires, the payload changes to what has been returned by the web service.
What I am looking to do is to enrich the original payload with the data that has been returned from the web service call.
All help is very much appreciated.
<log level="full"/>
<payloadFactory media-type="xml">
<format>
<Flight xmlns="">
<location_id>$1</location_id>
<FlightDistance/>
<Aircraft>
<AircraftAbbr/>
<LandingDistance/>
<TakeoffDistance/>
<AircraftRange/>
<AirframeHours/>
</Aircraft>
<Runways>
<Airport/>
</Runways>
</Flight>
</format>
<args>
<arg evaluator="xml" expression="get-property('OriginAirport')"/>
</args>
</payloadFactory>
<log level="full">
<property expression="get-property('OriginalPayload')" name="OriginalPayload"/>
</log>
<call blocking="true" description="">
<endpoint key="GetRunways"/>
</call>
<foreach expression="//d:Entries/d:Entry" id="feid" xmlns:d="http://ws.wso2.org/dataservice">
<sequence>
<log description="" level="full">
<property name="marker" value="marker"/>
</log>
<property expression="$body/Entry/runway_length" name="RunwayLength" scope="default" type="STRING"/>
<enrich>
<source clone="true" property="RunwayLength" type="property"/>
<target action="child" property="RunwayLength" type="property"/>
</enrich>
<log>
<property expression="get-property('RunwayLength')" name="PropertyValue"/>
</log>
</sequence>
</foreach>
A: Use the enrich mediator to store the payload into a property:
<enrich>
<source type="body"/>
<target type="property" property="REQUEST_PAYLOAD"/>
</enrich>
https://docs.wso2.com/display/ESB481/Enrich+Mediator
A: To complete @Jenananthan's answer:

*Store the original payload in a property
*Call the web service
*Restore the original payload to the body:
<enrich>
<source clone="false" type="property" property="ORIGINAL_PAYLOAD"/>
<target action="replace" type="body"/>
</enrich>
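Putting the two answers together, the sequence around the call mediator would look roughly like this (a sketch; ORIGINAL_PAYLOAD is an arbitrary property name, and the clone/action attributes follow the Enrich Mediator docs linked above):

```xml
<!-- 1. Store the current body before the call overwrites it -->
<enrich>
    <source type="body" clone="true"/>
    <target type="property" property="ORIGINAL_PAYLOAD"/>
</enrich>

<!-- 2. Call the web service; the response replaces the body -->
<call blocking="true">
    <endpoint key="GetRunways"/>
</call>

<!-- 3. Restore the original payload as the body again -->
<enrich>
    <source clone="false" type="property" property="ORIGINAL_PAYLOAD"/>
    <target action="replace" type="body"/>
</enrich>
```

After step 3 the body is the original payload again, so further mediators (such as the foreach above) can enrich it with data captured from the service response.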
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41335928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Promise keeps pending and "then" or "catch" are not triggering. ReactJS I'm catching up to learn React. If you have any tips on this code, please share, it is very important to me :).
Why this promise doesn't work?
In console show status pending.
The promise:
_server.get(`/select`, (req, res) => {
return DefaultSQL.select(table).then((result) => {
return result
}).catch((err) => {
console.log(err)
});
})
Where I call the promise:
export const getData = (url) => {
const response = Server.get(url)
return response.then((result) => { // <- Not firing
return result
}).catch((err) => { // <- Not firing too
console.log(err);
})
}
What the promise call:
class DefaultSQL {
select(table) {
const sql = `SELECT * FROM ${table}`;
return Db.conn.query(sql).then((result) =>
result
).catch((err) => {
console.log(err)
});
}
...
EDIT:
When I change the "return result" to "return res.json(result)" the promise no longer stays pending, but where I call the promise I get an error saying that "response" is not a function.
A: You should rethrow the error (or throw a new one) if you want it to be caught again.
Here's an example:
Promise.reject("throwed on demo 1")
.catch((e) => {
console.log("Catched", e)
})
.catch((e) => {
// unreached block
console.log("Can NOT recatch", e)
})
Promise.reject("throwed on demo 2")
.catch((e) => {
console.log("Catched", e)
throw e
})
.catch((e) => {
console.log("Recatched", e)
})
UPDATE: Another issue: you need to send a response in both the error and success cases. Otherwise the request will never complete and the client will stay pending forever.
_server.get(`/select`, (req, res) => {
return DefaultSQL.select(table).then((result) => {
res.send(result)
//return result
}).catch((err) => {
res.status(500).send(err)
console.log(err)
});
})
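A minimal sketch of why the client stayed pending (plain Node, no Express): a promise that is never settled never runs its then or catch callbacks, which is exactly what the browser sees when the handler returns data instead of sending a response:

```javascript
// A promise whose executor never calls resolve/reject stays pending forever --
// analogous to an Express handler that returns data instead of sending it.
const neverSettles = new Promise(() => {});

// Race it against a promise that does settle; only the settled one can win.
const settled = new Promise((resolve) => setTimeout(() => resolve("responded"), 10));

Promise.race([neverSettles, settled]).then((winner) => {
  console.log(winner); // "responded" -- the pending promise never fires then/catch
});
```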
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58887967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the purpose of setting "this.name = name;" in JavaScript? I'm not exactly sure why this is necessary. If I'm taking in a parameter of "name", why can't it be referenced as "name" without first setting it equal to "this.name"?
Forgive my ignorance, I'm new.
function Animal(name, numLegs) {
this.name = name;
this.numLegs = numLegs;
}
Animal.prototype.sayName = function() {
console.log("Hi my name is " + this.name);
};
A: this.name represents the instance variable while the name variable is a parameter that is in the scope of the function (constructor in this case).
With this assignment, the value of the local variable is assigned to the instance variable.
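A minimal sketch of this point: without the assignment, the constructor's parameter disappears when the function returns; copying it onto this keeps it available on the instance for later calls (Animal/Whiskers are just illustrative names):

```javascript
function Animal(name) {
  this.name = name; // copy the parameter onto the new instance
}

Animal.prototype.sayName = function () {
  // the `name` parameter no longer exists here -- only this.name survives
  return "Hi my name is " + this.name;
};

const cat = new Animal("Whiskers");
console.log(cat.sayName()); // "Hi my name is Whiskers"
```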
A: This is not a javascript thing, it's an OOP thing. You want to leave the properties of an object isolated from other scopes.
var Animal = function(name) {
this.name = name;
};
var name = "foo";
var dog = new Animal("bar");
// shows "foo"
console.log(name);
// shows "bar"
console.log(dog.name);
A: Consider the use of the pronoun "he." We could have written this: "John is running fast because John is trying to catch the train." We don't reuse "John" in this manner. In a similar graceful manner, in JavaScript, we use the this keyword as a shortcut, a referent; it refers to an object, that is, the subject in context, or the subject of the executing code. Consider this example:
var person = {
firstName: "Penelope",
lastName: "Barrymore",
fullName: function () {
// Notice we use "this" just as we used "he" in the example sentence earlier:
console.log(this.firstName + " " + this.lastName);
// We could have also written this:
console.log(person.firstName + " " + person.lastName);
}
};
A: Well... you are working with an object constructor and a prototype, not just a function with two parameters, as in this example:
function Animals(pet1, pet2) {
var pets = "first pet " + pet1 + " second pet " + pet2;
console.log(pets);
}
Animals("Tiger", "Lion");
so referencing your parameter as 'this.name' is what makes it visible to prototype methods like sayName(), if you know what I mean.
FOR MORE.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41745585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using variables instead of fieldnames in mysql select query I've searched a lot but I can't figure out how to do it, if it's possible...
I have this table:
CREATE TABLE bilanci (
id int AUTO_INCREMENT NOT NULL,
medicoid int NOT NULL,
`1` int NOT NULL DEFAULT 0,
`2` int NOT NULL DEFAULT 0,
`3` int NOT NULL DEFAULT 0,
`4` int NOT NULL DEFAULT 0,
`5` int NOT NULL DEFAULT 0,
`6` int NOT NULL DEFAULT 0,
`7` int NOT NULL DEFAULT 0,
`8` int NOT NULL DEFAULT 0,
`9` int NOT NULL DEFAULT 0,
`10` int NOT NULL DEFAULT 0,
`11` int NOT NULL DEFAULT 0,
`12` int NOT NULL DEFAULT 0,
conguagliodic decimal(10,2),
totbilancianno int DEFAULT 0,
totpagato decimal(12,2),
totdapagare decimal(12,2),
conguaglio decimal(10,2),
rifanno int NOT NULL,
pvimun decimal(10,4) NOT NULL DEFAULT 9.4432,
PRIMARY KEY (id)
) ENGINE = InnoDB;
The fields named with numbers correspond to months, and I need a select like:
select medicoid, (month(curdate()) -2), totdapagare from bilanci
where (month(curdate()) - 2) corresponds to the field I need to select.
Is this possible?
A: I would suggest you to normalize your database structure, you could have one table like this:
CREATE TABLE bilanci (
id int AUTO_INCREMENT NOT NULL,
medicoid int NOT NULL,
conguagliodic decimal(10,2),
totbilancianno int DEFAULT 0,
totpagato decimal(12,2),
totdapagare decimal(12,2),
conguaglio decimal(10,2),
pvimun decimal(10,4) NOT NULL DEFAULT 9.4432,
PRIMARY KEY (id)
) ENGINE = InnoDB;
and a second table bilanci_month:
create table bilanci_month (
id int auto_increment,
bilanci_id int,
rifanno int NOT NULL,
month int NOT NULL,
value int)
(bilanci_id can be defined as a foreign key referencing bilanci); then your select query would be like this:
select
b.medicoid,
coalesce(bm.value, 0),
b.totdapagare
from
bilanci b left join bilanci_month bm
on b.id = bm.bilanci_id and bm.month = month(curdate())-2
Also, be careful about month(curdate())-2: what happens if the current month is January or February? You have to implement some logic to get, for example, November or December of the previous year (and add this logic to your join).
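The wrap-around just mentioned can be sketched outside SQL as well; for instance (a hypothetical helper, with months numbered 1-12 as in MySQL's MONTH()):

```javascript
// Hypothetical helper: given a JS Date, return the {year, month} that lies
// two months earlier, wrapping into the previous year for January/February.
function twoMonthsBack(date) {
  const m = date.getMonth() + 1; // JS months are 0-based; normalize to 1-12
  const y = date.getFullYear();
  const shifted = m - 2;
  return shifted >= 1
    ? { year: y, month: shifted }
    : { year: y - 1, month: shifted + 12 }; // wrap into the previous year
}

console.log(twoMonthsBack(new Date(2016, 10, 8))); // { year: 2016, month: 9 }
console.log(twoMonthsBack(new Date(2016, 0, 15))); // { year: 2015, month: 11 }
```

The same two-branch condition would have to appear in the join clause (comparing both bm.month and the year column) for the query to be correct in January and February.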
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40463785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Querying available locations in Azure SDK for node I would like to list the locations available for a particular subscription, pretty much in the same way that it's done with az account list-locations from the command line, or like it's listed here for Python
However, I can't find a straightforward way to do that. The example here apparently creates a virtual machine, but is not too well documented.
I can't even get past the initial step of providing credentials to use my account (this could be it, but it's geared towards working with compute service; I don't know how it can be translated into simple management).
Is there any step-by-step tutorial I have missed?
The azure-mgmt package seems clear enough, but it seems also a bit outdated and it's not clear if it's current anymore.
A: Pretty sure you need this to auth: https://learn.microsoft.com/es-es/javascript/api/azure-arm-resource/subscriptionclient?view=azure-node-latest
and this call to get locations: https://learn.microsoft.com/es-es/javascript/api/azure-arm-resource/locationlistresult?view=azure-node-latest
Node SDK repo: https://github.com/Azure/azure-sdk-for-node
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53692957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Android 4.2.2, How to enable TLS 1.2 for HttpsURLConnection? I am developing an Android app for API level 16+ and from what I have seen, by default TLS v1.1 and TLS v1.2 are supported but not enabled on Android 4.1+.
I have used the SSLSocketFactory from here:
http://blog.dev-area.net/2015/08/13/android-4-1-enable-tls-1-1-and-tls-1-2/
which I have managed to get working with the following example:
public class TLSSocketFactory extends SSLSocketFactory {
private SSLSocketFactory internalSSLSocketFactory;
public TLSSocketFactory() throws KeyManagementException, NoSuchAlgorithmException {
SSLContext context = SSLContext.getInstance("TLS");
context.init(null, null, null);
internalSSLSocketFactory = context.getSocketFactory();
}
@Override
public String[] getDefaultCipherSuites() {
return internalSSLSocketFactory.getDefaultCipherSuites();
}
@Override
public String[] getSupportedCipherSuites() {
return internalSSLSocketFactory.getSupportedCipherSuites();
}
public Socket createSocket() throws IOException{
return enableTLSOnSocket(internalSSLSocketFactory.createSocket());
}
@Override
public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(s, host, port, autoClose));
}
@Override
public Socket createSocket(String host, int port) throws IOException, UnknownHostException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port));
}
@Override
public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException, UnknownHostException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port, localHost, localPort));
}
@Override
public Socket createSocket(InetAddress host, int port) throws IOException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port));
}
@Override
public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(address, port, localAddress, localPort));
}
private Socket enableTLSOnSocket(Socket socket) {
if(socket != null && (socket instanceof SSLSocket)) {
((SSLSocket)socket).setEnabledProtocols(new String[] {"TLSv1.1", "TLSv1.2"});
}
return socket;
}
}
and I am using the following to call it:
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
try{
SSLContext context = SSLContext.getDefault();
TLSSocketFactory factory = new TLSSocketFactory();
SSLSocket socket = (SSLSocket)factory.createSocket();
String[] protocols = socket.getSupportedProtocols();
// this now has the correct protocols enabled for sockets
String[] enabled = socket.getEnabledProtocols();
HttpsURLConnection.setDefaultSSLSocketFactory(new TLSSocketFactory());
URL url = new URL("https://mysslsite/");
HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
connection.setSSLSocketFactory(new TLSSocketFactory());
Log.i("", "");
}catch(Exception ex){
ex.printStackTrace();
}
}
}
I can see that for a socket connection I now have the correct enabled protocols. However, how do I enable these protocols for a URL connection, and how would I verify that these protocols have been enabled for the HttpsURLConnection?
I have attempted to set the SSL socket factory for the connection, but I'm not sure how to verify that the protocols have been enabled.
UPDATE:
I have modified my code as follows:
String myURL = "https://mysslservice";
SSLContext sslcontext = SSLContext.getInstance("TLS");
sslcontext.init(null, null, null);
SSLSocketFactory noSSLv3Factory = new NoSSLv3SocketFactory(sslcontext.getSocketFactory());
HttpsURLConnection.setDefaultSSLSocketFactory(noSSLv3Factory);
URL url = new URL(myURL);
HttpsURLConnection l_connection = (HttpsURLConnection) url.openConnection();
l_connection.connect();
I have used the following implementation of the SSL factory
public class NoSSLv3SocketFactory extends SSLSocketFactory{
private final SSLSocketFactory delegate;
public NoSSLv3SocketFactory() {
this.delegate = HttpsURLConnection.getDefaultSSLSocketFactory();
}
public NoSSLv3SocketFactory(SSLSocketFactory delegate) {
this.delegate = delegate;
}
@Override
public String[] getDefaultCipherSuites() {
return delegate.getDefaultCipherSuites();
}
@Override
public String[] getSupportedCipherSuites() {
return delegate.getSupportedCipherSuites();
}
private Socket makeSocketSafe(Socket socket) {
if (socket instanceof SSLSocket) {
socket = new NoSSLv3SSLSocket((SSLSocket) socket);
}
return socket;
}
@Override
public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
return makeSocketSafe(delegate.createSocket(s, host, port, autoClose));
}
@Override
public Socket createSocket(String host, int port) throws IOException {
return makeSocketSafe(delegate.createSocket(host, port));
}
@Override
public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
return makeSocketSafe(delegate.createSocket(host, port, localHost, localPort));
}
@Override
public Socket createSocket(InetAddress host, int port) throws IOException {
return makeSocketSafe(delegate.createSocket(host, port));
}
@Override
public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
return makeSocketSafe(delegate.createSocket(address, port, localAddress, localPort));
}
private class NoSSLv3SSLSocket extends DelegateSSLSocket {
private NoSSLv3SSLSocket(SSLSocket delegate) {
super(delegate);
}
@Override
public void setEnabledProtocols(String[] protocols) {
if (protocols != null && protocols.length == 1 && "SSLv3".equals(protocols[0])) {
List<String> enabledProtocols = new ArrayList<String>(Arrays.asList(delegate.getSupportedProtocols()));
if (enabledProtocols.size() > 1) {
enabledProtocols.remove("SSLv3");
System.out.println("Removed SSLv3 from enabled protocols");
} else {
System.out.println("SSL stuck with protocol available for " + String.valueOf(enabledProtocols));
}
protocols = enabledProtocols.toArray(new String[enabledProtocols.size()]);
}
super.setEnabledProtocols(protocols);
}
}
public class DelegateSSLSocket extends SSLSocket {
protected final SSLSocket delegate;
DelegateSSLSocket(SSLSocket delegate) {
this.delegate = delegate;
}
@Override
public String[] getSupportedCipherSuites() {
return delegate.getSupportedCipherSuites();
}
@Override
public String[] getEnabledCipherSuites() {
return delegate.getEnabledCipherSuites();
}
@Override
public void setEnabledCipherSuites(String[] suites) {
delegate.setEnabledCipherSuites(suites);
}
@Override
public String[] getSupportedProtocols() {
return delegate.getSupportedProtocols();
}
@Override
public String[] getEnabledProtocols() {
return delegate.getEnabledProtocols();
}
@Override
public void setEnabledProtocols(String[] protocols) {
delegate.setEnabledProtocols(protocols);
}
@Override
public SSLSession getSession() {
return delegate.getSession();
}
@Override
public void addHandshakeCompletedListener(HandshakeCompletedListener listener) {
delegate.addHandshakeCompletedListener(listener);
}
@Override
public void removeHandshakeCompletedListener(HandshakeCompletedListener listener) {
delegate.removeHandshakeCompletedListener(listener);
}
@Override
public void startHandshake() throws IOException {
delegate.startHandshake();
}
@Override
public void setUseClientMode(boolean mode) {
delegate.setUseClientMode(mode);
}
@Override
public boolean getUseClientMode() {
return delegate.getUseClientMode();
}
@Override
public void setNeedClientAuth(boolean need) {
delegate.setNeedClientAuth(need);
}
@Override
public void setWantClientAuth(boolean want) {
delegate.setWantClientAuth(want);
}
@Override
public boolean getNeedClientAuth() {
return delegate.getNeedClientAuth();
}
@Override
public boolean getWantClientAuth() {
return delegate.getWantClientAuth();
}
@Override
public void setEnableSessionCreation(boolean flag) {
delegate.setEnableSessionCreation(flag);
}
@Override
public boolean getEnableSessionCreation() {
return delegate.getEnableSessionCreation();
}
@Override
public void bind(SocketAddress localAddr) throws IOException {
delegate.bind(localAddr);
}
@Override
public synchronized void close() throws IOException {
delegate.close();
}
@Override
public void connect(SocketAddress remoteAddr) throws IOException {
delegate.connect(remoteAddr);
}
@Override
public void connect(SocketAddress remoteAddr, int timeout) throws IOException {
delegate.connect(remoteAddr, timeout);
}
@Override
public SocketChannel getChannel() {
return delegate.getChannel();
}
@Override
public InetAddress getInetAddress() {
return delegate.getInetAddress();
}
@Override
public InputStream getInputStream() throws IOException {
return delegate.getInputStream();
}
@Override
public boolean getKeepAlive() throws SocketException {
return delegate.getKeepAlive();
}
@Override
public InetAddress getLocalAddress() {
return delegate.getLocalAddress();
}
@Override
public int getLocalPort() {
return delegate.getLocalPort();
}
@Override
public SocketAddress getLocalSocketAddress() {
return delegate.getLocalSocketAddress();
}
@Override
public boolean getOOBInline() throws SocketException {
return delegate.getOOBInline();
}
@Override
public OutputStream getOutputStream() throws IOException {
return delegate.getOutputStream();
}
@Override
public int getPort() {
return delegate.getPort();
}
@Override
public synchronized int getReceiveBufferSize() throws SocketException {
return delegate.getReceiveBufferSize();
}
@Override
public SocketAddress getRemoteSocketAddress() {
return delegate.getRemoteSocketAddress();
}
@Override
public boolean getReuseAddress() throws SocketException {
return delegate.getReuseAddress();
}
@Override
public synchronized int getSendBufferSize() throws SocketException {
return delegate.getSendBufferSize();
}
@Override
public int getSoLinger() throws SocketException {
return delegate.getSoLinger();
}
@Override
public synchronized int getSoTimeout() throws SocketException {
return delegate.getSoTimeout();
}
@Override
public boolean getTcpNoDelay() throws SocketException {
return delegate.getTcpNoDelay();
}
@Override
public int getTrafficClass() throws SocketException {
return delegate.getTrafficClass();
}
@Override
public boolean isBound() {
return delegate.isBound();
}
@Override
public boolean isClosed() {
return delegate.isClosed();
}
@Override
public boolean isConnected() {
return delegate.isConnected();
}
@Override
public boolean isInputShutdown() {
return delegate.isInputShutdown();
}
@Override
public boolean isOutputShutdown() {
return delegate.isOutputShutdown();
}
@Override
public void sendUrgentData(int value) throws IOException {
delegate.sendUrgentData(value);
}
@Override
public void setKeepAlive(boolean keepAlive) throws SocketException {
delegate.setKeepAlive(keepAlive);
}
@Override
public void setOOBInline(boolean oobinline) throws SocketException {
delegate.setOOBInline(oobinline);
}
@Override
public void setPerformancePreferences(int connectionTime, int latency, int bandwidth) {
delegate.setPerformancePreferences(connectionTime, latency, bandwidth);
}
@Override
public synchronized void setReceiveBufferSize(int size) throws SocketException {
delegate.setReceiveBufferSize(size);
}
@Override
public void setReuseAddress(boolean reuse) throws SocketException {
delegate.setReuseAddress(reuse);
}
@Override
public synchronized void setSendBufferSize(int size) throws SocketException {
delegate.setSendBufferSize(size);
}
@Override
public void setSoLinger(boolean on, int timeout) throws SocketException {
delegate.setSoLinger(on, timeout);
}
@Override
public synchronized void setSoTimeout(int timeout) throws SocketException {
delegate.setSoTimeout(timeout);
}
@Override
public void setTcpNoDelay(boolean on) throws SocketException {
delegate.setTcpNoDelay(on);
}
@Override
public void setTrafficClass(int value) throws SocketException {
delegate.setTrafficClass(value);
}
@Override
public void shutdownInput() throws IOException {
delegate.shutdownInput();
}
@Override
public void shutdownOutput() throws IOException {
delegate.shutdownOutput();
}
@Override
public String toString() {
return delegate.toString();
}
@Override
public boolean equals(Object o) {
return delegate.equals(o);
}
}
}
However, I am getting the following exception:
javax.net.ssl.SSLHandshakeException: javax.net.ssl.SSLProtocolException: SSL handshake aborted: ssl=0xb838faa0: Failure in SSL library, usually a protocol error
error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure (external/openssl/ssl/s23_clnt.c:741 0x9da83901:0x00000000)
The SSL service has only TLSv1.2 enabled and SSLv3 has been disabled; however, the app still seems to be using SSLv3?
A: I solved this issue by following the indications provided in the article http://blog.dev-area.net/2015/08/13/android-4-1-enable-tls-1-1-and-tls-1-2/ with a few changes.
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, null, null);
SSLSocketFactory noSSLv3Factory = null;
if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) {
noSSLv3Factory = new TLSSocketFactory(sslContext.getSocketFactory());
} else {
noSSLv3Factory = sslContext.getSocketFactory();
}
connection.setSSLSocketFactory(noSSLv3Factory);
This is the code of the custom TLSSocketFactory:
public static class TLSSocketFactory extends SSLSocketFactory {
// Protocol name constants referenced below (missing from the original snippet)
private static final String TLS_v1_1 = "TLSv1.1";
private static final String TLS_v1_2 = "TLSv1.2";
private SSLSocketFactory internalSSLSocketFactory;
public TLSSocketFactory(SSLSocketFactory delegate) throws KeyManagementException, NoSuchAlgorithmException {
internalSSLSocketFactory = delegate;
}
@Override
public String[] getDefaultCipherSuites() {
return internalSSLSocketFactory.getDefaultCipherSuites();
}
@Override
public String[] getSupportedCipherSuites() {
return internalSSLSocketFactory.getSupportedCipherSuites();
}
@Override
public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(s, host, port, autoClose));
}
@Override
public Socket createSocket(String host, int port) throws IOException, UnknownHostException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port));
}
@Override
public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException, UnknownHostException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port, localHost, localPort));
}
@Override
public Socket createSocket(InetAddress host, int port) throws IOException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port));
}
@Override
public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
return enableTLSOnSocket(internalSSLSocketFactory.createSocket(address, port, localAddress, localPort));
}
/*
* Utility methods
*/
private static Socket enableTLSOnSocket(Socket socket) {
if (socket != null && (socket instanceof SSLSocket)
&& isTLSServerEnabled((SSLSocket) socket)) { // skip the fix if server doesn't provide the TLS version
((SSLSocket) socket).setEnabledProtocols(new String[]{TLS_v1_1, TLS_v1_2});
}
return socket;
}
private static boolean isTLSServerEnabled(SSLSocket sslSocket) {
// Arrays.toString prints the array contents (calling toString() on an array only prints a reference)
System.out.println("__prova__ :: " + Arrays.toString(sslSocket.getSupportedProtocols()));
for (String protocol : sslSocket.getSupportedProtocols()) {
if (protocol.equals(TLS_v1_1) || protocol.equals(TLS_v1_2)) {
return true;
}
}
return false;
}
}
You can also check the server's configuration using online services like https://www.ssllabs.com/ssltest/analyze.html to be sure that the server supports the protocols you are enabling.
A: Your code already contains the line
HttpsURLConnection.setDefaultSSLSocketFactory(new TLSSocketFactory());
which sets your TLSSocketFactory for all connections in your app that are established afterwards. Therefore the call to connection.setSSLSocketFactory(new TLSSocketFactory()); is redundant and should have no effect.
Note: There may be an issue with already open connections that have been established before you call setDefaultSSLSocketFactory and that are cached internally and reused. Therefore it is recommended to set the default socket factory before you open any network connection.
A: UPDATE: This has now been resolved; the issue was an SSL certificate issue, and I can confirm that both methods work for Android API level 16+.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37395321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Execute Lua WINAPI code without displaying the shell I created a simple Lua application using Alien for Lua. It works perfectly, except when you execute it, the Lua Shell shows as well. Is there a way to "hide" this shell, run it in background, turn it off, etc so that I simply see the message box?
Code:
require "luarocks.require"
require "alien"
local MessageBox = alien.User32.MessageBoxA
MessageBox:types{ret = "long", abi = "stdcall", "long", "string", "string", "long" }
MessageBox(0, "Hello World!", "My Window Title", 0x00000040)
Current Output:
Desired Output:
A: tl;dr
Rename your script to hello.wlua so that wlua.exe is used.
Details
While it is likely possible, if verbose, to locate and close the offending console window that Windows provided your process, it would be better if that console never appeared in the first place. If it does appear, then it is likely to flash on screen, and cause some users to be confused.
Subsystems
Windows has, since its earliest days, had the concept of a "subsystem" which each individual executable identifies with. Normal GUI applications are linked with /SUBSYSTEM:WINDOWS and get the full GUI treatment including the responsibility to create and display their own window(s) if and when needed.
Applications that expect to be run from a command line (or batch file) are linked with /SUBSYSTEM:CONSOLE, and as a result have standard file handles that are guaranteed to be open and are likely to be connected to some console window (or a pipe, or redirected to a file, but they do exist). That guarantee is strong enough that when a console program is started outside of a console (as when double-clicked from Explorer, or named in the Start|Run box) then the system automatically creates a console window for it, and binds the standard file handles to the new console.
There are other subsystems, but those two are the only important ones for normal users and developers.
lua.exe and wlua.exe
So why does this matter?
The stock lua.exe will be linked for the console, because that makes it possible to use interactively from a command prompt. However, it means that it will always be supplied with a console window even when you don't want one.
The Lua for Windows distribution (which from the pathname showing in your console's title bar it looks like you are using) includes a second copy named wlua.exe which only differs by being linked for the Windows subsystem. As a result, it only displays a window if the script explicitly creates one to display. Of course, it also means that it cannot be used interactively at the command prompt.
File types and associations
For convenience, you can associate the file type .wlua with wlua.exe, and name your GUI script with that file type. That will enable launching programs in the usual way without getting the extra consoles. Of course, when debugging them, you can always run them with lua.exe from a command prompt and take advantage of the existence of stdout and the utility of the print function.
On my PC (64-bit Win 7 Pro) I have the following associations, which look like they were created by the installation of Lua for Windows:
C:...>assoc .lua
.lua=Lua.Script
C:...>ftype lua.script
lua.script="C:\Program Files (x86)\Lua\5.1\lua.exe" "%1" %*
C:...>assoc .wlua
.wlua=wLua.Script
C:...>ftype Wlua.script
Wlua.script="C:\Program Files (x86)\Lua\5.1\wlua.exe" "%1" %*
Extra credit: PATHEXT
You could also add .lua to the PATHEXT environment variable to save typing the file type at the command prompt. I'm not configured that way presently, but have in the past done that. I found that the standard practice of naming both modules and scripts with the same file type made that less useful.
The PATHEXT environment variable lists the file types that will be searched for in the PATH when you name a program to run without specifying its file type. Documentation for this is rather hard to locate, as there does not appear to be a single MSDN page listing all the "official" environment variables and their usage. This chapter of a book about Windows NT has a nice description of the interaction of PATH and PATHEXT, and despite being subtly out of date in some respects, it is the clearest detailed explanation of how the command prompt operates that I've come across.
It clarifies that each folder in PATH is searched for each extension named in PATHEXT:
If the command name includes a file extension, the shell searches each directory for the exact file name specified by the command name. If the command name does not include a file extension, the shell adds the extensions listed in the PATHEXT environment variable, one by one, and searches the directory for that file name. Note that the shell tries all possible file extensions in a specific directory before moving on to search the next directory (if there is one).
It also documents how file types and associations interact with the command prompt. Despite its age, it is well worth the read.
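The lookup order the chapter describes can be sketched in a few lines of Python. The paths below are hypothetical, and `exists` stands in for the real filesystem check — this is an illustration of the search order, not how cmd.exe is actually implemented:

```python
def resolve_command(name, path_dirs, pathext, exists):
    """Emulate cmd.exe lookup: every PATHEXT extension is tried in the
    current PATH directory before moving on to the next directory."""
    has_ext = "." in name
    for d in path_dirs:
        if has_ext:
            # Command already names an extension: look for the exact file.
            candidate = d + "\\" + name
            if exists(candidate):
                return candidate
        else:
            # No extension: try each PATHEXT entry in this directory first.
            for ext in pathext:
                candidate = d + "\\" + name + ext
                if exists(candidate):
                    return candidate
    return None

# Hypothetical filesystem: a .lua script in the first PATH directory,
# a .bat with the same base name in the second.
files = {"C:\\tools\\script.lua", "C:\\other\\script.bat"}
hit = resolve_command("script",
                      ["C:\\tools", "C:\\other"],
                      [".com", ".exe", ".bat", ".lua"],
                      files.__contains__)
print(hit)  # C:\tools\script.lua — directory order wins over extension order
```

Note that `script.lua` in the first directory beats `script.bat` in the second, even though `.lua` comes last in PATHEXT — exactly the behavior the quoted passage describes.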
A: Windows executables explicitly list the subsystem they run on. As the windows "lua.exe" is linked for the console subsystem, windows automagically creates a console window for it. Just relink "lua.exe" for gui subsystem, and you won't get to see the output any more unless you run it from a console window. BTW: Gui programs can programmatically create the console.
An alternative is closing the created console on start.
For that, you must first use SetStdHandle to redirect STDIN, STDOUT and STDERR (use a file opened to the nul device if you don't want them at all), and then call FreeConsole to finally dismiss your unloved console window. No sweat, you have "alien" set up already...
A: If you can use the winapi module or have similar calls in Alien, you can find the handle of the console window and hide the window itself. The code would be similar to this:
require 'winapi'
local pid = winapi.get_current_pid()
local wins = winapi.find_all_windows(function(w)
return w:get_process():get_pid() == pid
and w:get_class_name() == 'ConsoleWindowClass'
end)
for _,win in pairs(wins) do win:show_async(winapi.SW_HIDE) end
You'll need to check whether this leaves the MessageBox visible or not.
A: Programmatic solution (run the same script under wlua.exe if possible)
do
local i, j = 0, 0
repeat j = j + 1 until not arg[j]
repeat i = i - 1 until not arg[i-1]
local exe = arg[i]:lower()
-- check if the script is running under lua.exe
if exe:find('lua%.exe$') and not exe:find('wlua%.exe$') then
arg[i] = exe:gsub('lua%.exe$','w%0')
-- check if wlua.exe exists
if io.open(arg[i]) then
-- run the same script under wlua.exe
os.execute('"start "" "'..table.concat(arg,'" "',i,j-1)..'""')
-- exit right now to close console window
os.exit()
end
end
end
-- Your main program is here:
require "luarocks.require"
require "alien"
local MessageBox = alien.User32.MessageBoxA
MessageBox:types{ret = "long", abi = "stdcall", "long", "string", "string", "long" }
MessageBox(0, "Hello World!", "My Window Title", 0x00000040)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22203882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: victoriametrics VictoriaMetrics is a Go-based open-source time series database and monitoring solution, typically used for processing high volumes of data and long-term storage.
Links
*
*Homepage
*Documentation
*Github
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69152592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Success callback on $http.post in AngularJS using Firefox I have a simple CORS AJAX call from within AngularJS application with success callback:
$http({method:'POST',url:"http://0.0.0.0:4567/authenticate",
params: {Lusername:scope.Lusername,Lpassword:scope.Lpassword}})
.success(function(){alert("Success")})
When used in Safari it works fine: it returns the expected JSON object and shows the alert box. However, in Firefox, although the JSON object is returned properly, the success callback is not triggered.
Any idea why?
A: Make sure you handle the OPTIONS request in the server. If it returns 404 then Firefox won't make the next request (in your case the POST mentioned above).
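As an illustration of what "handle the OPTIONS request" means on the server side, here is a minimal Python sketch — a hypothetical stand-in backend, not the asker's actual server — that answers the preflight with the CORS headers the browser expects before it will send the cross-origin POST:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CorsHandler(BaseHTTPRequestHandler):
    # Preflight: before the cross-origin POST, the browser sends an OPTIONS
    # request. If this returned 404, Firefox would never issue the POST.
    def do_OPTIONS(self):
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def start_server():
    # Bind to an ephemeral port and serve on a background thread.
    server = HTTPServer(("127.0.0.1", 0), CorsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def preflight_status(port):
    # Simulate the browser's preflight request.
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/authenticate", method="OPTIONS")
    with urllib.request.urlopen(req) as resp:
        return resp.status

server = start_server()
print(preflight_status(server.server_address[1]))  # 200 — the POST may proceed
server.shutdown()
```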
A: Try this with the latest version of AngularJS:
$http.post("http://0.0.0.0:4567/authenticate", {
Lusername: $scope.Lusername,
Lpassword: $scope.Lpassword
}).success(function(data, status, headers, config) {
alert("Success");
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17031990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to do a python range with offset Is there a way to do the following offset using the range function?
range(1,100, offset=10)
10,11,12,13...
In other words, what is the best way to do an offset with a range?
A: When you include two parameters, the first parameter of range() is the first element of the range:
offset = 10
for i in range(offset, 100 + offset):
print(i)
If you don't want the end point to move by the offset too, simply remove the + offset from the second argument.
A: You can use map to apply the offset to the results of range at runtime:
offset = 10
for i in map(lambda x: x + offset, range(100)):
print(i)
will print
10
11
...
108
109
A: If you use numpy, there is a very simple and elegant solution:
import numpy as np
a = np.arange(100)
offset = 10
stride = 1
idx = np.r_[1:2, 3:4, 6:8]*stride+offset
print(a[idx])
## [11 13 16 17]
a[idx] = 0
print(a[:20])
## [ 0 1 2 3 4 5 6 7 8 9 10 0 12 0 14 15 0 0 18 19]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31794507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to store a string to file in android java code and call it for parsing in javascript I am trying to implement a jQuery Mobile application in which I am parsing the HTML of a page to extract only the content of a div by its id (main-content-inner in the example). I am doing so in JavaScript stored in my index.html, as follows:
<script type="text/javascript">
window.onload = function() {
// var chck = document.getElementById("check");
// alert(chck.title);
$.ajax({
type: 'GET',
url: 'main.html',
dataType: 'html',
success: function(data) {
//cross platform xml object creation from w3schools
try //Internet Explorer
{
xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
xmlDoc.async = "false";
xmlDoc.loadXML(data);
} catch (e) {
try // Firefox, Mozilla, Opera, etc.
{
parser = new DOMParser();
xmlDoc = parser.parseFromString(data, "text/xml");
} catch (e) {
alert(e.message);
return;
}
}
// var chck = document.getElementById("check");
// alert(chck.title);
var abc = document.getElementById("content1");
abc.innerHTML = (xmlDoc.getElementById("main-content-inner").innerHTML);
// alert(abc.innerHTML);
}
});
</script>
As jQuery can't fetch the HTML of a remote page (Same Origin Policy), I am using Java code in MainActivity.java of my Android app, where the HTML is returned as a string (say i).
Now, I want to pass this "i" (from Java) to JavaScript (in index.html) for parsing. What methods can I use to do that? I also thought of storing the string in a file using FileWriter, but I don't know how I can reference the path of this new file in JavaScript (what to write in the URL of the JS).
A: One way to do this is with a Javascript interface, like so:
class JavaScriptInterface {
@JavascriptInterface
public String getFileContents(){
// read the file into a String and return it here.
return "the contents of your file";
}
}
Then, to set the Javascript Interface on the webview:
webView.getSettings().setJavaScriptEnabled(true);
webView.addJavascriptInterface(new JavaScriptInterface(), "android");
and in the js of the page inside the webview:
function getFileContents(){
var info = window.android.getFileContents();
return info;
}
This, however, is not asynchronous. if you want to implement this functionality in an async manner, you would instead:
*
*add a method in javascript called something like "populateFileContents(data)"
*call the window.android.getFileContents() from javascript,
*have that return nothing, only kick off (an asynctask probably) functionality that would read the file contents, then call, in the Java android code, webview.loadUrl("javascript:populateFileContents(data);"); - which would pass the data from the file back to the javascript, where you could do with it what you like.
A: Thanks #nebulae for answering.
However, I solved my problem with some digging around. I created a file using FileWriter and stored it in a folder on the internal SD card of the Android device (say folder name ABC, file name XYZ).
I am then calling that file from my javascript using the absolute address of that file. The absolute address for the file ABC/XYZ is written in my JS as:
url: 'file:///storage/sdcard0/ABC/XYZ.txt',
This method does the trick.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20300999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Android & Kotlin Coroutines: Is it possible to run out of threads? How to find out if I am running out of threads in Android/Kotlin?
I am building an app where I need to load a lot of data from a remote API. I added logs to my code to check the thread name, and I see at least 5 workers running in parallel. The app has a swipe-to-refresh feature, and if I swipe too much, after a certain number of calls I lose data somehow (although I do not get an error response from the server). I observe that the call I am interested in starts on one worker, then that worker gets occupied by another process, and the method never completes. I am a bit puzzled. Please suggest how to resolve these multi-threading issues. Changing Dispatchers.IO to Dispatchers.Default does not make any big difference in behavior.
I can put all network calls one after another (in a sequential manner) - then I never lose any data even if I swipe to refresh 100 times. But then all calls are made on the same worker thread and I do not take advantage of parallelism. :-/
A: TL;DR: Is it possible to run out of threads when using coroutines? Well, the answer is no (deadlocks are another issue). But, is it possible to use coroutines in a way which means that your concurrency is bound by your number of threads? Yes.
I think the first thing you must understand is the difference between a blocking and non-blocking/suspending/async function.
A real suspending/non-blocking/async function which has some long running functionality, but properly yields control of execution until that long running task is complete is how you really leverage the concurrency that you get with coroutines. Let me demonstrate.
Multiple coroutines with an internal long running suspending function on 1 thread
val singleThread = Executors.newFixedThreadPool(1).asCoroutineDispatcher()
fun main() = runBlocking {
val start = System.currentTimeMillis()
val jobs = List(10) {
launch (singleThread){
delay(1000)
print(".")
}
}
jobs.forEach { it.join() }
val end = System.currentTimeMillis()
println()
println(end-start)
}
Here we have 10 coroutines that have been launched in quick succession on 1 thread. They all use the suspending function delay to simulate a long running task that takes 1000 milliseconds. But... the whole thing finishes in 1018 milliseconds. This will be a bit strange for someone familiar with pure thread based concurrency. Explanation to come. But just to make it absolutely clear, here is the same code, but using Thread.sleep instead of delay.
Multiple coroutines on 1 thread with internal long running blocking function
fun main() = runBlocking {
val start = System.currentTimeMillis()
val jobs = List(10) {
launch (singleThread){
Thread.sleep(1000)
print(".")
}
}
jobs.forEach { it.join() }
val end = System.currentTimeMillis()
println()
println(end-start)
}
This same bit of code, but with a blocking Thread.sleep took 10027 milliseconds. Each coroutine blocked the thread it was on, and so, our 10 coroutines actually executed in series. Control was not given back to the dispatcher while the long running function was being executed.
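The same contrast can be sketched in Python's asyncio — offered only as an analogy for readers who don't write Kotlin, not as the author's code. A suspending sleep lets ten tasks overlap on one event-loop thread, while a blocking sleep serializes them:

```python
import asyncio
import time

async def suspending_task():
    await asyncio.sleep(0.05)   # yields control back to the event loop

async def blocking_task():
    time.sleep(0.05)            # holds the (single) thread hostage

async def run_many(task_factory, n=10):
    # Launch n tasks concurrently and measure wall-clock time.
    start = time.perf_counter()
    await asyncio.gather(*(task_factory() for _ in range(n)))
    return time.perf_counter() - start

suspending = asyncio.run(run_many(suspending_task))
blocking = asyncio.run(run_many(blocking_task))
print(f"suspending: {suspending:.2f}s, blocking: {blocking:.2f}s")
# suspending finishes in roughly one sleep (~0.05s);
# blocking takes roughly ten sleeps (~0.5s), just like the Thread.sleep run
```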
You can read a much more detailed explanation of the difference between non-blocking suspension and blocking calls from Roman Elizarov here
In your case, I suspect that you are doing your retrieval of data using a blocking IO library. That means that each of those calls is blocking the thread it is on, and not yielding control to the dispatcher while the IO task is completing.
My recommendation would be:
*
*Carry on using Dispatchers.IO
*Start using a non blocking library to retrieve your data. I recommend ktor http client with the CIO engine.
But what about your data loss when you do things concurrently?
There is not enough information here to be sure, but, I think that you have not built your logic in a way that accounts for concurrency. In a truly parallel execution, swipe number 3 might complete before swipe 2 or swipe 1 completes. If your updates are not idempotent or you are delivering some partial set of data with each update request, then you could be processing update 3 before the others and ignoring update 1 and 2 when they do eventually arrive.
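One common guard against that kind of out-of-order processing is to tag each refresh with a monotonically increasing sequence number and drop any response that arrives carrying a stale one. Here is a language-neutral sketch in Python — the class and field names are invented for illustration, not taken from the question's code:

```python
import threading

class RefreshState:
    """Toy model of the swipe-to-refresh race: responses may arrive out of
    order, so guard each write with a monotonically increasing request id."""
    def __init__(self):
        self.data = None
        self.latest_seq = -1
        self.lock = threading.Lock()  # writes may come from several workers

    def apply(self, seq, payload):
        with self.lock:
            if seq <= self.latest_seq:   # stale response: drop it
                return False
            self.latest_seq = seq
            self.data = payload
            return True

board = RefreshState()
# Swipe 3's response lands first, then swipe 1's late response arrives.
board.apply(3, "data from swipe 3")
accepted = board.apply(1, "data from swipe 1")
print(board.data, accepted)  # data from swipe 3 False
```

The late response from swipe 1 is rejected instead of silently overwriting newer data — which is one plausible shape of the "lost data" the question describes.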
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55667201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to set a default value for a foreign key attribute for a table that does not exist yet in Django? I have a pre-existing Django model class to which I need to add a one-to-one relationship with another model class that is not yet created in the DB.
I'm planning to fill the new model with some default values by calling a function in urls.py, but I'm not sure about the IDs those values will be given in the DB, which I need in order to assign one as the default value of the foreign key.
Is it a good idea if I hardcode the value 1 as default assuming that the record A for the new model will have a PK value of 1?
Is there a better approach to this?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54379021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Streaming audio through Bluetooth I have a Bluetooth connection that works ok. When connected, I call this write function, that sends a stream to another device. It goes like this:
public void Write(byte[] bytes)
{
System.Threading.Tasks.Task.Run(() =>
{
int offset = 0;
int count = 10;
int len = bytes.Length;
while (offset < len)
{
try
{
mmOutStream.Write(bytes, offset, Math.Min(count, len - offset));
offset += count;
}
catch (IOException ex)
{
System.Diagnostics.Debug.WriteLine("Error occurred when sending data", ex);
}
}
}).ConfigureAwait(false);
}
This should stream the byte array in chunks of 10 bytes. Then on another device, I call this read method:
public void Read()
{
System.Threading.Tasks.Task.Run(() =>
{
MediaPlayer player = new MediaPlayer();
try
{
byte[] myReadBuffer = new byte[1024];
int numberOfBytesRead = 0;
do
{
numberOfBytesRead = mmInStream.Read(myReadBuffer, 0, myReadBuffer.Length);
player.Prepared += (sender, e) =>
{
player.Start();
};
player.SetDataSource(new StreamMediaDataSource(new System.IO.MemoryStream(myReadBuffer)));
player.Prepare();
}
while (mmInStream.IsDataAvailable());
}
catch (IOException ex)
{
System.Diagnostics.Debug.WriteLine("Input stream was disconnected", ex);
}
}).ConfigureAwait(false);
}
The StreamMediaDataSource works fine if I put the entire array in, but this returns Unable to resolve superclass of Lmd5c539bdc79f76d0c80e6cd44011eba829/StreamMediaDataSource; (388)
The class looks like this:
public class StreamMediaDataSource : MediaDataSource
{
System.IO.Stream data;
public StreamMediaDataSource(System.IO.Stream Data)
{
data = Data;
}
public override long Size
{
get
{
return data.Length;
}
}
public override int ReadAt(long position, byte[] buffer, int offset, int size)
{
data.Seek(position, System.IO.SeekOrigin.Begin);
return data.Read(buffer, offset, size);
}
public override void Close()
{
if (data != null)
{
data.Dispose();
data = null;
}
}
protected override void Dispose(bool disposing)
{
base.Dispose(disposing);
if (data != null)
{
data.Dispose();
data = null;
}
}
}
So how would I play the audio this way?
But, by using this answer, I get this error:
12-01 20:54:38.887 D/AbsListView(12444): Get MotionRecognitionManager
12-01 20:54:38.947 W/ResourceType(12444): Failure getting entry for 0x010802c9 (t=7 e=713) in package 0 (error -75)
12-01 20:54:45.073 V/BluetoothSocket.cpp(12444): initSocketNative
12-01 20:54:45.073 V/BluetoothSocket.cpp(12444): ...fd 53 created (RFCOMM, lm = 26)
12-01 20:54:45.073 V/BluetoothSocket.cpp(12444): initSocketFromFdNative
12-01 20:54:45.113 D/BluetoothUtils(12444): isSocketAllowedBySecurityPolicy start : device null
12-01 20:54:46.364 V/BluetoothSocket.cpp(12444): connectNative
12-01 20:54:46.424 V/BluetoothSocket.cpp(12444): ...connect(53, RFCOMM) = 0 (errno 115)
12-01 20:54:46.484 I/Choreographer(12444): Skipped 88 frames! The application may be doing too much work on its main thread.
12-01 20:54:51.669 V/MediaPlayer(12444): constructor
12-01 20:54:51.679 V/MediaPlayer(12444): setListener
12-01 20:54:51.719 W/dalvikvm(12444): Unable to resolve superclass of Lmd5c539bdc79f76d0c80e6cd44011eba829/StreamMediaDataSource; (388)
12-01 20:54:51.719 W/dalvikvm(12444): Link of class 'Lmd5c539bdc79f76d0c80e6cd44011eba829/StreamMediaDataSource;' failed
12-01 20:54:55.693 V/MediaPlayer(12444): constructor
12-01 20:54:55.693 V/MediaPlayer(12444): setListener
12-01 20:54:55.703 W/dalvikvm(12444): Unable to resolve superclass of Lmd5c539bdc79f76d0c80e6cd44011eba829/StreamMediaDataSource; (388)
12-01 20:54:55.703 W/dalvikvm(12444): Link of class 'Lmd5c539bdc79f76d0c80e6cd44011eba829/StreamMediaDataSource;' failed
Thread finished: <Thread Pool> #5
The thread 0x5 has exited with code 0 (0x0).
A: Well, my main point is just passing the stream into MediaSource like below:
public void Read()
{
System.Threading.Tasks.Task.Run(() =>
{
MediaPlayer player = new MediaPlayer();
try
{
player.Prepared += (sender, e) =>
{
player.Start();
};
player.SetDataSource(new StreamMediaDataSource(mmInStream));
player.Prepare();
}
catch (IOException ex)
{
System.Diagnostics.Debug.WriteLine("Input stream was disconnected", ex);
}
}).ConfigureAwait(false);
}
And in your MediaSource implementation, remove the seeking operation, as it is a network stream. If you want seeking, you would have to copy it to a MemoryStream first;
public class StreamMediaDataSource : MediaDataSource
{
...
public override int ReadAt(long position, byte[] buffer, int offset, int size)
{
// data.Seek(position, System.IO.SeekOrigin.Begin); remove seeking
return data.Read(buffer, offset, size);
}
... // rest of your code remains same.
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59125914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to convert a list into a csv file? I'm trying to write csv file from a list:
list:
newest = ['x11;y11;z11', 'x12;y12;z12', 'x13;y13;z13', 'x14;y14;z14', 'x15;y15;z15', 'x16;y16;z16', 'x17;y17;z17', 'x18;y18;z18', 'x19;y19;z19', 'x20;y20;z20']
My actual code:
with open(r'listtocsv.csv', 'w', newline='\n') as myfile:
wr = csv.writer(myfile, quoting=csv.QUOTE_ALL, delimiter=';')
wr.writerow(newest)
My actual result: everything ends up quoted on a single row, e.g. "x11;y11;z11";"x12;y12;z12";...
Result wanted: one row per list element, with x, y and z split into separate columns (x11, y11, z11 on the first row, and so on).
A: import csv
newest = ['x11;y11;z11', 'x12;y12;z12', 'x13;y13;z13', 'x14;y14;z14', 'x15;y15;z15', 'x16;y16;z16', 'x17;y17;z17', 'x18;y18;z18', 'x19;y19;z19', 'x20;y20;z20']
new = []
for i in newest:
new.append(i.split(";"))
with open("file.csv", "w", newline="") as f:
writer = csv.writer(f)
writer.writerows(new)
output (file.csv):
x11,y11,z11
x12,y12,z12
... (one row per element, through x20,y20,z20)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70816691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: UIImagePickerController in iOS7
How can I make UIImagePickerController in iOS 7 look like the screen shown in the attached screenshot, without using an overlay controller?
This is the code I am using for the picker controller.
UIImagePickerController *eImagePickerController = [[UIImagePickerController alloc] init];
eImagePickerController.delegate=self;
eImagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
eImagePickerController.cameraDevice = UIImagePickerControllerSourceTypeCamera;
eImagePickerController.cameraCaptureMode = UIImagePickerControllerCameraCaptureModePhoto;
eImagePickerController.showsCameraControls = YES;
eImagePickerController.navigationBarHidden = NO;
eImagePickerController.cameraDevice=UIImagePickerControllerCameraDeviceRear;
eImagePickerController.wantsFullScreenLayout = NO;
eImagePickerController.cameraViewTransform = CGAffineTransformScale(eImagePickerController.cameraViewTransform, CAMERA_TRANSFORM_X, CAMERA_TRANSFORM_Y);
[self presentViewController:eImagePickerController animated:YES completion:nil];
The issue is shown in the attached screenshot.
A: The following code will present the UIImagePickerController in a way that resembles the screenshot given.
UIImagePickerController *eImagePickerController = [[UIImagePickerController alloc] init];
eImagePickerController.delegate = self;
eImagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
eImagePickerController.cameraCaptureMode = UIImagePickerControllerCameraCaptureModePhoto;
eImagePickerController.cameraDevice= UIImagePickerControllerCameraDeviceRear;
eImagePickerController.showsCameraControls = YES;
eImagePickerController.navigationBarHidden = NO;
[self presentViewController:eImagePickerController animated:YES completion:nil];
Just make sure that you hide the status bar or it will be displayed in the UIImagePickerController as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19833347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: EF Count() > 0 but First() throws exception I have faced a strange problem. When a user comes to any page of my web app,
I check whether the user has permission to access it, and provide a trial period if it's their first visit.
Here is my piece of code:
List<string> temp_workers_id = new List<string>();
...
if (temp_workers_id.Count > 6)
{
System.Data.SqlTypes.SqlDateTime sqlDate = new System.Data.SqlTypes.SqlDateTime(DateTime.Now.Date);
var rusers = dbctx.tblMappings.Where(tm => temp_workers_id.Any(c => c == tm.ModelID));
var permissions = dbctx.UserPermissions
.Where(p => rusers
.Any(ap => ap.UserID == p.UserID)
&& p.DateStart != null
&& p.DateEnd != null
&& p.DateStart <= sqlDate.Value
&& p.DateEnd >= sqlDate.Value);
if (permissions.Count() < 1)
{
permissions = dbctx.UserPermissions
.Where(p => rusers
.Any(ap => ap.UserID == p.UserID)
&& p.DateStart == null
&& p.DateEnd == null);
var used = dbctx.UserPermissions
.Where(p => rusers
.Any(ap => ap.UserID == p.UserID)
&& p.DateStart != null
&& p.DateEnd != null);
if (permissions.Count() > 0 && used.Count() < 1)
{
var p = permissions.First();
using (Models.TTTDbContext tdbctx = new Models.TTTDbContext())
{
var tp = tdbctx.UserPermissions.SingleOrDefault(tup => tup.UserID == p.UserID);
tp.DateStart = DateTime.Now.Date;
tp.DateEnd = DateTime.Now.Date.AddDays(60);
tdbctx.SaveChanges();
}
Here the First() method throws an exception:
Sequence contains no elements
How could that even be?
EDIT:
I don't think the user opens two browsers and navigates here at the same time, but could this be a concurrency issue?
A: You claim you only found this in the server logs and didn't encounter it during debugging. That means that between these lines:
if (permissions.Count() > 0)
{
var p = permissions.First();
Some other process or thread changed your database, so that the query didn't match any documents anymore.
This is caused by permissions holding a lazily evaluated resource, meaning that the query is only executed when you iterate it (which Count() and First() do).
So in the Count(), the query is executed:
SELECT COUNT(*) ... WHERE ...
Which returns, at that moment, one row. Then the data is modified externally, causing the next query (at First()):
SELECT n1, n2, ... WHERE ...
To return zero rows, causing First() to throw.
Now for how to solve that, is up to you, and depends entirely on how you want to model this scenario. It means the second query was actually correct: at that moment, there were no more rows that fulfilled the query criteria. You could materialize the query once:
permissions = query.Where(...).ToList()
But that would mean your logic operates on stale data. The same would happen if you'd use FirstOrDefault():
var permissionToApply = permissions.FirstOrDefault();
if (permissionToApply != null)
{
// rest of your logic
}
So it's basically a lose-lose scenario. There's always the chance that you're operating on stale data, which means that the next code:
tdbctx.UserPermissions.SingleOrDefault(tup => tup.UserID == p.UserID);
Would throw as well. So every time you query the database, you'll have to write the code in such a way that it can handle the records not being present anymore.
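The check-then-act race described above is easy to reproduce in miniature. This Python sketch — a toy stand-in for a deferred IQueryable, not EF itself — re-runs the query on every enumeration, so a concurrent update between count() and first() produces exactly the "Sequence contains no elements" failure:

```python
class LazyQuery:
    """Toy lazy query: each enumeration re-runs the predicate against the
    current table, just like a deferred IQueryable hits the database again."""
    def __init__(self, table, predicate):
        self.table = table
        self.predicate = predicate

    def __iter__(self):
        return (row for row in self.table if self.predicate(row))

    def count(self):
        return sum(1 for _ in self)       # first round-trip

    def first(self):
        for row in self:                  # second, independent round-trip
            return row
        raise ValueError("Sequence contains no elements")

table = [{"UserID": 7, "DateStart": None}]
q = LazyQuery(table, lambda r: r["DateStart"] is None)

assert q.count() == 1                     # one row matched at Count() time
table[0]["DateStart"] = "2016-11-11"      # another process updates the row
try:
    q.first()                             # re-evaluated: no matches -> throws
except ValueError as e:
    print(e)                              # Sequence contains no elements
```

Materializing once with the equivalent of ToList() would make count() and first() agree — at the cost of acting on stale data, as the answer notes.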
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40545887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why is my RecyclerView empty even though I'm getting the data from Firestore? I'm following this short YT video (https://www.youtube.com/watch?v=Ly0xwWlUpVM). I followed his code and it works.
But, I'm trying to use a slightly different approach that I saw somewhere else to load the collection from Firestore.
Here's the function that works:
private fun EventChangeListener() {
db = FirebaseFirestore.getInstance()
db.collection("tournaments")
.addSnapshotListener(object: EventListener<QuerySnapshot>{
override fun onEvent(value: QuerySnapshot?, error: FirebaseFirestoreException?) {
if (error != null) { return }
Log.i ("BEFORE", tournamentsList.size.toString())
for (dc: DocumentChange in value?.documentChanges!!) {
if (dc.type == DocumentChange.Type.ADDED) {
tournamentsList.add(dc.document.toObject(Tournament::class.java))
}
}
Log.i ("AFTER ", tournamentsList.size.toString())
tournamentAdapter.notifyDataSetChanged()
}
})
}
I'm trying to use this code:
private fun EventChangeListener() {
db = FirebaseFirestore.getInstance()
db.collection("tournaments").addSnapshotListener { snapshot, error ->
if (error != null) { return@addSnapshotListener }
Log.i ("BEFORE", tournamentsList.size.toString())
tournamentsList = snapshot!!.toObjects(Tournament::class.java) as ArrayList<Tournament>
Log.i ("AFTER ", tournamentsList.size.toString())
tournamentAdapter.notifyDataSetChanged()
}
}
I thought these were the same thing, with the second able to read the entire collection in one line, rather than having to loop through the collection one document at a time.
When I add debug statements to the version that uses toObjects, I can confirm that tournamentsList.size is 0 before the call and it's 12 after the call. Yet the activity remains blank.
What am I doing wrong here?
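One difference between the two snippets that may be relevant: the loop version mutates the list the adapter already holds, while the toObjects version rebinds tournamentsList to a brand-new list, leaving the adapter pointing at the old, empty one. The aliasing rule is the same in Kotlin as in this Python sketch (an analogy, not the original code):

```python
tournaments = []              # the list created in onCreate
adapter_view = tournaments    # the adapter captures THIS reference

# Loop version: appends into the shared list -> the adapter sees the items.
for doc in ["t1", "t2"]:
    tournaments.append(doc)
print(len(adapter_view))      # 2

# toObjects-style version: builds a brand-new list and rebinds the local
# name -> the adapter's reference still points at the old two-item list.
tournaments = ["t%d" % i for i in range(12)]
print(len(tournaments), len(adapter_view))  # 12 2
```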
EDIT: The rest of my code (from the YT video, but adapted for my data class and firestore database):
// Tournament.kt
enum class Gender { Female, Male }
enum class Venue { Franchises, Évaux }
enum class Category { Womens, Mens, Mixed }
enum class Status { RegistrationOpen, RegistrationClosed, TournamentStarted, TournamentEnded, TournamentCanceled }
data class Team (
var dateAdded: Date = Date(),
var ranking: Int = 0,
var players: ArrayList<String> = ArrayList()
)
data class Player (
var id: String? = null,
var userID: String = "",
var dateAdded: Date = Date(),
var gender: Gender = Gender.Female,
var ranking: Int? = 0,
var partner: String? = ""
)
data class Tournament (
var season: String? = "2021",
var date: Date? = Date(),
var title: String? = "Tournoi de ",
var venue: Venue? = Venue.Franchises,
var category: Category? = Category.Mixed,
var status: Status? = Status.RegistrationOpen,
var maxNumberOfTeams: Int = 8,
var includeInRankings: Boolean = true,
var players: ArrayList<Player> = ArrayList(),
var teams: ArrayList<Team> = ArrayList(),
)
// TournamentListAdapter.kt
class TournamentListAdapter (private var tournamentList: ArrayList<Tournament>) : RecyclerView.Adapter<TournamentListAdapter.TournamentViewHolder>() {
class TournamentViewHolder (view: View) : RecyclerView.ViewHolder (view) {
val titleTV: TextView = view.findViewById(R.id.tournamenTitle)
val dateTV: TextView = view.findViewById(R.id.tournamenDate)
val venueTV: TextView = view.findViewById(R.id.tournamenVenue)
}
override fun getItemCount() : Int {
return tournamentList.size
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): TournamentViewHolder {
val itemView = LayoutInflater.from(parent.context).inflate(R.layout.list_item, parent, false)
return TournamentViewHolder(itemView)
}
override fun onBindViewHolder(holder: TournamentViewHolder, position: Int) {
val tournament = tournamentList[position]
holder.titleTV.text = tournament.title
holder.venueTV.text = tournament.venue.toString()
holder.dateTV.text = tournament.date.toString()
}
}
// MainActivity.kt
class MainActivity : AppCompatActivity() {
private lateinit var recyclerView: RecyclerView
private lateinit var tournamentsList: ArrayList<Tournament>
private lateinit var tournamentAdapter: TournamentListAdapter
private lateinit var db: FirebaseFirestore
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
tournamentsList = arrayListOf()
tournamentAdapter = TournamentListAdapter(tournamentsList)
recyclerView = findViewById(R.id.recyclerView)
recyclerView.layoutManager = LinearLayoutManager(this)
recyclerView.setHasFixedSize(true)
recyclerView.adapter = tournamentAdapter
EventChangeListener()
}
private fun EventChangeListener() {} // as defined above, option 1 and 2
// activity_main.xml
// not provided because it works with Version 1 but not 2.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68643483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Need to retrieve user coordinates in swift So I need to store the user's location coordinates in a database. So far I have created a ViewController.swift that has the database, but I have MapKit attached to another view controller where it prints out the coordinates. I was wondering how to get those coordinates and store them in the database created in my first view controller.
Prints coordinates
func locationManager(_ manager: CLLocationManager, didUpdateLocations
locations: [CLLocation]) {
if let location = locations.first{
print(location.coordinate)
}
Need to input here
let building = buildingField.text?.trimmingCharacters(in:
.whitespacesAndNewlines)
if(building?.isEmpty)!{
print("Building is Empty")
}
if sqlite3_bind_text(stmt, 5, building, -1, nil) != SQLITE_OK{
print("Error binding building")
}
A: You can always access variables of another ViewController by creating an instance of that class in your current VC. In this case, you could create an instance of the VC in which the SQLite DB code exists in the MapViewController, and then assign the coordinates to a variable in the first VC. If you need to perform a task like writing to the Database with the coordinate being passed, then you could use the NSNotificationCenter class, which allows you to communicate between your View Controllers. The links below should give you a better idea as to what I'm talking about.
Ref:
StackOverflow post on passing values
Medium Tutorial on NSNotificationCenter
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50283265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Open partial view in pop-up window I am trying to make a pop-up window open to display additional information to my users.
First of all, I'm kinda confused: what's the difference between Modal windows and pop-up windows?
Next, here's what I have done so far:
@Html.ActionLink(Model[i].mMasterItem.CARD_NAME, "SeeCardDetails", "Item", new { @_itemID = Model[i].mMasterItem.ITEM_LISTING_IDE }, new {@class = "modal"})
The modal class is merely a tag used in this action link to identify the link the call comes from.
Next, the controller partial view action:
public ActionResult SeeCardDetails(int? _itemID)
{
if (_itemID == null)
{
return RedirectToAction("Index", "Home");
}
if (_itemID == 0)
{
return RedirectToAction("Index", "Home");
}
CardDisplay cardToShow = mCardManager.GetCardDisplayByID((int)_itemID);
return PartialView(cardToShow);
}
And the partial view:
@model FinePlaySet.Utilities.CardDisplay
<p>This page shows!</p>
Nothing fancy, I'm just using this right now to build the frame.
And the jQuery/javascript/ajax/whatever try I made:
$('#dialog-modal').dialog({
autoOpen: false,
width: 400,
resizable: false,
modal:true
});
$('.modal').click(function() {
$('#dialog-modal').load(this.href, function() {
$(this).dialog('open');
});
return false;
});
And, lastly, I think I had to include a div whose job is to actually open the pop-up / modal:
<div id="dialog-modal" title="See Card Detail"></div>
This div is located in my Layout page, so it's always there. Now I'm confused between all the tries I have made and all the stuff I have read and I need help: the basic need is only that when a user clicks a link, a CardDisplay item is loaded and then showed in a pop-up window. Can anyone help me out? Thanks!
A: A modal window is loaded via javascript in the same page. If you want to open a new browser window or tab (a popup window), you should add the attribute target="_blank" to your link (no javascript needed).
<a href="popoup_url" target="_blank">Open popup</a>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19908235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Functional testing coverage tool on Apache and JBoss I'm looking for a tool which will provide me with the code coverage of my functional tests (not the unit-testing code coverage). To elaborate, assume the QA team executes their test suites using Selenium. At the end of the tests, I would like to know the amount of code (target code, not the test code base) that got invoked/tested.
I found a similar post for .NET, but in my case the web server is Apache and the application server is JBoss:
Coverage analysis for Functional Tests
Also, we have never done this type of analysis before. Is it worth the effort? Has anyone tried it?
A: I used to do code coverage testing on assembly code and Java code. It is definitely worth it. You will find as you get the coverage close to 100% that it gets more and more difficult to construct tests for the remaining code.
You may even find code that you can prove can never be executed. You will find code on the fringes that has never been tested and you will be forced to run multi user tests to force race conditions to occur, assuming that the code had taken these into account.
On the assembly code, I had a 3000 line assembly program that took several months to test, but ran for 9 years without any bugs. Coverage testing proved its worth in that case as this code was deep inside a language interpreter.
As far as Java goes I used Clover: http://www.atlassian.com/software/clover/overview
This post: Open source code coverage libraries for JDK7? recommends Jacoco, but I've never tried it.
A: Thanks for the pointers @Peter Wooster. I did a lot of digging into the Clover documentation, but unfortunately there is no clear indication that functional/integration testing is supported by Clover, let alone good documentation for it.
Luckily I got to a link within the Clover documentation itself which talks about this and looks promising (thanks to a Google search). I was using Ant, so I didn't even search in the Maven2 area, but it talks about Ant as well :)
https://confluence.atlassian.com/display/CLOVER/Using+Clover+in+various+environment+configurations
I will be trying this , will update more on this soon !
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14410116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to create N number of KafkaTemplates dynamically at run time - Spring Boot I have a Spring Boot application that needs to connect to N Kafka clusters. Based on some condition, the KafkaTemplate needs to be switched before sending a message.
I have seen solutions that create separate KafkaTemplate beans, but in my use case the number of clusters will change at deployment time.
ex:
@Bean(name = "cluster1")
public KafkaTemplate<String, String> kafkaTemplatesample1() {
return new KafkaTemplate<>(devProducerFactory1());
}
@Bean(name = "cluster2")
public KafkaTemplate<String, String> kafkaTemplatesample2() {
return new KafkaTemplate<>(devProducerFactory2());
}
Is there any other solution for this? If you can share sample code, it's much appreciated.
A: Let's assume that each cluster can be described with the following attributes:
@Getter
@Setter
public class KafkaCluster {
private String beanName;
private List<String> bootstrapServers;
}
For example, two clusters are defined in the application.properties:
kafka.clusters[0].bean-name=cluster1
kafka.clusters[0].bootstrap-servers=CLUSTER_1_URL
kafka.clusters[1].bean-name=cluster2
kafka.clusters[1].bootstrap-servers=CLUSTER_2_URL
Those properties are needed before beans are instantiated, to register KafkaTemplate beans' definitions, which makes @ConfigurationProperties unsuitable for this case. Instead, Binder API is used to bind them programmatically.
KafkaTemplate beans' definitions can be registered in the implementation of BeanDefinitionRegistryPostProcessor interface.
public class KafkaTemplateDefinitionRegistrar implements BeanDefinitionRegistryPostProcessor {
private final List<KafkaCluster> clusters;
public KafkaTemplateDefinitionRegistrar(Environment environment) {
clusters = Binder.get(environment)
.bind("kafka.clusters", Bindable.listOf(KafkaCluster.class))
.orElseThrow(IllegalStateException::new);
}
@Override
public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
clusters.forEach(cluster -> {
GenericBeanDefinition beanDefinition = new GenericBeanDefinition();
beanDefinition.setBeanClass(KafkaTemplate.class);
beanDefinition.setInstanceSupplier(() -> kafkaTemplate(cluster));
registry.registerBeanDefinition(cluster.getBeanName(), beanDefinition);
});
}
@Override
public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
}
public ProducerFactory<String, String> producerFactory(KafkaCluster kafkaCluster) {
Map<String, Object> configProps = new HashMap<>();
configProps.put(
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaCluster.getBootstrapServers());
configProps.put(
ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
configProps.put(
ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
public KafkaTemplate<String, String> kafkaTemplate(KafkaCluster kafkaCluster) {
return new KafkaTemplate<>(producerFactory(kafkaCluster));
}
}
Configuration class for the KafkaTemplateDefinitionRegistrar bean:
@Configuration
public class KafkaTemplateDefinitionRegistrarConfiguration {
@Bean
public static KafkaTemplateDefinitionRegistrar beanDefinitionRegistrar(Environment environment) {
return new KafkaTemplateDefinitionRegistrar(environment);
}
}
Additionally, exclude KafkaAutoConfiguration in the main class to prevent creating the default KafkaTemplate bean. This is probably not the best way because all the other KafkaAutoConfiguration beans are not created in that case.
@SpringBootApplication(exclude={KafkaAutoConfiguration.class})
Finally, below is a simple test that proves the existence of two KafkaTemplate beans.
@SpringBootTest
class SpringBootApplicationTest {
@Autowired
List<KafkaTemplate<String,String>> kafkaTemplates;
@Test
void kafkaTemplatesSizeTest() {
Assertions.assertEquals(kafkaTemplates.size(), 2);
}
}
For reference: Create N number of beans with BeanDefinitionRegistryPostProcessor, Spring Boot Dynamic Bean Creation From Properties File
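The same idea can be sketched outside Spring: build one producer per configured cluster up front, key it by the cluster name, and select it at send time. A minimal Python illustration of that pattern (the FakeProducer class is a hypothetical stand-in for a real Kafka client, and the cluster names/URLs are invented):

```python
# Sketch of "N clients built from config, selected by name at runtime".
# FakeProducer is a hypothetical stand-in for a real Kafka producer client.

class FakeProducer:
    def __init__(self, bootstrap_servers):
        self.bootstrap_servers = bootstrap_servers
        self.sent = []  # record sends so the behavior is observable

    def send(self, topic, value):
        self.sent.append((topic, value))

def build_registry(cluster_configs):
    """Create one producer per configured cluster, keyed by its name."""
    return {
        cfg["name"]: FakeProducer(cfg["bootstrap_servers"])
        for cfg in cluster_configs
    }

# Mirrors the kafka.clusters[...] entries from application.properties.
configs = [
    {"name": "cluster1", "bootstrap_servers": "CLUSTER_1_URL"},
    {"name": "cluster2", "bootstrap_servers": "CLUSTER_2_URL"},
]
registry = build_registry(configs)

# Pick the producer based on some runtime condition, then send.
registry["cluster2"].send("events", "hello")
```

The Spring answer above does the same thing, except the "registry" is the application context itself and the lookup key is the bean name.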
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74572441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I change enum based system to object oriented structure? I have an enum-based and complicated structure that I want to replace with an object-oriented structure. You should know that there are a lot of states. I searched the Internet and found solutions like these:
http://blogs.microsoft.co.il/gilf/2009/11/22/applying-strategy-pattern-instead-of-using-switch-statements/,
Ways to eliminate switch in code .
When I apply these solutions, there will be a lot of classes. What do you think about it? Should I apply them like that?
A: Yes, definitely. You should go for the Strategy solution.
And in my experience, there is almost never a case of too much classes, as you put it. On the contrary, the more modular your code is, the easier it is to test/maintain/deploy it.
You'll run into the opposite problem a lot: a class you thought was small enough, with no reason to change, turns out after a change in the requirements or a refactoring to need to be made more modular.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49173153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Alternatives for polling actions in MVC site? We are deploying an ASP.NET MVC site to a host that only allows us to run sites and Windows Workflows.
We need to trigger events or poll in order to trigger certain actions.
Preferably we would not use polling since we have quite a bit of data to go through.
The trigger intervals range from everything from 12hours to 5 years but we do know exactly how long to wait for each trigger.
What options do we have here?
My first thought was to use NServiceBus and use future messages, but it turns out that we can not deploy this to the host. so no luck.
Can WF be used to trigger this kind of timed actions?
Ideas here?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20173628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Visitor ID (mid) is not consistent across domains when I log in from the app and then move from the app to the web The Experience Cloud visitor ID is used. The app and the web use the same Adobe Launch library. When I log into the app, the URL changes and the mid changes; then, if I navigate from the app to the responsive web page, the mid changes again, and I am not seeing any cross-domain pathing report from app to web.
Do I need to do anything with s.cookieDomainPeriods or anything else to make this work?
A: The Experience Cloud Visitor ID is not automatically carried over from the native mobile app to a (mobile) web page. The long story short is native apps don't really store data locally in the same way as web browsers, so there's no automatic ability to use the same local storage mechanism/source between the two.
In order to do this, you must add some code to the mobile app to append the mid value to the target URL, e.g. :
Android
String urlString = "http://www.example.com/index.php";
String urlStringWithVisitorData = Visitor.appendToURL(urlString);
Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(urlStringWithVisitorData));
startActivity(browserIntent);
iOS
NSURL *url = [NSURL URLWithString:@"http://www.example.com/index.php"];
NSURL *urlWithVisitorData = [ADBMobile visitorAppendToURL:url];
[[UIApplication sharedApplication] openURL:urlWithVisitorData];
If implemented properly, you should now see a adobe_mc= parameter appended to the target URL. Then on page view of the target page, if you have the Adobe Analytics javascript and Experience Cloud Visitor ID libraries implemented, they will automatically look for and use that value instead of generate a new value (should not require any config / coding on this end).
Update:
@Ramaiyavraghvendra you made a comment:
Hi @Crayon, mny thanks for your profound answer. I am sorry that i
missed to inform that this app is not native one but this is a SPA
app. so the implementation of entire app is also done through launch.
Could you pl help in this case then.
I'm not entirely sure I understand your issue. If you are NOT moving from a native mobile app to web page, and your mobile app is really a web based SPA that outputs Launch as regular javascript code throughout the entire app, then you shouldn't have to do anything; the Experience Cloud ID service should carry over the id from page to page.
So it sounds to me like perhaps your Experience Cloud Visitor ID and/or Adobe Analytics collection server settings are not configured correctly. the cookie domain period variables may be an issue, if logging in involves moving from say www.mysite.com to www.mysite.co.uk or similar, but shouldn't be a problem if the TLD has the same # of periods.
Or, the trackingServer and trackingServerSecure variables may not be configured properly. In practice, I usually do not set trackingServerSecure at all. These variables get kind of confusing and IMO buggy in different scenarios vs. what you are using, so I tend to use the "secure" value in the trackingServer field and leave the trackingServerSecure blank, and then Experience Cloud Visitor ID and Adobe Analytics will just use the secure version 100% of the time.
Or..it could be a number of other config variables not properly set. It's hard to say if any of this is off, without access to the app and Launch container.
Also you may want to check the response headers for your logged in pages. It may be that they are configured to reject certain existing non-https cookies or something else that effectively causes the existing cookies to be unreadable and make the Experience Cloud ID service generate a new ID and cookies.
Or.. maybe your app kind of is a native mobile app but using an http wrapper to pull in web pages, so it is basically a web browser but it is effectively like moving from one web browser to another (e.g. starting on www.site.com/pageA on Chrome, and then copy/pasting that URL over to Internet Explorer to view). So effectively, different cookie jar.
Launch (or DTM) + Experience Cloud ID (Javascript methods)
In cases such as the last 2 paragraphs, you have to decorate your target links the same as my original answer, but using the Launch + Experience Cloud ID Service javascript syntax:
_satellite.getVisitorId().appendVisitorIDsTo('[your url here]');
You write some code to get the target URL of the link. Then run it through this code to return the url with the parameters added to them, and then you update your link with the new URL.
Super generic example that just updates all links on the page. In practice, you should only do this for relevant link(s) the visitor is redirected to.
var urls = document.querySelectorAll('a');
for (var i = 0, l = urls.length; i < l; i++) {
if (urls[i].href) {
urls[i].href = _satellite.getVisitorId().appendVisitorIDsTo(urls[i].href);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54578415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: warnings emitted during 'easy_install' When I easy_install some python modules, warnings such as:
<some module>: module references __file__
<some module>: module references __path__
<some module>: module MAY be using inspect.trace
<some module>: module MAY be using inspect.getsourcefile
sometimes get emitted.
Where (what package / source file) do these messages come from? Why is referencing __file__ or __path__ considered a bad thing?
A: easy_install doesn't like use of __file__ and __path__ not so much because they're dangerous, but because packages that use them almost always fail to run out of zipped eggs.
easy_install is warning because it'll install "less efficiently" into an unzipped directory instead of a zipped egg.
In practice, I'm usually glad when the zip_safe check fails, because then if I need to dive into the source of a module it's a ton easier.
A: I wouldn't worry about it. As durin42 notes, this just means that setuptools won't zip the egg when it puts it into site packages. If you don't want to see these messages, I believe you can just use the -Z flag to easy_install. That will make it always unzip the egg.
I recommend using pip. It gives you a lot less unnecessary output to deal with.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2298403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Get access token to Flipkart seller account using php Can anyone help me get an access token using the Flipkart app ID and app secret?
We have tried with the code below:
<?php
$username='appid';
$password='appsecret';
$url='https://api.flipkart.net/oauth-service/oauth/token\?grant_type\=client_credentials\&scope=Seller_Api';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
$output = curl_exec($ch);
$status_code = curl_getinfo($ch, CURLINFO_HTTP_CODE); //get status code
$info = curl_getinfo($ch);
curl_close($ch);
if(curl_errno($ch)){
echo 'Curl error: ' . curl_error($ch);
}
print_r($output);
echo $status_code;
But we get the error:
{"error":"invalid_grant","error_description":"Unauthorized grant type: client_credentials"} 400
A: I ran through the same issue and after struggling for a couple of hours I went to my seller account and recreated my "Application Id" and "Application Secret". The only difference I made was I selected "self_access_application" instead of "third_party_application" this time and I was good to go.
Please refer: https://nimb.ws/sziWmA
Hope this helps
Thanks
A: You can try this code; I also faced the same issue.
$url = "https://api.flipkart.net/oauth-service/oauth/token?grant_type=client_credentials&scope=Seller_Api";
$curl = curl_init();
curl_setopt($curl, CURLOPT_USERPWD, config('constants.flipkart.application_id').":".config('constants.flipkart.secret_key'));
curl_setopt($curl, CURLOPT_URL,$url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
$result = curl_exec($curl);
$token = json_decode($result,true);
if(isset($token['access_token'])){
$this->access_token = $token['access_token'];
}
A: You can try this; it will be helpful for Python/Odoo developers:
def flipkart_token_generation(self):
if not self.flipkart_sandbox_app_id or not self.flipkart_sandbox_cert_id:
raise UserError(_("Flipkart: cannot fetch OAuth token without credentials."))
else:
url = "https://sandbox-api.flipkart.net/oauth-service/oauth/token"
data = {'grant_type': 'client_credentials', 'scope': 'Seller_Api'}
response_json = requests.get(url, params=data, auth=(self.flipkart_sandbox_app_id, self.flipkart_sandbox_cert_id)).json()
self.env['ir.config_parameter'].sudo().set_param('flipkart_sandbox_token', response_json["access_token"])
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58928652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Getting data from another website with jQuery Ajax I just tried to load data from www.somedomain.com's #SomeText tag into a #Data tag with jQuery Ajax.
I used :
<script>
$(function () {
$("#Data").load('http://www.somedomain.com/index.html#SomeText');
});
</script>
but webpage text doesn't load and in Google Chrome's JavaScript console I got this error:
XMLHttpRequest cannot load http://www.somedomain.com/index.html. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:9154' is therefore not allowed access.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30646308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Jquery event being called multiple times So I have a form to submit photos (to a total of 8), and I'm trying to apply a small effect: once you choose a photo, the button hides and the file name is displayed along with a 'X' to remove its selection.
However, when I add multiple photos and try to remove one, the event gets called multiple times, and the more I click, the more events are fired, all from the same element.
Can anyone figure it out?
var Upload = {
init: function ( config ) {
this.config = config;
this.bindEvents();
this.counter = 1;
},
/**
* Binds all events triggered by the user.
*/
bindEvents: function () {
this.config.photoContainer.children('li').children('input[name=images]').off();
this.config.photoContainer.children('li').children('input[name=images]').on("change", this.photoAdded);
this.config.photoContainer.children('li').children('p').children('a.removePhoto').on('click', this.removePhoto);
},
/**
* Called when a new photo is selected in the input.
*/
photoAdded: function ( evt ) {
var self = Upload,
file = this.files[0];
$(this).hide();
$(this).parent().append('<p class="photo" style="background-color: gray; color: white;">' + file.name + ' <a class="removePhoto" style="color: red;" href="#">X</a></p>');
if(self.counter < 8) { // Adds another button if needed.
Upload.config.photoContainer.append( '<li><input type="file" name="images"></li>');
self.counter++;
}
Upload.bindEvents();
},
/**
* Removes the <li> from the list.
*/
removePhoto: function ( evt ) {
var self = Upload;
evt.preventDefault();
$(this).off();
$(this).parent().parent().remove();
if(self.counter == 8) { // Adds a new input, if necessary.
Upload.config.photoContainer.append( '<li><input type="file" name="images"></li>');
}
self.counter--;
Upload.bindEvents();
}
}
Upload.init({
photoContainer: $('ul#photo-upload')
});
A: You are doing a lot of: Upload.bindEvents();
You need to unbind events for those 'li's before you bind them again. Otherwise, you add more click events. That's why you are seeing more and more clicks being fired.
A: From what I see, you are trying to attach/remove event handlers based on what the user selects. This is inefficient and prone to errors.
In your case, you are calling Upload.bindEvents() each time a photo is added, without cleaning all the previous handlers. You could probably debug until you don't leak event listeners anymore, but it's not worth it.
jQuery.on is very powerful and allows you to attach handlers to elements that are not yet in the DOM. You should be able to do something like this:
init: function ( config ) {
this.config = config;
this.counter = 1;
this.config.photoContainer.on('change', 'li > input[name=images]', this.photoAdded);
this.config.photoContainer.on('click', 'li > p > a.removePhoto', this.removePhoto);
},
You attach just one handler to photoContainer, which will catch all events bubbling up from the children, regardless of when they were added. If you want to disable the handler on one of the elements, you just need to remove the removePhoto class (so that it doesn't match the filter).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/11868960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Axios and Webpack I'm really new to Webpack and I'm using axios with a React project. I installed axios via npm and then I'm importing it like so when I want to use it:
import axios from 'axios/dist/axios.min.js';
Webpack takes care of the rest. Is this the "right" way to do it?
A: I think the standard way of doing this is as follows:
import axios from 'axios';
The UMD build (axios.min.js) can be helpful when you need to include axios in a <script> tag:
<script src="https://npmcdn.com/axios/dist/axios.min.js"></script>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38857358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: PDF creation with the capability to use filter on different PDF files I need your help with an idea that came to my mind.
I have a bunch of PDF files which are tests in my college. I want to create a new PDF file which contains one exercise from each PDF (and, if possible, use a filter word to determine which exercise to choose).
The issue is that I want this process to be automated. That is, at the click of a button, a bunch of exercises from different PDF files will be filtered into a new PDF file and saved somewhere on disk.
Following are the questions about the idea:
*
*Is there any freeware available to do this?
*If I want to implement it by myself how can I create a database which allows me to save and extract PDF content with its original look and feel (margins and so on).
Thank you very much!
A: You can use iTextSharp or PdfSharp to implement a solution, assuming each exercise starts on a new page.
Loop through the document's pages and search the current page for the word 'Exercise'. If the word is found, create a new empty document, extract the page from the source file and insert it in the new document. Search the next page, if the 'Exercise' word is found, save the previous document and create a new one. If the word is not found, extract the page and insert in the document you already created.
In this way you can implement any filtering you want.
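The page-grouping part of that loop can be sketched independently of any PDF library. Assuming the text of each page has already been extracted (e.g. with the library's text extractor), the split into per-exercise page groups might look like this in Python (the marker word and sample page texts are illustrative):

```python
def group_pages_by_marker(page_texts, marker="Exercise"):
    """Split a list of page texts into groups of page indices,
    starting a new group each time the marker word appears."""
    groups = []
    current = []
    for index, text in enumerate(page_texts):
        if marker in text:
            if current:            # close the previous exercise
                groups.append(current)
            current = [index]      # the marker page starts a new one
        elif current:
            current.append(index)  # continuation page of the open exercise
        # pages before the first marker are ignored
    if current:
        groups.append(current)
    return groups

pages = [
    "Cover page",
    "Exercise 1: derivatives",
    "more of exercise 1",
    "Exercise 2: integrals",
]
print(group_pages_by_marker(pages))  # → [[1, 2], [3]]
```

Each resulting group of page indices can then be copied into its own output document with whichever PDF library you chose.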
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17609880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: BMP file formats and edge detection I came across this excellent tutorial on image processing by Bill Green - http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/edge.html
He works with BMP formats in the tutorial since they are the simplest. I tried the Sobel edge detection code and got it to compile and run. When I try it on the images on that web site (for example, LIAG.bmp, the photo of the lady), the code works just fine. However, when I use other .bmp images (for example, take any image and convert it at http://www.online-convert.com/result/6c0ce763b5e6cadf3a76a966acdb9505), the code spits out an image that can't be read by any image editor. The issue is most probably in the line -
nColors = (int)getImageInfo(bmpInput, 46, 4);
of his code. There seems to be some hard coding here which only works on the image sizes in his tutorial. The nColors variable is 256 for all images on his site, but 0 for all images I get otherwise. Can anyone tell me how I might change this piece of code to generalize it?
A: The 46 in this line:
nColors = (int)getImageInfo(bmpInput, 46, 4);
...refers to the byte offset into the header of the BMP. Unless you are creating BMPs that do not use this file structure, it should theoretically work. He is referring to 8-bit images on that page. Perhaps 16- or 32-bit images use a different file structure for the header.
Read this Wikipedia page for more info: https://en.wikipedia.org/wiki/BMP_file_format#File_structure
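To make the offsets concrete: in a standard BMP (14-byte file header followed by a 40-byte BITMAPINFOHEADER), byte offset 46 is the biClrUsed field, the number of palette colors. For 24- or 32-bit images this field is commonly 0 (no palette), which would explain nColors coming back as 0 for converted images. A small Python sketch that reads the relevant fields, using a synthetically built header for illustration:

```python
import struct

def parse_bmp_header(data):
    """Read a few BITMAPINFOHEADER fields from raw BMP bytes.
    All values are little-endian, per the BMP file format."""
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    width, height = struct.unpack_from("<ii", data, 18)
    bits_per_pixel, = struct.unpack_from("<H", data, 28)
    colors_used, = struct.unpack_from("<I", data, 46)  # the offset-46 field
    return {"width": width, "height": height,
            "bpp": bits_per_pixel, "colors": colors_used}

# Build a synthetic 54-byte header for an 8-bit 100x50 image with a
# 256-entry palette (values chosen for illustration).
header = bytearray(54)
header[0:2] = b"BM"
struct.pack_into("<ii", header, 18, 100, 50)   # width, height
struct.pack_into("<H", header, 28, 8)          # bits per pixel
struct.pack_into("<I", header, 46, 256)        # biClrUsed

print(parse_bmp_header(bytes(header)))
# → {'width': 100, 'height': 50, 'bpp': 8, 'colors': 256}
```

A paletted 8-bit header like this yields colors = 256, matching the tutorial's images; run the same parse on a converted true-color file and you would typically see colors = 0.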
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18808554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to send a session scoped attribute through jquery ajax call I have an issue. I have this jQuery code:
$(document).ready(function(){
$("#follow").click(function(){
$.ajax({
type: 'POST',
url:'/tweety-0.0.1-SNAPSHOT/twitter/tiles/follow',
data:{
searchedUser: $('#searchedUser').val()
}
})
})
})
This piece of code gets the hidden id and sends its value to the following method in my controller:
@RequestMapping(value="/follow",method=RequestMethod.POST)
public @ResponseBody void followUser(@RequestParam("searchedUser") String userToFollow,
@ModelAttribute("user") User user) {
if(userToFollow.equals(user.getUsername())){
// do nothing
}else{
service.followUser(userToFollow,user.getUsername());
}
}
I want to send a session-scoped attribute through the previous ajax call. Any clue on how to do that?
A: Your best bet is to have that attribute's value in a hidden input field somewhere on the page, so you can then read it in with jQuery.
Unfortunately, to the best of my knowledge, jQuery and JavaScript do not have access to request-, session-, or application-scope variables.
So, if you do something like this:
<input type='hidden' name='${sessionVarName}' value='${sessionVarValue}' id='sessionVar'/>
You can access it after the page loads like this:
$(function(){
var sessionVar = $('#sessionVar').val();
alert(sessionVar);
});
This is the solution I've used when needing to get tomcat session vars into javascript, the method should work for you too.
Hope this helps
A: I suggest you send an attribute name, which will let you successfully get the session-scoped attribute; that will be simple.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6908923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: TypeError: __init__() missing 1 required positional argument: 'model' Beginner question. I have this class that basically relates a post to the user:
class Post(Model):
timestamp = DateTimeField(default=datetime.datetime.now)
user = ForeignKeyField(
rel_model=User,
related_name='posts'
)
content = TextField()
class Meta:
database = DATABASE
order_by = ('-timestamp',)
I get this error when it hits the related_name='posts' line:
Traceback (most recent call last):
File "app.py", line 5, in <module>
import forms
File "/dev/forms.py", line 2, in <module>
from models import User
File "/dev/models.py", line 41, in <module>
class Post(Model):
File "/dev/models.py", line 45, in Post
related_name='posts'
TypeError: __init__() missing 1 required positional argument: 'model'
The database I'm using is Sqlite (with Peewee). I don't understand why it's asking for a positional argument 'model', when Model is a parent class. What am I missing?
A: If you're using Peewee 3.x, then:
class Post(Model):
timestamp = DateTimeField(default=datetime.datetime.now)
user = ForeignKeyField(
model=User,
backref='posts')
content = TextField()
class Meta:
database = DATABASE
Note: Meta.order_by is not supported in Peewee 3.x.
A: Try model=model.DO_NOTHING; hope this works.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51224092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Visual Studio ASPX designer doesn't render css correctly Am I the only person experiencing this? Or am I missing some simple fix?
The Web Forms designer would not show my page correctly. It basically doesn't try to apply Bootstrap classes to my controls. The result is a really awkward-looking layout that only shows correctly when I open the page in a browser. That can be a real showstopper considering how quickly we want to see how small code changes affect the output and then refine them further. If we have to go to the browser after every small change, doesn't it make life miserable for ASP.NET designers?
I have seen this problem in VS2005, VS2008, VS2010 and now in VS2013. Hasn't MS had enough time to fix such an evident issue? Or is there a fix already available?
Update
Google search brought me to my own post after 2 years. The problem still persists with VS2015 Update 3. What a shame.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26010302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Load Script Tag in Angular 2 App When Src Attribute is from Web API Call Context:
I have an Angular 2+ application that makes calls to a web API containing URLs for a src attribute on a script tag that is created by a loadScript function in the AfterViewInit lifecycle hook.
The web API returns a JsonResult and is yielding the data I expect. I was able to interpolate some of the data in the component's template.
Additionally, before I added the call to the web API, the loadScript function was working with a hard-coded argument.
Reading a thread on GitHub, a "member" stated that scripts are not supposed to be loaded on demand. So what I implemented with the loadScript function is essentially a workaround, but how else would I load them? I don't want to have a seemingly endless number of script tags sitting in the index.html file.
import { Component, OnInit, AfterViewInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { Http } from '@angular/http';
@Component({
selector: 'app-agriculture-roadmap',
templateUrl: './agriculture-roadmap.component.html',
styleUrls: ['./agriculture-roadmap.component.css']
})
export class RoadmapComponent implements OnInit, AfterViewInit {
constructor(private _httpService: Http, private _route: ActivatedRoute)
{
}
apiRoadmaps: { roadmapName: string, pdfRoadmapURL: string, jsRoadmapURL: string };
ngOnInit() {
this._httpService
.get('/api/roadmaps/' + this._route.params)
.subscribe(values => {
this.apiRoadmaps = values.json() as { roadmapName: string, pdfRoadmapURL: string, jsRoadmapURL: string };
});
}
async ngAfterViewInit() {
await this.loadScript(this.apiRoadmaps.jsRoadmapURL);
}
private loadScript(scriptUrl: string) {
return new Promise((resolve, reject) => {
const scriptElement = document.createElement('script')
scriptElement.src = scriptUrl
scriptElement.onload = resolve
document.body.appendChild(scriptElement)
})
}
}
A: If you are using Angular CLI, place these scripts in the angular-cli.json file, under the scripts array:
scripts: [
.....
]
Please refer to this link: https://rahulrsingh09.github.io/AngularConcepts/faq
It has a question on how to refer to third-party js or scripts in Angular, with or without typings.
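For illustration, a hypothetical angular-cli.json entry might look like the following; the library name and path are assumptions, so substitute the actual bundle shipped by your third-party script:

```json
{
  "apps": [
    {
      "scripts": [
        "../node_modules/example-lib/dist/example-lib.min.js"
      ]
    }
  ]
}
```

Scripts listed here get concatenated into scripts.bundle.js and loaded globally, so the library is available without adding a script tag to index.html.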
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45516022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: humanize in django/python, how to translate i only need a quick way to translate days in italian in django.
Using humanize it works properly but obviously always in english
<h1 class="">{{ h.datainserimento|date:"d M Y" }} | {{ h.nomegiorno }}</h1>
def nomegiorno(self):
pio = self.datainserimento.strftime("%A")
return pio
to:
<h1 class="">21 May 2014 | Wednesday</h1>
thanks!
A: As a quick test, try hard-coding translation.activate(language) just before returning the value:
from django.utils import translation

def nomegiorno(self):
    old = translation.get_language()
    translation.activate('it')
    pio = self.datainserimento.strftime("%A")
    translation.activate(old)
    return pio
And tell me what you have, this should help anyway
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23559804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Ensuring sensitive code is run only server-side with NextJS, where can such code be ran from? I'm learning NextJS and I'm trying to determine how to layout my project with a clean architecture that's also secure. However, I'm not sure about where to store code that contains potentially sensitive data (ie. connections to databases, accessing the file system, etc.). I've read the docs, but I'm still unsure about this one issue.
In my project layout, I have 2 directories that relate to this problem: a top-level /lib I added and the /pages/api directory that comes baked into every NextJS project.
To my understanding /pages/api NEVER sees the client-side and is hence safe for sensitive code. It should only be used as somewhere to do post, patch, delete, etc. operations. An example of where /pages/api is used would be when you make a post request to the server from a form. You can call an api from this route from ANYWHERE, for example: a form component, the /lib folder, a page in /pages, an external 3rd party api - wherever.
On the other hand, the top level /lib directory, is a place for boilerplate code, carrying out tedious operations such as sorting blog posts into alphabetical order, doing math computations, etc. that's not necessarily "secret" or sensitive - just long and annoying code. The /lib directory will ALWAYS be seen by the client-side - even if it's code that's only called by a server-side method such as getStaticProps().
In short, anything remotely sensitive should always be made as a post, patch, put etc. request to the /pages/api directory, and any long/tedious code that's not sensitive should be refactored to the /lib directory.
Do I have this all right?
A: You can do you sensitive stuff in api routes, getServerSideProps, getStaticProps. None of your code in /lib will be seen by the client unless your page actually imports code from there.
Since you were talking about db connections, it's very unlikely you'd be able to connect to your db from the browser by accident. Almost none of the libraries used to connect to a db will work from the browser, and on the client you can only access env variables that start with NEXT_PUBLIC_.
You also need to keep in mind that every file under /api will be an api route, so you should put your helper files inside /lib instead of /api. If you put them under the /api that could lead to security vulnerabilities since anyone can trigger the default exported function of the files under /api.
If you for some reason need to be absolutely certain that some code isn't bundled to the files that clients will load even if you by accidentally to import it, it can be done with custom webpack config. Note that I'd only look into this option if the code in itself is very sensitive. As in that someone being able to read the code would lead to consequences. Not talking about code doing db queries or anything like that, even if you imported them by accident to client bundles, it wouldn't pose any threat as the client cannot connect to your database.
A: The /pages/api and lib should be safe enough. These files are not exposed by Next.js.
Next.js exposes the files in your public folder.
What you have said about the lib is correct. It is just a folder that can be used to house helper functions that you can reuse within your code.
getStaticProps only runs on the server-side. It will never run on the client-side. It won't even be included in the JS bundle for the browser. That means you can write code such as direct database queries without it being sent to the browser.
You can safely make your calls with this function.
There is a tool you can use to validate that code in getStaticProps only runs serverside and never gets exposed client side.
Link to tool: https://next-code-elimination.vercel.app/
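As a minimal sketch of that guarantee: in a real page, getStaticProps would be exported from a file under /pages, and Next.js would call it at build time only. The data source below is hypothetical.

```javascript
// Hypothetical page module. fetchPostsFromDb stands in for a real database
// query; because only getStaticProps calls it, neither function ends up in
// the client bundle when Next.js compiles the page.
async function fetchPostsFromDb() {
  return [{ id: 1, title: "hello" }];
}

// In a real /pages file this would be `export async function getStaticProps()`.
async function getStaticProps() {
  const posts = await fetchPostsFromDb();
  return { props: { posts } };
}

getStaticProps().then(({ props }) => {
  console.log(props.posts.length); // 1
});
```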
A: I've used /lib in much the same way you intend on a number of Next projects and haven't had any problems. As others have mentioned, if you are server-side generating everything with getStaticProps, you should be fine.
One thing I've run into is client and server side getting out of sync between the actual client and server (especially with iFrame or data that gets manipulated after a fetch). That doesn't cause security issues but it is something to think through architecturally. Next exposes its router if you need to sync effects to URL changes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72119806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: React-router URL's, Routing node/express & file structure File structure
.root
|-client
| -build
| -node_modules
| -public
| -src
| -components
| -resources
| -data
| -images
| -templates
|-server
| -models
| -public
| -routes
| -server.js
server.js
//Required NPM Packages
const express = require('express'),
app = express(),
cors = require('cors'),
bodyParser = require('body-parser'),
mongoose = require('mongoose'),
methodOverride = require('method-override');
//MongoDB models.
const Product = require('./models/Product');
//Routes.
const indexRoute = require('./routes/index');
const demoRoute = require('./routes/demo');
const bootstrapTemplateRoute = require('./routes/bootstrapTemplate');
//Port.
const _PORT = process.env.PORT || 5000;
//Connect to mongoDB.
mongoose.connect({pathToMongoDB}, {useNewUrlParser:true, useUnifiedTopology:true});
//Setup body-parser.
app.use(bodyParser.urlencoded({extended:true}));
//Allow express/node to accept Cross-origin resource sharing.
app.use(cors());
//Set view engine to EJS.
app.set('view engine', 'ejs');
//Change views to specified directory
app.set('views', path.join(__dirname, '..', 'client','src','Resources','templates'));
app.use(express.static(__dirname+"/public"))
app.use(express.static("client/build"));
app.use(express.static("client/Resources/templates"));
//Setup method override.
app.use(methodOverride("_method"));
//register routes to express.
app.use('/', indexRoute);
app.use('/demo', demoRoute);
app.use('/bootstrapTemplate', bootstrapTemplateRoute)
//listen to established port.
app.listen(_PORT, () => {
console.log(`The server has started on port ${_PORT}!`);
});
module.exports = app;
Question
When I click the back button or load the page from my browser bar, Node.js states that it cannot get the page. I recognise that the front-end and back-end requests are different, as React Router takes care of routing via its own JavaScript allowing for zero page reloads, but how do you actually solve the problem in Node.js/Express?
Also, when I go to localhost:3000/demo it returns the data from my mongoDB and renders it in JSON format rather than loading the correct page.
Currently working on a MERN Stack with the below Nginx basic routing config.
http{
server{
listen 3000;
root pathToRoot/client/build;
location / {
proxy_pass http://localhost:5000/;
}
}
}
events {
}
I believe the problem is within my routing via express/nodejs. I can't create express specific routes and render the correct page because react is rendering it for me. I looked at the following question React-router urls don't work when refreshing or writing manually. Should I just do a catch-all and re-route back to the main index page? This seems like it would render bookmarks unusable.
Edit
Here are the 2 node routes.
index.js
const express = require('express'),
router = express.Router();
router.get("/", (req,res) => {
//Render index page.
res.render("index");
});
demo.js
const express = require('express'),
router = express.Router();
const Product = require('../models/Product');
router.get('/:searchInput', (req,res) => {
Product.find({ $text: {$search: req.params.searchInput}}, (err,foundItem) => {
if(err){
console.log(err);
}else{
res.send(foundItem);
}
})
})
router.get('/', (req, res) => {
Product.find({}, (err, foundItem) => {
res.send(foundItem);
});
});
module.exports = router;
A: I would try separating out your development folders from your build folders, as it can get a bit messy once you start running the react build. A structure I use is:
api build frontend run_build.sh
The api folder contains my development for express server, the frontend contains my development for react and the build is created from the run_build.sh script, it looks something like this.
#!/bin/bash
rm -rf build/
# Build the front end
cd frontend
npm run build
# Copy the API files
cd ..
rsync -av --progress api/ build/ --exclude node_modules
# Copy the front end build code
cp -a frontend/build/. build/client/
# Install dependencies
cd build
npm install
# Start the server
npm start
Now in your build directory you should have subfolder client which contains the built version of your react code without any clutter. To tell express to use certain routes for react, in the express server.js file add the following.
NOTE add your express api routes first before adding the react routes or it will not work.
// API Routing files - Located in the routes directory //
var indexRouter = require('./routes/index')
var usersRouter = require('./routes/users');
var oAuthRouter = require('./routes/oauth');
var loginVerification = require('./routes/login_verification')
// API Routes //
app.use('/',indexRouter);
app.use('/users', usersRouter);
app.use('/oauth',oAuthRouter);
app.use('/login_verification',loginVerification);
// React routes - Located in the client directory //
app.use(express.static(path.join(__dirname, 'client'))); // Serve react files
app.use('/login',express.static(path.join(__dirname, 'client')))
app.use('/welcome',express.static(path.join(__dirname, 'client')))
The react App function in the App.js component file will look like the following defining the routes you have just added told express to use for react.
function App() {
return (
<Router>
<div className="App">
<Switch>
<Route exact path="/login" render={(props)=><Login/>}/>
<Route exact path="/welcome" render={(props)=><Welcome/>}/>
</Switch>
</div>
</Router>
);
}
Under the components folder there are components for login and welcome. Now navigating to http://MyWebsite/login will prompt express to use the react routes, while for example navigating to http://MyWebsite/login_verification will use the express API routes.
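On the catch-all question: a common pattern (an assumption here, not something this answer prescribes) is to register all API routes first and then fall back to the built index.html for every other GET, e.g. app.get('*', (req, res) => res.sendFile(...)) in Express. Bookmarks keep working because React Router reads the URL when the shell loads and renders the matching component. The decision itself can be sketched as a plain function:

```javascript
// Sketch of the routing decision: API prefixes (matching the question's
// routes) are handled by Express, everything else receives the built
// index.html so client-side routing can take over.
const apiPrefixes = ['/demo', '/bootstrapTemplate'];

function resolve(url) {
  const isApi = apiPrefixes.some(
    (p) => url === p || url.startsWith(p + '/')
  );
  return isApi ? 'api' : 'index.html';
}

console.log(resolve('/demo'));        // api
console.log(resolve('/welcome'));     // index.html
console.log(resolve('/demo/search')); // api
```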
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65418460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Difference: this vs Myclass.class vs MyClass.getClass() in synchronisation I'm trying to understand the exact differences in synchronisation using:
*
*synchronized(MyClass.class){...}
*synchronized(myClassInstance.getClass()){...} [edited as MyClass.getClass() doesn't even compile]
*synchronized(this){...}
Thanks to other posts I get that (1) is used to make sure that there is exactly one thread in the block and that (3) ensures that there is exactly one thread per instance.
(see Java Synchronized Block for .class )
But what does (2) do? Is it identical to (3)?
A: You mention you already understand options #1 and #3 so I'll focus on option #2.
As I state in the question comments, option #2 doesn't compile as written. However, I believe your intent is to obtain the class in an instance fashion rather than a static fashion (MyClass.class).
public class MyClass {
public void foo() {
synchronized (MyClass.class) {
}
}
public void bar() {
synchronized (getClass()) {
}
}
}
In the above code both MyClass.class and getClass() return the same object which means they are "equivalent". However, you have to be careful here.
public class MySubClass extends MyClass {
// inherits methods...
}
Now the two methods (foo and bar) are not equivalent. The method foo still uses the Class of MyClass to synchronize but bar now uses the Class of MySubClass (i.e. they are no longer synchronizing on the same object).
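That divergence can be seen in a small runnable sketch (class names are illustrative):

```java
// Shows that the Class object from getClass() follows the dynamic type,
// while MyClass.class is fixed at compile time - so foo and bar would
// synchronize on different monitors for a MySubClass instance.
class MyClass {}

class MySubClass extends MyClass {}

public class LockDemo {
    public static void main(String[] args) {
        MyClass m = new MySubClass();
        System.out.println(m.getClass() == MyClass.class);    // false
        System.out.println(m.getClass() == MySubClass.class); // true
    }
}
```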
A: Point 1 will take the lock on the Class object, of which only one exists in the JVM (unless the same class is loaded by different classloaders). This can be used in static as well as non-static methods.
The second option will not compile.
The third option takes the lock on the current object, so it can only be used in instance methods, since this is not available in static methods.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51839363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamic dispatch in C++ - better syntax I want to have a dynamic_call functionality in C++. It should trigger overload resolution and call the most specific function on the target class (Visitor) depending on the dynamic type of the argument. It should replace the visitor pattern and should work like the dynamic keyword in C#.
I pasted what I got so far below. I want to not have to declare the generic lambda on caller side but on the implementation of dynamic call to make it easier to use. Is this possible?
#include <iostream>
struct Base { virtual ~Base() = default; };
class A : public Base {};
class B : public Base {};
template <class... Ts>
class dynamic_call {
public:
template <class F, class Arg>
static void call(F& func, Arg& a) {
call_impl<F, Arg, Ts...>(func, a);
}
private:
template <class F, class Arg>
static void call_impl(F& /*func*/, Arg& /*a*/) {
//end of recursion => nothing more to be done
}
template <class F, class Arg, class T, class... R>
static void call_impl(F& func, Arg& a) {
T* t = dynamic_cast<T*>(&a);
if(t) {
func(*t);
}
call_impl<F, Arg, R...>(func, a);
}
};
using namespace std;
struct Visitor {
void Visit(A&) { cout << "visited for a" << endl; }
void Visit(B&) { cout << "visited for b" << endl; }
};
int main(int /*argc*/, char */*argv*/[])
{
Visitor v;
auto func = [&v](auto& a) { v.Visit(a); };
A a;
dynamic_call<A, B>::call(func, a);
B b;
dynamic_call<A, B>::call(func, b);
{
Base& base(a);
dynamic_call<A, B>::call(func, base);
}
{
Base& base(b);
dynamic_call<A, B>::call(func, base);
}
return 0;
}
I want to call it like this without the need to add the generic lambda.
dynamic_call<A,B>::call(v, a);
A: Here are some ideas, I am not sure what your requirements are, so they might not fit:
*
*Change Visit into operator(). Then the call syntax reduces to dynamic_call<A,B>::call(v, a); as you required. Of course that is only possible if the interface of the visitor may be changed.
*Change func(*t) in call_impl to func.Visit(*t). Then again the caller can use dynamic_call<A,B>::call(v, a); and no change to the interface of the visitor is necessary. However every visitor used with dynamic_call now needs to define Visit as visitor method. I think operator() is cleaner and follows the usual patterns, e.g. for Predicates in the standard library more.
I don't particularly like either of these because the caller always has to know the possible overloads available in the visitor and has to remember using dynamic_call. So I propose to solve everything in the visitor struct:
*struct Visitor {
void Visit(A&) {
cout << "visited for a" << endl;
}
void Visit(B&) {
cout << "visited for b" << endl;
}
void Visit(Base& x) {
dynamic_call<A,B>::call([this](auto& x){Visit(x);}, x);
}
};
This can be called with v.Visit(a), v.Visit(b) and v.Visit(base). This way the user of Visitor does not need to know anything about the varying behavior for different derived classes.
*If you do not want to modify Visitor, then you can just add the overload via inheritance:
struct DynamicVisitor : Visitor {
void Visit(Base& x) {
dynamic_call<A,B>::call([this](auto& x){Visit(x);}, x);
}
};
The points can be combined, for example into:
struct Visitor {
void operator()(A&) {
cout << "visited for a" << endl;
}
void operator()(B&) {
cout << "visited for b" << endl;
}
void operator()(Base& x) {
dynamic_call<A,B>::call(*this, x);
}
};
Used as v(a), v(b) and v(base).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39945695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Check Cron Services in PHP how could I know if cron services is running using PHP script?
I want to have a PHP script that checks if cron services is running, else it will notify the admin via email so that they can make an immediate action on it.
Thank you
A: Depending on your OS, there are three approaches (all of which add considerable performance losses but might be acceptable for your app)
*
*Check the process list - You can execute a console command to check the list of running processes. I don't think this is possible on Windows, but it's no problem on Linux. Take EXTRA care to filter any and all variables used there, as running console commands can be a big security risk.
*Running files - Create a file at the start of your script and check for its existence. I think this is how most (even non-PHP) processes check whether they are running. Performance loss and security issues should be minimal; you have to take care that the file is properly removed though, even in case of an error.
*Info in storage - Like the file solution, you can add information to your database or other storage system. The performance loss is slightly bigger than file IO, but if you already have a db connection for your script, it might be worth it. It's also easier to store more information about your current process there, or to add logging.
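The second approach might look like the following PHP sketch; the lock path is an assumption, and a production version should also detect stale locks left behind by a crash:

```php
<?php
$lockFile = '/tmp/my_cron_job.lock';

if (file_exists($lockFile)) {
    // Another instance appears to be running (or a stale lock remains).
    exit("Job already running\n");
}

touch($lockFile);

try {
    // ... do the actual work here ...
} finally {
    // Remove the lock even if the work above throws.
    unlink($lockFile);
}
```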
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15740410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Embed Google URL Shortener stats Google URL Shortener (goo.gl) provide statistics on clicked links. How to embed these statistics ?
*
*Components seems hardcoded in stat's page
*IFrame targeting stat's page seems to be blank
*API provide JSOn data but no sample to display charts
Any best practices ?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19072162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Node.js - selecting a random file, reading its content, then splitting it So, I want to select a random file, read its content, split it and stringify it.
But the problem is that the file is always the same (I refreshed the page like 10 times).
Code:
//require
var fs = require('fs');
var http = require('http')
//require
var files = fs.readdirSync('./pathtofiles');
function randomfile(list){
return list[Math.floor(Math.random() * files.length)];
}
var location = './zdj' + '/' + (randomfile(files))
var data = fs.readFileSync(location, "utf8");
var splittext = data.split("||")
var app = http.createServer(function(req,res){
res.setHeader('Content-Type', 'application/json');
res.end(JSON.stringify({"test1": splittext[0], "test2": splittext[1], "test3": splittext[2]}));
});
app.listen(3000);
File(s) content example: The||example||file
A: The "random" file is determined before the server is started. In order to do this for every request you need to call randomfile(...) in the request-callback:
const app = http.createServer( (req,res) => {
const location = './zdj' + '/' + (randomfile(files))
const data = fs.readFileSync(location, "utf8");
const splittext = data.split("||")
res.setHeader('Content-Type', 'application/json');
res.end(JSON.stringify({"test1": splittext[0], "test2": splittext[1], "test3": splittext[2]}));
});
Note:
1.) you should use let/const instead of var nowadays (see this for more info)
2.) readFileSync blocks, you should consider using a non-blocking read-operation, e.g. fs.promises.readFile()
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63563668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to identify if only a certain amount of numbers in an array are consecutive I've been making a poker simulator and I've managed to make a function which can identify an array which has consecutive numbers.
def straightCheck(playerHand):
playerHand.sort()
print(playerHand)
for i in range(len(playerHand)-1):
if playerHand[i] != playerHand [i+1] - 1:
return False
print(handstrength)
return True
print(handstrength)
The only problem is that I want the function to identify 5 consecutive numbers in the array when the length of the array is greater than 5. For example: I want the array [1,2,3,4,5,6,7] to return True but i also want the array [1,3,4,5,6,7,9] to return True.
A: You're returning False too soon. Instead, you could keep a running tally of the amount of consecutive numbers you've seen so far, and reset it when you come across a number that breaks the streak.
def straightCheck(playerHand):
    playerHand.sort()
    tally = 1
    for i in range(len(playerHand) - 1):
        if playerHand[i] != playerHand[i + 1] - 1:
            tally = 0
        tally += 1
        if tally >= 5:
            return True
    return False
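Restated as a self-contained sketch, checked against the asker's own examples (note: duplicate card values would need collapsing first, e.g. with sorted(set(...)), for a real poker hand):

```python
def straight_check(player_hand):
    """Return True if the sorted hand contains 5 consecutive values."""
    hand = sorted(player_hand)
    tally = 1  # length of the current run of consecutive values
    for i in range(len(hand) - 1):
        if hand[i] != hand[i + 1] - 1:
            tally = 0  # streak broken; next increment restarts the run at 1
        tally += 1
        if tally >= 5:
            return True
    return False

print(straight_check([1, 2, 3, 4, 5, 6, 7]))  # True
print(straight_check([1, 3, 4, 5, 6, 7, 9]))  # True
print(straight_check([1, 2, 4, 5, 7, 8]))     # False
```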
A: Right now you check if numbers are not consecutive and then return false. I think you could better check if numbers are consecutive and if so raise a counter and if not then reset it. That way you know how many numbers are consecutive. If that is 5 or higher you should return True.
A: If you have a working function, you could just process all the 5-card sets in a loop:
for i in range(len(values) - 4):
is_straight = straightCheck(values[i:i+5])
if is_straight:
break
print(is_straight)
A: def straightCheck(playerHand):
    playerHand.sort()
    print(playerHand)
    count = 0
    for i in range(len(playerHand) - 1):
        if playerHand[i] == playerHand[i + 1] - 1:
            count += 1
            if count >= 4:  # 4 consecutive gaps means 5 consecutive cards
                return True
        else:
            count = 0
    return False
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33203500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: how do I use pcov in python to get errors for each parameter? I have a code and used curve_fit to fir both Lorentz and gaussin curves to the data.
I need to get error estimates for each parameter outputted, so have printed both the popt and pcov
I know the scipy reference guide states how to use pcov matrix to find the errors however this is unclear to me as I am a novice at programming.
thanks
A: Reading the docs can often provide answers to such questions. For example, the documentation for curve_fit() at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html says:
Returns: popt : array
Optimal values for the parameters so that the sum of the squared residuals of f(xdata, *popt) - ydata is minimized
pcov : 2d array
The estimated covariance of popt. The diagonals provide the variance of the parameter estimate. To compute one standard deviation errors on the parameters use perr = np.sqrt(np.diag(pcov)).
How the sigma parameter affects the estimated covariance depends on absolute_sigma argument, as described above.
which is to say: use p_sigma = np.sqrt(np.diag(pcov))
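The diagonal-and-square-root step needs nothing beyond the standard library, so it can be sketched with a hypothetical 2x2 covariance matrix (in practice pcov comes back from curve_fit as a NumPy array, and np.sqrt(np.diag(pcov)) computes the same thing):

```python
import math

# Hypothetical covariance matrix, written as nested lists here;
# curve_fit actually returns a NumPy array.
pcov = [[0.04, 0.01],
        [0.01, 0.09]]

# One-standard-deviation errors: square roots of the diagonal entries.
perr = [math.sqrt(pcov[i][i]) for i in range(len(pcov))]
print(perr)  # [0.2, 0.3]
```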
Allow me to suggest that for curve-fitting to Gaussian and Lorentzian models, you might find lmfit (https://lmfit.github.io/lmfit-py/) helpful. It provides built-in version for these and other Models. Among other features, it can print a nicely formatted report for such a fit that includes uncertainties.
For an example, see https://lmfit.github.io/lmfit-py/builtin_models.html#example-1-fit-peaked-data-to-gaussian-lorentzian-and-voigt-profiles
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43561036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Ubuntu 14.04 wildcard domains not working after uninstall of Samba I am new to Ubuntu; I just switched to 14.04 a week ago. I installed LAMPP and used this http://ubuntuforums.org/showthread.php?t=1719832 and this http://brunodbo.ca/blog/2013/04/26/setting-up-wildcard-apache-virtual-host-wildcard-dns to set up a wildcard domain and virtual hosts to enable me to test WordPress multi-site, and everything worked fine. My site was running on http://mysite.loc. I tried to install Samba to share files with my colleagues on Windows, and it also worked fine for some time. I tried using two tools with GUIs to configure Samba: this https://apps.ubuntu.com/cat/applications/gadmin-samba/ and this https://apps.ubuntu.com/cat/applications/system-config-samba/. I noticed that the two applications were colliding when reading the smb.conf file. I was not able to start the Samba service and its daemon, so I decided to uninstall the applications along with Samba. So I used the following commands.
sudo apt-get auto-remove samba
sudo apt-get purge samba
sudo apt-get purge winbind
sudo apt-get install winbind
Now I cannot get my wildcard domains working. I can access localhost and 127.0.0.1, but I cannot access mysite.loc or *.loc, which run on 127.0.0.1 via virtual hosts.
This is my /etc/samba/smb.conf file
[global]
realm =
netbios name = Samba24
server string = Samba file and print server
workgroup = WORKGROUP
security = ads
hosts allow = 127. 192.168.0.
interfaces = 127.0.0.1/8 192.168.0.0/24
bind interfaces only = yes
remote announce = 192.168.0.255
remote browse sync = 192.168.0.255
printcap name = cups
load printers = yes
cups options = raw
printing = cups
guest account = smbguest
log file = /var/log/samba/samba.log
max log size = 1000
null passwords = no
username level = 6
password level = 6
encrypt passwords = yes
unix password sync = yes
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
local master = yes
domain master = yes
preferred master = yes
domain logons = yes
os level = 80
logon drive = m:
logon home = \\%L\homes\%u
logon path = \\%L\profiles\%u
logon script = %G.bat
time server = yes
name resolve order = wins lmhosts bcast
wins support = yes
wins proxy = yes
dns proxy = no
preserve case = yes
short preserve case = yes
client use spnego = no
client signing = no
client schannel = no
server signing = no
server schannel = no
nt pipe support = yes
nt status support = yes
allow trusted domains = no
obey pam restrictions = yes
enable spoolss = yes
client plaintext auth = no
disable netbios = no
follow symlinks = no
update encrypted = yes
pam password change = no
passwd chat timeout = 120
hostname lookups = no
username map = /etc/samba/smbusers
passdb backend = tdbsam
passwd program = /usr/bin/passwd '%u'
passwd chat = *New*password* %n\n *ReType*new*password* %n\n *passwd*changed*\n
add user script = /usr/sbin/useradd -d /dev/null -c 'Samba User Account' -s /dev/null '%u'
add user to group script = /usr/sbin/useradd -d /dev/null -c 'Samba User Account' -s /dev/null -g '%g' '%u'
add group script = /usr/sbin/groupadd '%g'
delete user script = /usr/sbin/userdel '%u'
delete user from group script = /usr/sbin/userdel '%u' '%g'
delete group script = /usr/sbin/groupdel '%g'
add machine script = /usr/sbin/useradd -d /dev/null -g sambamachines -c 'Samba Machine Account' -s /dev/null -M '%u'
machine password timeout = 120
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
template shell = /dev/null
winbind use default domain = yes
winbind separator = @
winbind cache time = 360
winbind trusted domains only = yes
winbind nested groups = no
winbind nss info = no
winbind refresh tickets = no
winbind offline logon = no
[homes]
comment = Home Directories
path = /home
valid users = %U
read only = no
available = yes
browseable = yes
writable = yes
guest ok = no
public = no
printable = no
locking = no
strict locking = no
[netlogon]
comment = Network Logon Service
path = /var/lib/samba/netlogon
read only = no
available = yes
browseable = yes
writable = no
guest ok = no
public = no
printable = no
locking = no
strict locking = no
[profiles]
comment = User Profiles
path = /var/lib/samba/profiles
read only = no
available = yes
browseable = yes
writable = yes
guest ok = no
public = no
printable = no
create mode = 0600
directory mask = 0700
locking = no
strict locking = no
[printers]
comment = All Printers
path = /var/spool/samba
browseable = yes
writable = no
guest ok = no
public = no
printable = yes
locking = no
strict locking = no
[pdf-documents]
path = /var/lib/samba/pdf-documents
comment = Converted PDF Documents
admin users = %U
available = yes
browseable = yes
writeable = yes
guest ok = yes
locking = no
strict locking = no
[pdf-printer]
path = /tmp
comment = PDF Printer Service
printable = yes
guest ok = yes
use client driver = yes
printing = bsd
print command = /usr/bin/gadmin-samba-pdf %s %u
lpq command =
lprm command =
A: It definitely sounds like you're only listening on localhost, so add your IP address from eth0 (or whichever interface you're using).
Your config line:
interfaces = 127.0.0.1/8 192.168.0.0/24
is incorrect. 127.0.0.1/8 is a way to express both the IP and the subnet. 192.168.0.0/24 is a subnet declaration. Change the 192.168.0.0/24 to whatever your actual IP address is (/sbin/ifconfig) and restart samba.
You may want to read the Networking Options with Samba section to familiarize yourself with the hosts allow and hosts deny options as well
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31358729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Use DatePart() function on calculated field Access 2010 Okay so I would like to create a calculated field in access to automatically create an invoice number in the format that my company is already using.
The format is MM/YY-Group Number
I already have a field with the group number in it, and another field that has a date in the format of m/d/yyyy
I would like to pull the two digit month and the two digit year from the date field in the calculated field. I know I could use the DatePart function, but for whatever reason that function doesn't seem to work in a calculated field expression.
If anybody has any ideas on how I could accomplish this, I would appreciate it.
A: Something like this should work:
Right("0" & Month([DateField]),2) & "/" & Right(Year([DateField]),2) & "-" & [GroupNumber]
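As a sanity check, here is the same MM/YY-Group formatting logic sketched in Python (the function name and sample values are hypothetical; the Access expression above is what actually goes in the calculated field):

```python
from datetime import date

def invoice_number(d: date, group_number: str) -> str:
    # Mirrors Right("0" & Month(...), 2) & "/" & Right(Year(...), 2) & "-" & group:
    # zero-pad the month to two digits and keep the last two digits of the year.
    return f"{d.month:02d}/{d.year % 100:02d}-{group_number}"

print(invoice_number(date(2014, 6, 9), "4821"))  # 06/14-4821
```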
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24088000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: webpack gl-matrix as an external I'm writing a library that uses gl-matrix as a dependency. I'm using webpack to build the source and want to exclude gl-matrix from the bundle while still listing it as a dependency.
But it turns out I can either only pack gl-matrix into the lib, or I get an error saying objects from gl-matrix like vec3 are undefined in my lib. Any ideas?
webpack.config.js
module.exports = {
//...
output: {
filename: 'minimal-gltf-loader.js',
path: path.resolve(__dirname, 'build'),
library: 'MinimalGLTFLoader',
libraryTarget: 'umd'
},
externals: {
glMatrix: {
commonjs: 'gl-matrix',
commonjs2: 'gl-matrix',
amd: 'gl-matrix'
}
}
}
minimal-gltf-loader.js (my lib)
import {vec3, vec4, quat, mat4} from 'gl-matrix';
//...
export { MinimalGLTFLoader };
the app
import {vec3, vec4, quat, mat4} from 'gl-matrix';
var mgl = require('../build/minimal-gltf-loader.js');
A: externals: {
'gl-matrix': {
commonjs: 'gl-matrix',
commonjs2: 'gl-matrix',
amd: 'gl-matrix'
}
}
The key in the externals dict should match the name the library is imported by ('gl-matrix'), not the camel-cased glMatrix.
A: externals: [
{
'gl-matrix': {
root: 'window',
commonjs: 'gl-matrix',
commonjs2: 'gl-matrix',
amd: 'gl-matrix'
}
}
]
If you load gl-matrix via a script tag, it exposes several global variables such as vec3 and mat4, not a gl-matrix.vec3 / gl-matrix.mat4 namespace.
So you can gather them into one variable and then use that variable as the webpack external root config.
Then I found that the window object has a window property that points to itself, so using 'window' in the root field is a better choice; there is no need to declare a combined variable anymore.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46355289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: video.js unable to read local videos I have a little video.js player and I'm new to web development.
I want it to play my local videos, but I get this message:
"Sorry, no compatible source and playback technology were found for this video."
But: if I use the external link given by video.js as a sample, the video plays fine.
I downloaded some samples to try out. An OGV one worked on the website I downloaded it from. But when I try with my own file, it doesn't work and shows the same error.
Here is the .jsp code :
<div id="videoPlayer">
<video id="videoClip" class="video-js vjs-default-skin"
controls preload="auto" width="640" height="360"
poster="http://video-js.zencoder.com/oceans-clip.png"
data-setup='{"example_option":true}'>
<source src="/home/ogda/Bureau/war/src/main/resources/small.ogv" type="video/ogg" />
</video>
</div>
<script type="text/javascript" language="javascript">
var videoPlayer = videojs("videoClip");
videoPlayer.on("pause", changeVideo);
//videoPlayer.src("/home/ogda/Bureau/war/src/main/resources/lol.mp4");
videoPlayer.play();
</script>
Thank you!
EDIT : ANSWER
Commenters were right.
The solution was simply to put the videos on a webserver!
A: This is a sandbox problem. The browser does not allow loading local resources for security reasons. If you need it still, use a local webserver on your machine.
A: Most major browsers don't allow you to load in local files, since that poses a security risk, e.g. pages stealing sensitive files.
You need to test this on localhost, or setup some other server to test it.
Or, if you're using Chrome, you can turn on the flag that'll allow you to load in local files.
chrome --allow-file-access-from-files
run this from command line
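If you just need a throwaway local server, Python's standard library has one. A minimal sketch (assuming Python 3.7+ is installed; the file name small.ogv is the sample from the question, written here as a stand-in):

```python
import functools
import http.server
import pathlib
import socketserver
import tempfile
import threading
import urllib.request

# Put a stand-in video file in a temp directory and serve that directory.
tmpdir = tempfile.mkdtemp()
pathlib.Path(tmpdir, "small.ogv").write_bytes(b"dummy video")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmpdir)
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# The video is now reachable over http:// instead of a forbidden file:// path.
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/small.ogv").read()
httpd.shutdown()
httpd.server_close()
print(data)
```

From the command line, `python -m http.server 8000` run inside the video's directory does the same job, making the file reachable at http://localhost:8000/small.ogv.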
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18707594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: GTK3: gtk_menu_popup_at_pointer() without trigger_event I'm working on an otherwise non-GTK application (actually SDL2), where the right click into the window causes to open a GTK pop-up menu, so as you can see, here, GTK is just used for that purpose (and for some other, like file selection dialogue box, etc, but it's not the scope of my question).
I can happily use gtk_menu_popup(), and it works nicely. The only problem is the following warning during compilation:
warning: 'gtk_menu_popup' is deprecated: Use '(gtk_menu_popup_at_widget, gtk_menu_popup_at_pointer, gtk_menu_popup_at_rect)' instead
Which is fair enough, so I want to use gtk_menu_popup_at_pointer(), since that is basically what I want. However, here is my problem: it expects a trigger_event, which I don't have, since the popup is triggered not by a GTK event but by my own SDL2 code. According to the documentation I can pass NULL as the trigger_event, but if I do, I get the following messages on stdout, and no popup is shown:
Gtk-WARNING **: 19:10:53.512: no trigger event for menu popup
Gtk-CRITICAL **: 19:10:53.512: gtk_menu_popup_at_rect: assertion 'GDK_IS_WINDOW (rect_window)' failed
My question: is it possible to solve my problem (using a GTK popup from a non-GTK application) without using a deprecated function? gtk_menu_popup() actually works; it's just ugly that I'm relying on a deprecated feature.
The popup menu is created with gtk_menu_new(); it has no "parent" or containers or anything, since it's the only thing I want to use from GTK in this part of my application.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57911772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Scraping selective links from a webpage I have created a scraper using Python which parses links from a webpage. It scrapes 10 links from that site. Is it possible to parse only the first 8 or the last 8 of those 10 links? Actually, I can't figure out how to do it. Any help would be highly appreciated.
import requests
from lxml import html
url = "http://www.supplyhouse.com/"
def Selectivelinks(address):
response = requests.get(address)
tree = html.fromstring(response.text)
titles = tree.xpath('//ul[@id="shop-by-category-list"]')
for title in titles:
links=title.xpath('.//a/@href')
for lnk in links:
print(lnk)
Selectivelinks(url)
A: links is a list, so you can fetch the last 8 links with links[-8:] (and the first 8 with links[:8]).
Consider x a list of the numbers 0-9; then x[-8:] returns the last 8 items in the list:
x = [i for i in range(0, 10)]
print(x[-8:])
# [2, 3, 4, 5, 6, 7, 8, 9]
This is known as list slicing.
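Applied to the scraper from the question, a small sketch (the helper name selective and the stand-in link list are mine, not from the original code):

```python
def selective(links, first=None, last=None):
    """Return the first N links with links[:n], or the last N with links[-n:]."""
    if first is not None:
        return links[:first]
    if last is not None:
        return links[-last:]
    return links

# Stand-in for the hrefs the XPath query would return.
links = [f"/category/{i}" for i in range(10)]
print(selective(links, first=8))  # the first 8 links
print(selective(links, last=8))   # the last 8 links
```

In Selectivelinks() you would slice the links list the same way before the inner print loop.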
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43742179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: When a mysql database value is null, JTable getValue is not working I have a JTable that gets its values from MySQL, and a mouseClicked handler that, when I click a row, takes the row's values and writes them to JTextFields. The problem is that when the MySQL value is null, it throws a NullPointerException and doesn't write anything to the JTextFields.
It works when the values are not null and shows all the information where I want it. But when they are null, I want it to leave the JTextFields empty instead of erroring.
This is my mouseclick action code that only works when selected row's value is not null:
private void calisanTablosuMouseClicked(java.awt.event.MouseEvent evt) {
int selectedrow = calisanTablosu.getSelectedRow();
alan_adSoyad.setText(calisanTablosu.getValueAt(selectedrow,1).toString());
}
Not very good at English so basic explanation or just the code will be good. Thanks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57937971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Add fields on runtime - Microsoft Fakes Framework Using the Microsoft Fakes Framework, I am working on unit tests, but I have encountered a problem: I need to fake a dbml DataContext. These are LINQ to SQL classes.
What I need to do is fake the InsertOnSubmitT0() method of the System.Data.Linq.Table class. I want the parameter object to be added to a local field instead of being written to the database.
My question is: how can I add the local field Transports at runtime to the existing Table class? Something like this:
using (ShimsContext.Create())
{
var shimLinq = new ShimTable<Transport>()
{
//something like this:
//addField("transports", List<Transport>, false);
InsertOnSubmitT0 = (transport) =>
{
Transport t = (Transport)transport;
}
};
}
A: Generally speaking you can't add something to an existing type (you can, however, create a subtype at runtime and add things to that).
In your case, though, I suggest just using a regular variable that you capture when you create the shim; you can then return that variable as part of your shim and read it later at your discretion.
var thelist = new List<Transport>(); //fill out whatever test data you want here, in the case of TransportsGet
using (ShimsContext.Create())
{
var shimLinq = new ShimTable<Transport>()
{
InsertOnSubmitT0 = (transport) =>
{
Transport t = (Transport)transport;
thelist = t.Transports; // assign your outer variable, or do the asserts directly
}
};
}
// do assertions on thelist here
In the comments you mentioned shimming TransportsGet, you can do this the same way and just return thelist in that shim. Then you can do asserts on thelist variable at the end of the test.
However, if you want to test a .Where statement, that won't show up on the actual list directly; you have to test it some other way. You could, for example, give thelist invalid banks and assert that the code doesn't return anything.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19313375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using GLEW in a static library I have a static library project and a standard windows executable project that uses the static library. In the static library, I initialise GLEW and use various gl* functions in various classes (including templates). In the executable, I initialise GLFW (which happens before initialising GLEW, as required by the library) and use functions and classes defined in the static library.
However, I am getting access violations when glfwPollEvents() is called which makes me wonder if it's valid to include and initialise GLEW in a static library project used in an executable project with GLFW included and initialised.
In addition, is there anything special about static libraries in terms of memory access between the lib and exe? Is it possible to use gl* functions in both the static library and the executable project which alter the same context?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18187390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|