Q: Capistrano deploy removes table I use capistrano to deploy my Rails app to my VPS, but after every deploy when I visit my page I get an error. The log says:
I, [2014-11-04T08:20:16.659289 #12482] INFO -- : Started GET "/" for 82.73.170.71 at 2014-11-04 08:20:16 -0500
I, [2014-11-04T08:20:16.662717 #12482] INFO -- : Processing by HomeController#index as HTML
I, [2014-11-04T08:20:16.665979 #12482] INFO -- : Completed 500 Internal Server Error in 3ms
F, [2014-11-04T08:20:16.670152 #12482] FATAL -- :
ActiveRecord::StatementInvalid (Could not find table 'users'):
app/controllers/application_controller.rb:18:in `current_user'
app/helpers/sessions_helper.rb:26:in `logged_in?'
app/controllers/home_controller.rb:4:in `index'
To fix it I have to SSH into my VPS, go to my Rails root, and run RAILS_ENV=production bundle exec rake db:migrate. In my db folder I do still have the production.sqlite3 file, but it's empty.
My deploy.rb
# config valid only for Capistrano 3.1
lock '3.1.0'
set :application, 'movieseat'
set :repo_url, 'git@github.com:alucardu/movieseat.git'
set :deploy_to, '/home/deploy/movieseat'
set :linked_files, %w{config/database.yml config/secrets.yml}
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
require 'capistrano-rbenv'
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      execute :touch, release_path.join('tmp/restart.txt')
    end
  end
  after :publishing, 'deploy:restart'
  after :finishing, 'deploy:cleanup'
end
So why is Capistrano removing my database when I deploy?
A: Capistrano does not touch database migrations unless you invoke the deploy:migrate task from your Capfile or run bundle exec cap deploy:migrate yourself.
Your database 'disappears' because SQLite is simply a file in your db directory. Since you do not declare it as shared among releases (i.e. kept in the shared directory), it stays behind in the previous release directory and each new release starts with an empty file. Add db/production.sqlite3 to your linked_files declaration.
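The fix is a one-line change in deploy.rb; a sketch, assuming the database file lives at db/production.sqlite3 as in the question:

```ruby
# deploy.rb - add the SQLite file to linked_files so it lives in shared/
# and is symlinked into every new release instead of being left behind
set :linked_files, %w{config/database.yml config/secrets.yml db/production.sqlite3}
```

Note that Capistrano expects the linked file to already exist at shared/db/production.sqlite3 on the server before the next deploy.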
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26736507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: VBA. Data not splitting with Other parameter set to True Why isn't the data getting split on the "-"? When I try "." it works.
A: Your data is being stored as dates, which are numeric values (today's Excel date is 44,885), and a number has no dash in it; it's just displayed that way. To prove it, change the formatting of a cell to a dollar amount. The split you're doing with the period is just picking off the time part of the date (noon would be .5).
If you're trying to split out the month and day, consider converting to text or using the MONTH and DAY functions. Or you could put this formula next to your original data, drag it down, and then apply your VBA code to the new data (which will be text); I believe you'll get what you expect.
=text(A2,"YYYY-MM-DD HH:MM")
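To see why the dash split fails, it can help to model what Excel actually stores. A small Python sketch (the serial number 44885.5 is a hypothetical cell value; Excel counts days from 1899-12-30 for modern dates):

```python
from datetime import datetime, timedelta

# Excel stores a date as a serial number of days; the dashes exist only in
# the *formatted* text, so Split on "-" finds nothing to split.
EXCEL_EPOCH = datetime(1899, 12, 30)  # epoch used for modern Excel serials

def serial_to_text(serial):
    """Render a serial date the way =TEXT(A2, "YYYY-MM-DD HH:MM") would."""
    d = EXCEL_EPOCH + timedelta(days=serial)
    return d.strftime("%Y-%m-%d %H:%M")

text = serial_to_text(44885.5)  # noon on the stored date
print(text)             # 2022-11-20 12:00
print(text.split("-"))  # after converting to text, the dash split works
```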
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74511392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: How to detect restricted profile in Android 4.3 Android 4.3 provides the facility of restricted profiles. My app crashes when the user is on a restricted profile but works in normal mode. Is there any way to detect that the user is on a restricted profile?
Please tell me if anyone knows.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19559020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Java Reverse a linked list pairwise I am trying to reverse a linked list pairwise, i.e. as follows:
1->2->3->4->5 changed to 2->1->4->3->5
I have been able to do that recursively. However, I am getting confused while doing it iteratively.
public class FastList<Item>
{
    private Node<Item> first;

    private static class Node<Item>
    {
        Item item;
        Node<Item> next;
    }

    public void swapPairwiseIterative() // not working
    {
        if (first == null || first.next == null)
            return;
        Node one = first, two;
        first= first.next;
        while ( one != null || one.next != null )
        {
            two = one.next;
            one.next = two.next;
            two.next = one;
            one = one.next.next;
        }
    }
}
On debugging, I noticed that I am able to swap the two nodes correctly, but I am not able to assign the result back to the first instance variable, which points to the first element of the list. How do I do that?
Also, the line
first= first.next;
looks a bit hacky. Please suggest a more natural way of doing it.
A: Try something like this:
public void swapPairwiseIteratively() {
    if (first == null || first.next == null) return;
    Node one = first, two = first.next, prev = null;
    first = two;
    while (one != null && two != null) {
        // the previous node should point to two
        if (prev != null) prev.next = two;
        // node one should point to the one after two
        one.next = two.next;
        // node two should point to one
        two.next = one;
        // getting ready for next iteration:
        // one (now the last node) is the prev node
        prev = one;
        // one is prev's successor
        one = prev.next;
        // two is prev's successor's successor
        if (prev.next != null) two = prev.next.next;
        else two = null;
    }
}
I am not sure you can do it with only two pointers instead of three. I would work from the solution above (I haven't tested it but it should be correct) and figure out if it can be improved. I don't think the line first = two can be removed.
You could remove the condition if (prev != null) if you move the first pair swapping out of the loop (an optimization that is premature in this example).
A: You can do it either recursively or non-recursively.
public void reverseRecursive(Node startNode)
{
    Item tmp;
    if (startNode == null || startNode.next == null)
        return;
    else
    {
        tmp = startNode.item;
        startNode.item = startNode.next.item;
        startNode.next.item = tmp;
        reverseRecursive(startNode.next.next);
    }
}
Non Recursively
public void reverseNonRecursive()
{
    Node startNode = head;
    Item temp;
    while (startNode != null && startNode.next != null)
    {
        temp = startNode.item;
        startNode.item = startNode.next.item;
        startNode.next.item = temp;
        startNode = startNode.next.next;
    }
}
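The value-swap approach from the answer above can be exercised with a small self-contained demo (class and method names here are illustrative):

```java
// Demo of pairwise swapping by exchanging node payloads rather than relinking
// nodes; on 1->2->3->4->5 this produces 2->1->4->3->5.
class PairwiseSwapDemo {
    static class Node {
        int item;
        Node next;
        Node(int item) { this.item = item; }
    }

    // Swap the payloads of each adjacent pair of nodes.
    static void swapPairwise(Node head) {
        for (Node n = head; n != null && n.next != null; n = n.next.next) {
            int tmp = n.item;
            n.item = n.next.item;
            n.next.item = tmp;
        }
    }

    public static void main(String[] args) {
        // build 1->2->3->4->5
        Node head = new Node(1);
        Node cur = head;
        for (int v = 2; v <= 5; v++) {
            cur.next = new Node(v);
            cur = cur.next;
        }
        swapPairwise(head);
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) {
            sb.append(n.item);
            if (n.next != null) sb.append("->");
        }
        System.out.println(sb); // 2->1->4->3->5
    }
}
```

Swapping values avoids the pointer bookkeeping entirely, which is why the asker's recursive version was easier to get right.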
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27322787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Selecting the most popular type I have the following problem:
I'm having trouble selecting the most popular place (the one with the highest value in COUNT) for each city from a table like this:
city place COUNT
A X1 30
A X5 12
A X3 5
B X1 35
B X2 12
C X1 4
C X4 9
C X2 8
It should return something like this:
CITY PLACE
A X1
B X1
C X4
UPDATE:
I managed to select it in a way i got this table:
city place COUNT
A X1 30
A X5 12
A X3 5
B X1 35
B X2 12
C X4 9
C X2 8
C X1 4
(The count values are now sorted from highest to lowest within each city.) Is there a way I can select just the first row that appears for each city?
Thanks for taking your time.
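This is a classic greatest-n-per-group problem. A sketch of one standard approach, here run against an in-memory SQLite table from Python (table and column names are illustrative; COUNT is renamed cnt since COUNT is a reserved word):

```python
import sqlite3

# Reproduce the question's table in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (city TEXT, place TEXT, cnt INTEGER)")
conn.executemany("INSERT INTO places VALUES (?,?,?)", [
    ("A", "X1", 30), ("A", "X5", 12), ("A", "X3", 5),
    ("B", "X1", 35), ("B", "X2", 12),
    ("C", "X1", 4),  ("C", "X4", 9),  ("C", "X2", 8),
])

# Join each row against its city's maximum count and keep only the matches.
query = """
SELECT p.city, p.place
FROM places p
JOIN (SELECT city, MAX(cnt) AS max_cnt FROM places GROUP BY city) m
  ON p.city = m.city AND p.cnt = m.max_cnt
ORDER BY p.city
"""
print(conn.execute(query).fetchall())  # [('A', 'X1'), ('B', 'X1'), ('C', 'X4')]
```

This matches the expected output in the question: A and B peak at X1, while C's highest count (9) is at X4.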
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37244152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Telegram bot: Wrong response from the webhook: 426 Upgrade Required I'm trying to run a Telegram bot using a webhook, and I always get the same error when performing any request, e.g. https://api.telegram.org/botTOKEN/getWebhookInfo.
My setup:
Kubernetes/Istio -> Istio Gateway -> nginx -> python-telegram-bot
*I issued a certificate with LetsEncrypt and set it for the Istio gateway for a subdomain, as I always do. Checking the certificate at https://www.ssllabs.com/ssltest/analyze.html shows everything is set correctly and I can see the certificate information.
*I set up nginx behind the Istio gateway (it is already used for other endpoints, so I just add the following rules). The gateway forwards port 443 to nginx port 80:
server {
    listen 80;
    server_name DOMAIN;

    location /${TG_BOT_TOKEN} {
        proxy_pass http://pp-telegram-bot.default.svc.cluster.local:8000/${TG_BOT_TOKEN}/;
    }

    location /check {
        return 200 'true';
    }
}
I checked that this setup works correctly with certificate by making a request to /check endpoint with https - all works fine.
*Next, I set up the bot with python-telegram-bot:
bot = telegram.Bot(token=TG_BOT_TOKEN)

def main():
    updater = Updater(bot=bot, use_context=True)
    dispatcher = updater.dispatcher
    # add handlers
    updater.start_webhook(listen='0.0.0.0', port=8000, url_path=TG_BOT_TOKEN)
    updater.bot.set_webhook(f'https://{DOMAIN}/{TG_BOT_TOKEN}')
    updater.idle()

if __name__ == '__main__':
    main()
*I run the setup. Everything works, no crashes. Checking https://api.telegram.org/botTOKEN/getWebhookInfo gives the following response:
{
"ok": true,
"result": {
"url": "https://DOMAIN/TOKEN",
"has_custom_certificate": false,
"pending_update_count": 1,
"last_error_date": 1610810736,
"last_error_message": "Wrong response from the webhook: 426 Upgrade Required",
"max_connections": 40,
"ip_address": IP_ADDRESS
}
}
No logs are available for the bot app. Nginx logs show only a generic one-liner: "POST /TOKEN HTTP/1.1" 426 0 "-" "-" "10.244.1.13"
What I tried:
*Exporting the certificate from k8s to a pem file and using it in set_webhook. But that should not be needed, since it is not a self-signed certificate and the SSL part looks like it works (maybe not).
*Using 127.0.0.1 instead of 0.0.0.0
*Removing updater.idle()
*Setting the webhook myself from the terminal
*Rereading this manual many times: https://core.telegram.org/bots/webhooks
*Setting the webhook URL to /check; Telegram then actually starts getting a correct response (probably because this endpoint just returns 200), but obviously there is no bot behind that URL.
All the above tells me that probably something is wrong with the bot setup itself, but everything looks correct according to python-telegram-bot manual.
Last but not least, if I use getUpdates instead of webhook bot works perfectly fine.
So I have no idea what this 426 error means in that context and how to make it work.
A: You must explicitly set proxy_http_version to 1.1 to make it work; otherwise nginx defaults to HTTP/1.0 for upstream connections.
server {
    listen 80;
    server_name DOMAIN;

    location /${TG_BOT_TOKEN} {
        proxy_http_version 1.1;
        proxy_pass http://pp-telegram-bot.default.svc.cluster.local:8000/${TG_BOT_TOKEN}/;
    }
}
A: The problem is caused by nginx. Every request that passes through nginx to your python-telegram-bot gets the HTTP status "426 Upgrade Required": by default, nginx still uses HTTP/1.0 for upstream connections, while Istio (via the Envoy proxy) does not support HTTP/1.0.
So you need to force nginx to use HTTP/1.1 for upstream connections.
server {
    listen 80;
    server_name DOMAIN;

    location /${TG_BOT_TOKEN} {
        proxy_pass http://pp-telegram-bot:8000/${TG_BOT_TOKEN}/;
        proxy_http_version 1.1; # force HTTP/1.1 for the upstream connection
    }

    location /check {
        return 200 'true';
    }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65751645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Batch Command Line to Eject CD Tray? I'm currently trying to move my CD backups to my backup HDD.
To automate the task I'm trying to create a batch file that copies the files into a folder named after the label of the CD and then ejects the media.
The code looks like this so far:
@echo off
SET dest=F:\Backup\
d:
:: routine to retrieve volume label.
for /f "tokens=1-5*" %%1 in ('vol') do (
set vol=%%6 & goto done
)
:done
:: create destination folder
set dest=%dest%%vol%
mkdir "%dest%"
:: copy to destiny folder
xcopy "d:" "%dest%" /i /s /exclude:c:\excludes.txt
::eject CD
c:
I'm stuck at the eject part. I'm trying to eject the CD because I want a clear signal to draw my attention when the copy finishes (I thought opening the tray would be a good one).
Any ideas how to do it using batch? Or any other ways to "draw attention" to the end of the copy event?
Thanks :)
A: If you have no media player installed, or your anti-virus raises alarms, check my other answer.
:sub echo(str) :end sub
echo off
'>nul 2>&1|| copy /Y %windir%\System32\doskey.exe '.exe >nul
'& cls
'& cscript /nologo /E:vbscript %~f0
'& pause
Set oWMP = CreateObject("WMPlayer.OCX.7" )
Set colCDROMs = oWMP.cdromCollection
if colCDROMs.Count >= 1 then
For i = 0 to colCDROMs.Count - 1
colCDROMs.Item(i).Eject
Next ' cdrom
End If
This is a batch/VBScript hybrid (you need to save it as a batch file). I don't think it is possible to do this with plain batch. On Windows 8/8.1 it might require downloading Windows Media Player (the rightmost column). Some anti-virus programs could warn you about this script.
A: I know this question is old, but I wanted to share this:
@echo off
echo Set oWMP = CreateObject("WMPlayer.OCX.7") >> %temp%\temp.vbs
echo Set colCDROMs = oWMP.cdromCollection >> %temp%\temp.vbs
echo For i = 0 to colCDROMs.Count-1 >> %temp%\temp.vbs
echo colCDROMs.Item(i).Eject >> %temp%\temp.vbs
echo next >> %temp%\temp.vbs
echo oWMP.close >> %temp%\temp.vbs
%temp%\temp.vbs
timeout /t 1
del %temp%\temp.vbs
Just make sure you don't have a file called "temp.vbs" in your Temp folder. This can be executed directly through cmd, so you don't need a batch file, but I don't know any command like "eject E:\". Remember that this will eject all CD trays in your system.
A: UPDATE:
A script that also supports ejecting USB sticks - ejectjs.bat:
::to eject a specific drive by letter
call ejectjs.bat G
::to eject all drives that can be ejected
call ejectjs.bat *
A much better way that does not require Windows Media Player and is not recognized by anti-virus programs (yet). It must be saved with a .bat extension:
@cScript.EXE //noLogo "%~f0?.WSF" //job:info %~nx0 %*
@exit /b 0
<job id="info">
<script language="VBScript">
if WScript.Arguments.Count < 2 then
WScript.Echo "No drive letter passed"
WScript.Echo "Usage: "
WScript.Echo " " & WScript.Arguments.Item(0) & " {LETTER|*}"
WScript.Echo " * will eject all cd drives"
WScript.Quit 1
end if
driveletter = WScript.Arguments.Item(1):
driveletter = mid(driveletter,1,1):
Public Function ejectDrive (drvLtr)
Set objApp = CreateObject( "Shell.Application" ):
Set objF=objApp.NameSpace(&H11&):
'WScript.Echo(objF.Items().Count):
set MyComp = objF.Items():
for each item in objF.Items() :
iName = objF.GetDetailsOf (item,0):
iType = objF.GetDetailsOf (item,1):
iLabels = split (iName , "(" ) :
iLabel = iLabels(1):
if Ucase(drvLtr & ":)") = iLabel and iType = "CD Drive" then
set verbs=item.Verbs():
set verb=verbs.Item(verbs.Count-4):
verb.DoIt():
item.InvokeVerb replace(verb,"&","") :
ejectDrive = 1:
exit function:
end if
next
ejectDrive = 2:
End Function
Public Function ejectAll ()
Set objApp = CreateObject( "Shell.Application" ):
Set objF=objApp.NameSpace(&H11&):
'WScript.Echo(objF.Items().Count):
set MyComp = objF.Items():
for each item in objF.Items() :
iType = objF.GetDetailsOf (item,1):
if iType = "CD Drive" then
set verbs=item.Verbs():
set verb=verbs.Item(verbs.Count-4):
verb.DoIt():
item.InvokeVerb replace(verb,"&","") :
end if
next
End Function
if driveletter = "*" then
call ejectAll
WScript.Quit 0
end if
result = ejectDrive (driveletter):
if result = 2 then
WScript.Echo "no cd drive found with letter " & driveletter & ":"
WScript.Quit 2
end if
</script>
</job>
A: Requiring administrator rights is too much to ask :)
I am using wizmo:
https://www.grc.com/WIZMO/WIZMO.HTM
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19467792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to validate a form with multiple checkboxes to have at least one checked I'm trying to validate a form using the validate plugin for jQuery. I want to require that the user check at least one checkbox in a group in order for the form to be submitted. Here's my jQuery code:
$().ready(function() {
    $("#subscribeForm").validate({
        rules: { list: { required: "#list0:checked" } },
        messages: { list: "Please select at least one newsletter" }
    });
});
and here's the html form:
<form action="" method="GET" id="subscribeForm">
<fieldset id="cbgroup">
<div><input name="list" id="list0" type="checkbox" value="newsletter0" >zero</div>
<div><input name="list" id="list1" type="checkbox" value="newsletter1" >one</div>
<div><input name="list" id="list2" type="checkbox" value="newsletter2" >two</div>
</fieldset>
<input name="submit" type="submit" value="submit">
</form>
The problem is that the form submits even if nothing is checked. How can I resolve this?
A: The above addMethod by Lod Lawson is not completely correct. It's $.validator and not $.validate, and the validator method name cb_selectone requires quotes. Note also that $(element[i]).val('checked') sets the element's value (and always returns a truthy jQuery object); to test the checked state, use .is(':checked'). Here is a corrected version:
$.validator.addMethod('cb_selectone', function(value, element) {
    if (element.length > 0) {
        for (var i = 0; i < element.length; i++) {
            if ($(element[i]).is(':checked')) return true; // actually test the checked state
        }
    }
    return false;
}, 'Please select at least one option');
A: Here is a quick solution for multiple checkbox validation using the jQuery validation plugin:
jQuery.validator.addMethod('atLeastOneChecked', function(value, element) {
    // the fieldset in the question has id="cbgroup", so select by id
    return ($('#cbgroup input:checked').length > 0);
});

$('#subscribeForm').validate({
    rules: {
        list0: { atLeastOneChecked: true }
    },
    messages: {
        list0: { atLeastOneChecked: 'Please check at least one option' }
    }
});

$('#cbgroup input').click(function() {
    $('#list0').valid();
});
A: $('#subscribeForm').validate({
    rules: {
        list: {
            required: true,
            minlength: 1
        }
    }
});
I think this will make sure at least one is checked.
A: This script below should put you on the right track perhaps?
You can keep this html the same (though I changed the method to POST):
<form method="POST" id="subscribeForm">
<fieldset id="cbgroup">
<div><input name="list" id="list0" type="checkbox" value="newsletter0" >zero</div>
<div><input name="list" id="list1" type="checkbox" value="newsletter1" >one</div>
<div><input name="list" id="list2" type="checkbox" value="newsletter2" >two</div>
</fieldset>
<input name="submit" type="submit" value="submit">
</form>
and this javascript validates
function onSubmit()
{
    var fields = $("input[name='list']").serializeArray();
    if (fields.length === 0)
    {
        alert('nothing selected');
        // cancel submit
        return false;
    }
    else
    {
        alert(fields.length + " items selected");
    }
}

// register event on form, not submit button
$('#subscribeForm').submit(onSubmit);
and you can find a working example of it here
UPDATE (Oct 2012)
Additionally it should be noted that the checkboxes must have a "name" property, or else they will not be added to the array. Only having "id" will not work.
UPDATE (May 2013)
Moved the submit registration to javascript and registered the submit onto the form (as it should have been originally)
UPDATE (June 2016)
Changes == to ===
A: if (
document.forms["form"]["mon"].checked==false &&
document.forms["form"]["tues"].checked==false &&
document.forms["form"]["wed"].checked==false &&
document.forms["form"]["thrs"].checked==false &&
document.forms["form"]["fri"].checked==false
) {
alert("Select at least One Day into Five Days");
return false;
}
A: How about this:
$(document).ready(function() {
    $('#subscribeForm').submit(function() {
        var $fields = $(this).find('input[name="list"]:checked');
        if (!$fields.length) {
            alert('You must check at least one box!');
            return false; // The form will *not* submit
        }
    });
});
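Stripped of the DOM and jQuery, the check all of these answers perform reduces to "is at least one entry true". A framework-free sketch (atLeastOneChecked is a hypothetical helper name, not a plugin API):

```javascript
// Given an array of checkbox checked-states, require at least one true.
function atLeastOneChecked(checkedStates) {
  return checkedStates.some(Boolean);
}

console.log(atLeastOneChecked([false, true, false]));  // true
console.log(atLeastOneChecked([false, false, false])); // false
```

In a real form you would build the array from the checkbox group, e.g. from each input's checked property, and return false from the submit handler when the check fails.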
A: Good example without custom validate methods, but with metadata plugin and some extra html.
Demo from Jquery.Validate plugin author
A: How about this
$.validate.addMethod(cb_selectone,
function(value,element){
if(element.length>0){
for(var i=0;i<element.length;i++){
if($(element[i]).val('checked')) return true;
}
return false;
}
return false;
},
'Please select a least one')
Now you can do:
$.validate({rules: {checklist: "cb_selectone"}});
You can even go further and specify the minimum number to select with a third param in the callback function. I have not tested it yet, so tell me if it works.
A: I had to do the same thing and this is what I wrote. I made it more flexible because I had multiple groups of checkboxes to check.
// param: reqNum - number of checkboxes that must be selected
$.fn.checkboxValidate = function(reqNum) {
    var fields = this.serializeArray();
    return (fields.length < reqNum) ? 'invalid' : 'valid';
};
Then you can pass this function to check multiple groups of checkboxes with multiple rules.
// helper function to create error
function err(msg) {
    alert("Please select a " + msg + " preference.");
}

$('#reg').submit(function(e) {
    // needs at least 2 checkboxes to be selected
    if ($("input.region, input.music").checkboxValidate(2) == 'invalid') {
        err("Region and Music");
    }
});
A: I had a slightly different scenario. My checkboxes were created dynamically and they were not in the same group, but at least one of them had to be checked. My approach (never say this is perfect): I created a generic validator for all of them:
jQuery.validator.addMethod("validatorName", function(value, element) {
    if (($('input:checkbox[name=chkBox1]:checked').val() == "Val1") ||
        ($('input:checkbox[name=chkBox2]:checked').val() == "Val2") ||
        ($('input:checkbox[name=chkBox3]:checked').val() == "Val3"))
    {
        return true;
    }
    else
    {
        return false;
    }
}, "Please Select any one value");
Now I had to associate each of the checkboxes with this one single validator, and trigger validation when any of the checkboxes was clicked:
$('#piRequest input:checkbox[name=chkBox1]').click(function(e) {
    $("#myform").valid();
});
A: I checked all the answers, and even other similar questions, trying to find an optimal way using an HTML class and a custom rule.
My HTML structure for multiple checkboxes looks like this:
<div class="checkbox_wrapper">
    <label for="checkbox-1"><input class="checkbox_item" id="checkbox-1" name="checkbox_item[1]" type="checkbox" value="1" data-rule-multicheckbox_rule="1" /> Checkbox_item 1</label>
    <label for="checkbox-2"><input class="checkbox_item" id="checkbox-2" name="checkbox_item[2]" type="checkbox" value="1" data-rule-multicheckbox_rule="1" /> Checkbox_item 2</label>
</div>
and the custom rule is:
$.validator.addMethod('multicheckbox_rule', function (value, element) {
    var $parent = $(element).closest('.checkbox_wrapper');
    return $parent.find('.checkbox_item').is(':checked');
}, 'Please select at least one');
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1535187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
}
|
Q: Error: Bad routine reference "OFFSET"; routine references in standard SQL views require explicit project IDs We have some views/queries saved in a BigQuery project which get pulled by our BI tool. Those views haven't changed for a while, and this morning we saw the intermittent error below. It doesn't occur all the time; sometimes the same queries compile and sometimes they don't.
Error: Bad routine reference "OFFSET"; routine references in standard SQL views require explicit project IDs.
A: Google has fixed the issue: https://issuetracker.google.com/issues/112692348
I was able to run queries this morning using ordinal and offset with no issues.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51879715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Partial sharing of git repositories I am new to git. I am wondering whether the following scenario is supported, and if so how (i.e. git commands for setup and update).
A repository is available from three different places: 'local', 'mirror' and 'github'. 'mirror' mirrors 'local' completely and 'github' mirrors 'local' except for a 'copyrighted' directory.
Thanks.
A: A submodule can work, but if you try to clone something that contains submodules for which one of the remotes is unavailable, you'll have aggravating errors.
My alternative would be to use the 'filter-branch' command to maintain a public branch that would omit the copyrighted files for public consumption on GitHub.
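A sketch of the filter-branch idea on a throwaway repository (directory and branch names are placeholders, and newer Git recommends git-filter-repo for history rewriting):

```shell
# Build a tiny demo repo containing a copyrighted/ directory.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.email you@example.com && git config user.name you
mkdir copyrighted
echo secret > copyrighted/file.txt
echo hello > README
git add . && git commit -qm "initial import"

# Rewrite history onto a "public" branch, dropping copyrighted/ from every commit.
git checkout -qb public
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --force \
  --index-filter 'git rm -r --cached --ignore-unmatch copyrighted' \
  --prune-empty HEAD

# The rewritten history contains only README; push this branch to GitHub
# and keep the full branches on 'local' and 'mirror'.
git ls-tree --name-only HEAD
```

Note that the copyrighted files remain fully in the private branches' history; only the public branch's rewritten history omits them.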
A: You could use the git submodule support to hold the "copyrighted" directory in a separate Git repository. Keep this separate repository somewhere accessible to people who should be able to see it, and don't push it to github. For people accessing the public repository, they would see a reference to a "copyrighted" repository but would be unable to populate it.
A: I think it is not possible.
What you can try is to put the "copyrighted" directory in a separate branch which is not mirrored, but that will just create more hassle.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/278270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Difference between DataFrame.plot.line and scatter What is the difference between a "line" plot and a "scatter" plot when using DataFrame methods plot.line() and plot.scatter()?
Obviously there are some superficial differences such as line plots being connected by default, but you could turn that off with linestyle='none'.
A: From the documentation:
DataFrame.plot.scatter has the parameters s and c, which can change the size and color of the points.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57015700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Javascript image cropping different shapes I am starting work on an application that needs image cropping with various shapes. For example: I upload an image and the crosshair selection that the user gets could be the shape of a human face, a car, a heart, etc. Something similar to https://www.mystickerface.com/gettingstarted (please take the time to upload a sample image there).
Are there any out-of-the-box JavaScript libraries that I could use for this purpose? Jcrop and some others I saw only give rectangular selection.
Please let me know suggestions on how to go about the implementation.
A: It can be done with either HTML5 canvas or an old trick. If you want to "crop" irregular shapes, create an image in which the irregular shape is transparent and the rest is the same color as the background, then overlay that image on top of the portion of the image you want to make irregular, using a higher z-index if needed.
Check out How to generate color variations of the product.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12658831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Beaker Notebook plotting in Scala I am looking to learn how to plot from Scala into a Beaker cell.
I've seen the plotting API which takes a JSON object and renders a plot, however I cannot figure out how to leverage this functionality, or if there is a Java/Scala API injected into the Classpath I can leverage for this.
Does anyone have any help or pointers?
A: There is, as you suggested, a Java API on the class path. (You can see the automatic imports in the "Imports" section of the Scala or Java tab in the Language Manager.) If you choose "Class Documentation" under "Help" you will see the documentation for these API classes.
Bad news: there is no real documentation to speak of. There are method names and parameter types, basically what Javadoc will auto-generate for files that have no documentation. The tutorials have some examples using Groovy, which may help if you can do the mental conversion to Java/Scala.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37769971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can Powerbuilder Graph labels show values instead of percentages? Using PB 12
I have a pie chart that breaks down project billings. It is showing percentages, but for me it would be more useful to show the values.
I am having a very hard time finding any resources on manipulating charts. I'd also like to be able to put more information into the legend, but again, I am not finding any guides for this.
Any help greatly appreciated,
Thanks,
A: To get the values instead of percentages:
*Edit the datawindow and select the "Text" tab
*Select "Pie Graph Labels" in the TextObject drop-down list
*In the "Display Expression" field, type "value"
I'm using PB 10.5; I hope it's the same with 12.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15033850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Windows phone debugging in device I am a Windows Phone app developer. My PC does not have enough graphics capability to support the Windows Phone emulator, so I have been developing applications using a Windows Phone device (HTC HD7) for debugging and testing for almost 5 months now. Now my device hangs a lot and sometimes switches off automatically. Is it bad for the device to be used for development rather than using the emulator? Does my device have problems because of the continuous use for development?
A: I think it is not a problem to use the device for development purposes.
A: Looks like a fault in the device - I'd send it in for repair. I've certainly not heard of debugging causing issues with devices.
A: Check whether your internal storage is getting full. Also, if you have a minimum-RAM configuration, try not to use multiple apps while debugging. This should help.
Nonetheless, you can also visit a technician and get your phone thoroughly checked for issues.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14452046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I query the latest save progress of a list of users (PHP/HTML) I have a register-and-login site where people can click "save progress" on a form, and it inserts a new row with their session id in column 2 as user_id.
I also have an admin login where I can see their entries, but it shows a list of all their saves.
I wonder if you could help me figure out a query to list only the latest save of each unique person, like (id, user_id, name, score): (3,3,bob, score 5) (6,4,sam, score 30), without showing all of a user's past saves like:
(1,3,bob, score 5) (2,4,sam, score 30) (3,3,bob, score 5) (4,4,sam, score 30)
I need the latest save of each user, that is, the latest id for each distinct user_id. Hope this makes sense. Thanks!
A: select user_id, name, score
from your_table
where (user_id, id) in (select user_id, max(id) from your_table group by user_id)
A: Considering the below formats for your tables
CREATE TABLE IF NOT EXISTS `user` (
  `user_id` int(11) NOT NULL auto_increment,
  `user_name` varchar(200) collate latin1_general_ci NOT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=1;
CREATE TABLE IF NOT EXISTS `user_score` (
`id` int(11) NOT NULL auto_increment,
`user_id` int(11) NOT NULL,
`user_score` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=1;
Please try executing the below SQL select query to retrieve the latest score for each user:
SELECT u.*, s.user_score, s.id
FROM user u
JOIN user_score s ON u.user_id = s.user_id
WHERE s.id IN (SELECT MAX(id) FROM user_score GROUP BY user_id)
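The first answer's "latest id per user" pattern can be checked with an in-memory SQLite sketch (table and column names here are illustrative):

```python
import sqlite3

# One row per save; a user's latest save is the row with their MAX(id).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE saves (id INTEGER PRIMARY KEY, user_id INTEGER, name TEXT, score INTEGER)"
)
conn.executemany("INSERT INTO saves VALUES (?,?,?,?)", [
    (1, 3, "bob", 5), (2, 4, "sam", 30),
    (3, 3, "bob", 5), (4, 4, "sam", 30),
])

query = """
SELECT user_id, name, score FROM saves
WHERE (user_id, id) IN (SELECT user_id, MAX(id) FROM saves GROUP BY user_id)
ORDER BY user_id
"""
print(conn.execute(query).fetchall())  # [(3, 'bob', 5), (4, 'sam', 30)]
```

Each user appears exactly once, taken from their highest (latest) id, which is the result the question asks for.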
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13665381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Python - checking if a list is a subset of another list, and if not, how do I split it? How do I find the minimal number of lists that my list is contained in? For example:
For the lists below, I can easily find that Sarah's animals all belong in house_animals using set(sarahs_animals) < set(house_animals).
However, John's animals need to be split across zoo_animals and house_animals. johns_animals could be split a number of ways, e.g. it could also be covered by house_animals, big_animals and bird_animals. How would I find the minimal number of lists it can be split across? Thanks
johns_animals = ['dog', 'cat', 'rhino', 'flamingo']
sarahs_animals = ['dog', 'cat']
house_animals = ['dog', 'cat', 'mouse']
big_animals = ['elephant', 'horse', 'rhino']
bird_animals = ['robin', 'flamingo', 'budgie']
zoo_animals = ['rhino', 'flamingo', 'elephant']
A: I believe this is a solution (Python3, but easily adaptable to Python2).
from itertools import combinations
johns_animals = {'dog', 'cat', 'rhino', 'flamingo'}
animal_sets = { 'house_animals': {'dog', 'cat', 'mouse'},
'big_animals': {'elephant', 'horse', 'rhino'},
'bird_animals': {'robin', 'flamingo', 'budgie'},
'zoo_animals': {'rhino', 'flamingo', 'elephant'}
}
def minimal_superset(my_set):
for n in range(1,len(animal_sets)+1):
for set_of_sets in combinations(animal_sets.keys(), n):
superset_union = set.union(*(animal_sets[i] for i in set_of_sets))
if my_set <= superset_union:
return set_of_sets
print(minimal_superset(johns_animals))
We go through all possible combinations of animal sets, returning the first combination that covers the given set my_set. Since we start from the smallest combinations, i.e. those consisting of one set, and advance to two-set, three-set etc. combinations, the first one found is guaranteed to be the smallest (if there are several possible combinations of the same size, just one of them is found).
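As a quick self-contained check (same data and function as above, repeated so this snippet runs on its own), no single set covers all of John's animals, so the first covering combination found has two sets:

```python
from itertools import combinations

johns_animals = {'dog', 'cat', 'rhino', 'flamingo'}
animal_sets = {'house_animals': {'dog', 'cat', 'mouse'},
               'big_animals': {'elephant', 'horse', 'rhino'},
               'bird_animals': {'robin', 'flamingo', 'budgie'},
               'zoo_animals': {'rhino', 'flamingo', 'elephant'}}

def minimal_superset(my_set):
    # Try 1-set combinations first, then 2-set, etc.; the first hit is minimal.
    for n in range(1, len(animal_sets) + 1):
        for set_of_sets in combinations(animal_sets.keys(), n):
            if my_set <= set.union(*(animal_sets[k] for k in set_of_sets)):
                return set_of_sets

result = minimal_superset(johns_animals)
print(result)  # ('house_animals', 'zoo_animals')
```

house_animals supplies dog and cat, zoo_animals supplies rhino and flamingo, so their union covers the whole set.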
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44281436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Importing CSV file PostgreSQL using pgAdmin 4 I'm trying to import a CSV file into my PostgreSQL database but I get this error:
ERROR: invalid input syntax for integer: "id;date;time;latitude;longitude"
CONTEXT: COPY test, line 1, column id: "id;date;time;latitude;longitude"
My CSV file is simple:
id;date;time;latitude;longitude
12980;2015-10-22;14:13:44.1430000;59,86411203;17,64274849
The table is created with the following code:
CREATE TABLE kordinater.test
(
id integer NOT NULL,
date date,
"time" time without time zone,
latitude real,
longitude real
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE kordinater.test
OWNER to postgres;
A: You can use Import/Export option for this task.
*
*Right click on your table
*Select the "Import/Export" option
*Provide the proper options
*Click Ok button
A: You should try this; it should work:
COPY kordinater.test(id,date,time,latitude,longitude)
FROM 'C:\tmp\yourfile.csv' DELIMITER ',' CSV HEADER;
Your CSV header and values must be separated by commas, not semicolons, for this command to work. Alternatively, try changing the id column type to bigint.
A: I believe the quickest way to overcome this issue is to create an intermediary temporary table, so that you can import your data and cast the coordinates as you please.
Create a similar temporary table with the problematic columns as text:
CREATE TEMPORARY TABLE tmp
(
id integer,
date date,
time time without time zone,
latitude text,
longitude text
);
And import your file using COPY:
COPY tmp FROM '/path/to/file.csv' DELIMITER ';' CSV HEADER;
Once you have your data in the tmp table, you can cast the coordinates and insert them into the test table with this command:
INSERT INTO test (id, date, time, latitude, longitude)
SELECT id, date, time, replace(latitude,',','.')::numeric, replace(longitude,',','.')::numeric from tmp;
One more thing:
Since you're working with geographic coordinates, I sincerely recommend you to take a look at PostGIS. It is quite easy to install and makes your life much easier when you start your first calculations with geospatial data.
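The replace()-then-cast step above can be sketched outside PostgreSQL as well; this hypothetical SQLite version applies the same idea, with SQLite's CAST standing in for Postgres's ::numeric:

```python
import sqlite3

# Hypothetical staging table holding the comma-decimal coordinates as text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tmp (id INTEGER, latitude TEXT, longitude TEXT)")
conn.execute("INSERT INTO tmp VALUES (12980, '59,86411203', '17,64274849')")

# Swap the decimal comma for a dot, then cast the text to a real number.
row = conn.execute(
    """SELECT id,
              CAST(replace(latitude, ',', '.') AS REAL),
              CAST(replace(longitude, ',', '.') AS REAL)
       FROM tmp"""
).fetchone()
print(row)
```

The same two-step shape (import as text, then convert on insert) is what the temporary-table answer above relies on.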
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49844570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Trying to download multiple data from yf.download I'm trying to download data from yfinance. If I use this, it works:
data = yf.download("EURUSD=X", start="2020-01-01")
But if I try to add in another currency like this:
data = yf.download("EURUSD=X", "GBPUSD=X", start="2020-01-01")
I get this error:
data = yf.download("EURUSD=X", "GBPUSD=X", start="2020-01-01")
TypeError: download() got multiple values for argument 'start'
It works after adding:
data = yf.download(['EURUSD=X', 'GBPUSD=X'], start="2020-01-01", group_by='ticker')
But I'm now trying to scan the data for candle stick patterns like morning star or engulfing bar, here's the rest of the code:
engulf = talib.CDLENGULFING(data['Open'], data['High'], data['Low'],
data['Close'])
morning_star = talib.CDLMORNINGSTAR(data['Open'], data['High'],
data['Low'], data['Close'])
data['Morning_Star'] = morning_star
data['Engulfing'] = engulf
engulfing_day = data[data['Engulfing'] !=0]
morning_star = data[data['Morning_Star'] !=0]
print(engulfing_day)
print(morning_star)
But I'm getting this error now:
return self._engine.get_loc(casted_key)
File "pandas_libs\index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas_libs\index.pyx", line 101, in pandas._libs.index.IndexEngine.get_loc
File "pandas_libs\hashtable_class_helper.pxi", line 4554, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas_libs\hashtable_class_helper.pxi", line 4562, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Open'
The above exception was the direct cause of the following exception:
A: As mentioned in the "Mass download of market data" section of the blog post by the author/maintainer of the pip package (https://aroussi.com/post/python-yahoo-finance), you need to pass the tickers within a single space-separated string:
>>> import yfinance as yf
>>> data = yf.download("EURUSD=X GBPUSD=X", start="2020-01-01")
[*********************100%***********************] 2 of 2 completed
>>> data.keys()
MultiIndex([('Adj Close', 'EURUSD=X'),
('Adj Close', 'GBPUSD=X'),
( 'Close', 'EURUSD=X'),
( 'Close', 'GBPUSD=X'),
( 'High', 'EURUSD=X'),
( 'High', 'GBPUSD=X'),
( 'Low', 'EURUSD=X'),
( 'Low', 'GBPUSD=X'),
( 'Open', 'EURUSD=X'),
( 'Open', 'GBPUSD=X'),
( 'Volume', 'EURUSD=X'),
( 'Volume', 'GBPUSD=X')],
)
>>> data['Close']
EURUSD=X GBPUSD=X
Date
2019-12-31 1.120230 1.311303
2020-01-01 1.122083 1.326260
2020-01-02 1.122083 1.325030
2020-01-03 1.117144 1.315270
2020-01-06 1.116196 1.308010
... ... ...
2021-02-08 1.204877 1.373872
2021-02-09 1.205360 1.374570
2021-02-10 1.211999 1.381799
2021-02-11 1.212121 1.383260
2021-02-12 1.209482 1.382208
[294 rows x 2 columns]
You can also use group_by='ticker' in case you want to traverse over the ticker instead of the Closing price/Volume etc.
data = yf.download("EURUSD=X GBPUSD=X", start="2020-01-01", group_by='ticker')
A: You can download the stock prices of multiple assets at once by providing a list (such as ['TSLA', 'FB', 'MSFT']) as the tickers argument.
Try it like this:
data = yf.download(['EURUSD=X','GBPUSD=X'], start="2020-01-01")
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66173832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Failing to install psycopg2-binary on new docker container I have encountered a problem while trying to run my django project on a new Docker container.
It is my first time using Docker and I can't seem to find a good way to run a django project on it. Having tried multiple tutorials, I always get the error about psycopg2 not being installed.
requirements.txt:
-i https://pypi.org/simple
asgiref==3.2.7
django-cors-headers==3.3.0
django==3.0.7
djangorestframework==3.11.0
gunicorn==20.0.4
psycopg2-binary==2.8.5
pytz==2020.1
sqlparse==0.3.1
Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8000
ENV SECRET_KEY_TWITTER = "***"
While running docker-compose build, I get the following error:
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
I will gladly answer any questions that might lead to the solution.
Also, maybe someone can recommend a good tutorial on dockerizing Django apps?
A: These scripts work on a MacBook Air M1.
Dockerfile
FROM ubuntu:20.04
# ubuntu:20.04 ships without Python/pip, so install python3-pip alongside the build deps
RUN apt-get update && apt-get -y install libpq-dev gcc python3-pip && pip3 install psycopg2
COPY requirements.txt /cs_account/
RUN pip3 install -r /cs_account/requirements.txt
requirements.txt
psycopg2-binary~=2.8.6
Updated from the answer by Zoltán Buzás.
A: I made it work. This is the code:
# The images python:3.9.5-slim and python:3.9.5-slim-buster also work
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
A: This worked for me. Try slim-buster image.
In your Dockerfile
FROM python:3.8.7-slim-buster
and in your requirements.txt file
psycopg2-binary~= <<version_number>>
A: On Alpine Linux, you will need to compile all packages, even if a pre-compiled binary wheel is available on PyPI. On standard Linux-based images, you won't (https://pythonspeed.com/articles/alpine-docker-python/ - there are also other articles I've written there that might be helpful, e.g. on security).
So change your base image to python:3.8.3-slim-buster or python:3.8-slim-buster and it should work.
A: I added this to the top answer because I was getting other errors like the ones below:
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
and
src/pyodbc.h:56:10: fatal error: sql.h: No such file or directory
#include <sql.h>
This is what I did to fix it; I am not sure how others got the top answer working without these packages, but maybe it was some of the other things I was doing.
My solution, found from other posts when googling those two errors:
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install g++ libpq-dev gcc unixodbc unixodbc-dev
A: I've made a custom image with
FROM python:alpine
ADD requirements.txt /
RUN apk update --no-cache \
&& apk add build-base postgresql-dev libpq --no-cache --virtual .build-deps \
&& pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r /requirements.txt \
&& apk del .build-deps
RUN apk add postgresql-libs libpq --no-cache
and requirements.txt
django
djangorestframework
psycopg2-binary
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62715570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: Which has more priority: || or && or == I have this expression:
y[i] = ( z[i] == a && b || c )
Which of these operators (&&, ||, ==) has the highest priority?
Can you please show the order of evaluation with brackets?
A: The priority list:
*
*==
*&&
*||
A: First ==, then &&, then ||.
Your expression will be evaluated as y[i] = (((z[i] == a) && b) || c).
https://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html
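The same ranking holds for Python's analogous operators (== binds tighter than and, which binds tighter than or), so the grouping can be sketched and checked directly; note this is a Python analogy for illustration, not Java itself:

```python
# == binds tightest, then and, then or; parentheses make the grouping explicit.
z, a, b, c = 5, 6, True, True   # chosen so z == a is False

implicit = z == a and b or c        # parsed as ((z == a) and b) or c
explicit = ((z == a) and b) or c    # the grouping the parser actually uses
other = (z == a) and (b or c)       # what you'd get if or bound tighter than and

print(implicit, explicit, other)
```

Because `implicit` equals `explicit` but differs from `other`, the run confirms that `and` groups before `or`, mirroring Java's `&&` over `||`.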
A: The actual expression is evaluated as
y[i] = ( ((z[i] == a) && b) || c )
You probably want to look here for more info on operator precedence. https://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html
A: This would be:
y[i] = (((z[i] == a) && b) || c)
ref: http://introcs.cs.princeton.edu/java/11precedence/
A: Here's the full list of ALL OPERATORS:
Full list of operators in Java
Got it from "Java ist auch eine Insel" = "Java is also an island"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33583606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Cocos2dx-android-Assert Error :ccArray.cpp function:ccArrayAppendObject line:120 Assert Error :ccArray.cpp function:ccArrayAppendObject line:120
This is the error I get when trying to execute a frame animation using this code:
CCArray *frames= CCArray::create();
for(int i=0 ; i<=21 ; i++)
{
CCString *frame=CCString::createWithFormat("mypong%04d.png",i);
frames->addObject(CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName(frame->getCString()));
}
sprite->runAction(CCAnimate::create(CCAnimation::create(frames,.01)));
}
It's under the TouchesBegan method. Does anyone know what I am doing wrong here?
NOTE: i am on win7 64-bit ,cocos2dx 2.0.1, ndk r8b
A: It seems there is no CCSpriteFrame named mypong%04d.png in the CCSpriteFrameCache. You might have run CCSpriteFrameCache::sharedSpriteFrameCache()->removeUnusedSpriteFrames() or something similar before.
Or you are missing .png files in your project folder so they failed to add into CCSpriteFrameCache
A: OK, the problem was that my sprite sheet got corrupted or something weird happened with it. It does not contain images for frames 10 to 15; I don't know what happened to it. There are five black images in the sprite sheet!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/16040101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to combine Django FormSet and QuerySet in view and template? I have a Django 1.6 application that will contain a form that displays a list of customers with a pair of "Approve" and "Reject" radio buttons next to each name to indicate if we will approve or reject their method of payment. I want each radio button to be set by default to "Reject" when the form is first rendered. I also want to include a hidden "uid" field on each line that contains the customer's user ID. When the admin clicks the Approve button next to each username s/he wants to approve and then submits the form, the view should read each hidden id value for each user, inspect the radio button, and update the model if the user is approved. Here's what the form will look like:
customer1 (hidden id) [ ] approve [x] reject
customer2 (hidden id) [ ] approve [x] reject
...
customerN (hidden id) [ ] approve [x] reject
I have three problems that I don't quite understand how to solve:
*
*How do you combine the Queryset that contains my customer usernames and IDs with the FormSet that will contain the radio button pair for each queryset object? I'm pretty sure I need to use a FormSet to hold the radio buttons and I think I need to set the formset's "initial" value to the queryset but I can't get them to "connect" so that the form looks like what I've shown above. I don't see my account queryset objects when I do a "view source" in my browser.
*How do you connect the customer's ID that comes from the Account model's user column via the new_accounts query set to the uid field in the form?
*How do you iterate through the submitted formset and pull out the user IDs and the radio button objects for inspection?
I'm really having a hard time wrapping my head around these tasks. Thanks very much for your help.
# views.py
def review_payment_methods(request, template):
if request.method == "POST":
payment_method_form = ReviewPaymentMethodForm(request.POST)
if payment_method_form.is_valid():
# How to iterate through form and pull out ids and radio button values??
# Update Account table here
return HttpResponseRedirect('/admin/')
else:
new_accounts = Account.objects.filter(method_approved=False).values()
PaymentMethodFormset = formset_factory(ReviewPaymentMethodForm, extra=new_accounts.count())
formset = PaymentMethodFormset(initial=new_accounts) # This doesn't seem to work
return render_to_response(template, locals(), context_instance=RequestContext(request))
# models.py
class Account(models.Model):
"""A user's account."""
user = models.OneToOneField(User, primary_key=True, unique=True)
method_approved = models.BooleanField(default=False) # This contains Approve/Reject
# forms.py
from django import forms
from django.utils.safestring import mark_safe
class ReviewPaymentMethodForm(forms.Form):
class HorizontalRadioRenderer(forms.RadioSelect.renderer):
def render(self):
return mark_safe(u'\n'.join([u'%s\n' % w for w in self]))
DECISION_CHOICES = (('1', 'Approve'), ('2', 'Reject'))
uid = forms.IntegerField(widget=forms.HiddenInput)
decision = forms.ChoiceField(
choices = DECISION_CHOICES,
widget = forms.RadioSelect(renderer=HorizontalRadioRenderer),
initial = '2', # 1 => Approve, 2 => Reject
)
# review_payment_methods.html
<div class="custom-content">
<h1>Review Payment Methods</h1>
<form action="." method="post">{% csrf_token %}
{% for form in formset %}
{{ form.as_p }}
{% endfor %}
<input type="submit" value="Submit" />
</form>
</div>
A: This isn't really a job for formsets. You want a single form with a dynamic set of fields; each field's name is the customer UID and its value is accept or reject. To do that, you can create the fields programmatically when instantiating the form:
class ReviewPaymentMethodForm(forms.Form):
def __init__(self, *args, **kwargs):
accounts = kwargs.pop('accounts')
super(ReviewPaymentMethodForm, self).__init__(*args, **kwargs)
for account in accounts:
self.fields[str(account.id)] = forms.ChoiceField(
label=account.user.username,
choices=DECISION_CHOICES,
widget=forms.RadioSelect(renderer=HorizontalRadioRenderer),
initial='2', # 1 => Approve, 2 => Reject
)
And the view becomes:
def review_payment_methods(request, template):
new_accounts = Account.objects.filter(method_approved=False)
if request.method == "POST":
payment_method_form = ReviewPaymentMethodForm(request.POST, accounts=new_accounts)
if payment_method_form.is_valid():
for acc_id, value in payment_method_form.cleaned_data.items():
approved = (value == '1')
Account.objects.filter(pk=acc_id).update(method_approved=approved)
return HttpResponseRedirect('/admin/')
else:
form = ReviewPaymentMethodForm(accounts=new_accounts)
return render(request, template, {'form': form})
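The loop in the POST branch boils down to mapping each submitted radio choice to a boolean before updating the accounts; that piece can be sketched in plain Python with hypothetical data, no Django required:

```python
# Hypothetical cleaned_data: keys are account ids (as strings, since the form
# fields were named str(account.id)); values are the radio choices,
# where '1' means Approve and '2' means Reject.
cleaned_data = {"7": "1", "8": "2", "9": "1"}

approvals = {}
for acc_id, value in cleaned_data.items():
    approvals[int(acc_id)] = (value == "1")  # True only for Approve

print(approvals)  # maps account id -> method_approved
```

Each entry here corresponds to one `Account.objects.filter(pk=acc_id).update(method_approved=approved)` call in the view above.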
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28561368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why is Cake PHP bake creating views containing fields from the wrong table? QUESTION
Why is Cake PHP bake creating views containing fields from the wrong table and how can I get it to use the correct table?
BACKGROUND
I have a cake:
v2.5.1
... and a table:
CREATE TABLE IF NOT EXISTS `user_quotes` (
`id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`message` text COLLATE utf8_general_mysql500_ci NOT NULL,
`expires` datetime NOT NULL,
`deleted` tinyint(1) NOT NULL,
`deleted_date` datetime NOT NULL,
`modified` datetime NOT NULL,
`created` datetime NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_general_mysql500_ci COMMENT='UserQuotes' AUTO_INCREMENT=1 ;
... and its model:
class UserQuote extends AppModel
{
}
... and I bake like so (from htdocs/app/):
./Console/cake bake
... and I'm careful to select the UserQuotes controller from the bake menu.
ERROR
The view has the correct title for the UserQuotes table but the included fields and data are from the User model (and users table) instead of from the UserQuotes model (and thus user_quotes table).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33765314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Converting Time Format without using strptime I've been tasked with printing yesterday's, today's and tomorrow's date. The task itself is very simple, but I wanted to also change the way the date is displayed: I would like to display the date as day/month/year.
I've tried the ways proposed online but they don't work for me; for example, strptime apparently cannot be an attribute of datetime whenever I try using it.
Below is my code so far, with the broken bits taken out again.
# datetime is imported from the standard library
import datetime
# today is defined as the current date
today = datetime.date.today()
# yesterday is calculated by subtracting one day; datetime.timedelta() is used so the
# date arithmetic stays valid (just subtracting 1 from the day number could produce
# an invalid date such as the 0th of a month)
yesterday = today - datetime.timedelta(days = 1)
# timedelta() likewise avoids displaying an invalid date such as the 32nd of a month;
# one day is added to define the variable 'tomorrow'
tomorrow = today + datetime.timedelta(days = 1)
#here the variables are printed
print("Yesterday : ", yesterday)
print("Today : ", today)
print("Tomorrow : ", tomorrow)
A: I'm not sure why you don't want to use strftime, but if you absolutely wanted a different way, try altering your last three lines to this:
print(f"Yesterday : {yesterday.day}/{yesterday.month}/{yesterday.year}")
print(f"Today : {today.day}/{today.month}/{today.year}")
print(f"Tomorrow : {tomorrow.day}/{tomorrow.month}/{tomorrow.year}")
which produces:
Yesterday : 9/11/2022
Today : 10/11/2022
Tomorrow : 11/11/2022
You could make it a little more compact like this:
days = {'yesterday' : yesterday, 'today' : today, 'tomorrow' : tomorrow}
for daystr, day in days.items():
print (f"{daystr.title()} : {day.day}/{day.month}/{day.year}")
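If zero-padded fields are wanted (09/11/2022 rather than 9/11/2022), the same f-string approach works with a format spec on the integer attributes; a small sketch using a fixed date so the output is reproducible:

```python
import datetime

day = datetime.date(2022, 11, 9)

# Unpadded, as in the answer above:
unpadded = f"{day.day}/{day.month}/{day.year}"

# Zero-padded day and month via the :02d format spec:
padded = f"{day.day:02d}/{day.month:02d}/{day.year}"

print(unpadded, padded)
```

Swapping in `yesterday`, `today` and `tomorrow` from the question gives the padded variant of the same three print lines.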
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74386807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: PayPal Checkout Integration with Smart Payment Buttons I'm currently working with the PHP framework CodeIgniter 4.0.4 and trying to add the PayPal Checkout Integration with Smart Payment Buttons.
I used the PayPal API as an example, but I always get an error message when I try to create an order.
When I click on the PayPal button to pay, the window opens for 1-2 seconds and then closes again immediately.
Console error:
Uncaught SyntaxError: Unexpected token < in JSON at position 0
Error: Unexpected token < in JSON at position 0
at $t.error (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:59754)
at Object.<anonymous> (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:67521)
at JSON.parse (<anonymous>)
at o (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:67380)
at cr (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:67533)
at Cr.u.on (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:72204)
at Cr (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:72341)
at https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:78460
at Function.n.try (https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:14069)
at https://www.paypal.com/sdk/js?client-id=Af6lu4xavdi1_e_hEFLWQxUj48hq0bndx7o3RGgwNWuChHmenioXFLMnTOKt912F6zmftF1Siv9WsfCp&disable-funding=credit,card:2:78257
Serverside:
$clientId = getenv('paypal.CLIENT_ID');
$clientSecret = getenv('paypal.CLIENT_SECRET');
$environment = new SandboxEnvironment($clientId, $clientSecret);
$client = new PayPalHttpClient($environment);
$request = new OrdersCreateRequest();
$request->prefer('return=representation');
$request->body = [
"intent" => "CAPTURE",
"purchase_units" => [[
'reference_id' => '123',
"amount" => [
"value" => 10,
"currency_code" => "USD"
]
]],
"application_context" => [
"cancel_url" => base_url() . "/checkout",
"return_url" => base_url() . "/checkout"
]
];
try {
$response = $client->execute($request);
return json_encode($response);
} catch (HttpException $ex) {
echo $ex->statusCode;
print_r($ex->getMessage());
}
Clientside:
<script type="text/javascript">
paypal.Buttons({
env: 'sandbox',
style: {
layout: 'vertical',
size: 'responsive',
shape: 'pill',
color: 'blue',
label: 'pay'
},
createOrder: function() {
return fetch('/checkout/paypal', {
method: 'post',
headers: {
'content-type': 'application/json'
}
}).then(function(response) {
return response.json();
}).then(function(resJson) {
return resJson.result.id;
});
}
}).render('#paypal-button-container');
</script>
A: Try
<script type="text/javascript">
paypal.Buttons({
env: 'sandbox',
style: {
layout: 'vertical',
size: 'responsive',
shape: 'pill',
color: 'blue',
label: 'pay'
},
createOrder: function() {
return fetch('/checkout/paypal', {
method: 'post',
headers: {
'content-type': 'application/json'
}
}).then(function(response) {
console.log(response);
});
}
}).render('#paypal-button-container');
</script>
and see what is returned in the console. The "Unexpected token < in JSON at position 0" error means the server response starts with "<", i.e. it is HTML (most likely an error page) rather than the JSON the client expects.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64338833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using CASE function in SQL I have a table like:
2 Savita EC1 50
2 Savita EC2 55
2 Savita EC3 28
2 Savita EC4 30
2 Savita EC5 60
3 Abhi EC1 40
3 Abhi EC2 38
3 Abhi EC3 35
3 Abhi EC4 45
3 Abhi EC5 37
4 Priya EC1 60
4 Priya EC2 49
4 Priya EC3 26
4 Priya EC4 29
4 Priya EC5 44
5 Shanthi EC1 70
5 Shanthi EC2 19
5 Shanthi EC3 45
5 Shanthi EC4 44
5 Shanthi EC5 50
6 Harish EC1 60
6 Harish EC2 64
6 Harish EC3 26
6 Harish EC4 28
6 Harish EC5 29
I want to add grace marks, with these conditions:
The total grace marks per student is a maximum of 6, and grace marks can be added to at most two subjects. For example, if a candidate scored 28 in EC1 and 27 in EC2, then after adding grace marks EC1 = 30 and EC2 = 30 and he passes. If he scored 25 in EC1 and 28 in EC2, then 5 + 2 = 7 grace marks are needed, so he fails and no grace marks are added. If he failed in more than two subjects, he fails and no grace marks are added.
I have a proc like:
create procedure SP_student3
as
begin
select FstudentName, EC1, EC2, EC3, EC4, EC5, TOTALMARKS,
CASE
WHEN EC1 < 30 THEN 'FAIL'
WHEN EC2 < 30 THEN 'FAIL'
WHEN EC3 < 30 THEN 'FAIL'
WHEN EC4 < 30 THEN 'FAIL'
WHEN EC5 < 30 THEN 'FAIL'
ELSE 'PASS'
END AS RESULT
FROM (
select FstudentName, EC1, EC2, EC3, EC4, EC5, TOTALMARKS = EC1 + EC2 + EC3 + EC4 + EC5
FROM Student
PIVOT (SUM(FMarks) for Fsubject in ([EC1], [EC2], [EC3], [EC4], [EC5])) AS PIVOTTABLE
) B
end
Which gives output as:
Abhi 40 38 35 45 37 195 PASS
Harish 60 64 26 28 29 207 FAIL
Priya 60 49 26 29 44 208 FAIL
Savita 50 55 28 30 60 223 FAIL
Shanthi 70 19 45 44 50 228 FAIL
A: If my assumptions are correct, you're looking for this:
SELECT A.StudentName, EC1,EC2,EC3,EC4,EC5,Total,
case when failures > 6 or subjects > 2 then 'Failure'
else 'Pass' end as Result
FROM
( SELECT StudentName, EC1, EC2, EC3, EC4, EC5
FROM Student
PIVOT(sum(Marks) for subject in([EC1],[EC2],[EC3],[EC4],[EC5]))as pt) A,
( select
studentName,
sum(case when Marks < 30 then 30 - Marks else 0 end) as failures,
sum(case when Marks < 30 then 1 else 0 end) as subjects,
sum(marks) as total
from
student
group by
studentname
) B
where
A.StudentName = B.StudentName
Which is pretty close to the answer in your previous question
SQL Fiddle
Edit: Added the check for 2 subjects, although the test data does not contain any cases like that.
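The grace rule itself can be sketched as a plain function before writing any SQL: pass mark 30, at most 6 total grace marks, spread over at most two subjects (the names and marks below come from the question):

```python
def passes(marks, pass_mark=30, max_grace=6, max_subjects=2):
    """Return True if the student passes after applying grace marks."""
    failed = [m for m in marks if m < pass_mark]
    grace_needed = sum(pass_mark - m for m in failed)
    return len(failed) <= max_subjects and grace_needed <= max_grace

students = {
    "Savita":  [50, 55, 28, 30, 60],   # needs 2 grace marks in one subject -> pass
    "Abhi":    [40, 38, 35, 45, 37],   # no failures -> pass
    "Priya":   [60, 49, 26, 29, 44],   # needs 4 + 1 = 5 in two subjects -> pass
    "Shanthi": [70, 19, 45, 44, 50],   # needs 11 in one subject -> fail
    "Harish":  [60, 64, 26, 28, 29],   # three failed subjects -> fail
}

results = {name: passes(marks) for name, marks in students.items()}
print(results)
```

The output agrees with the SQL answer above: Savita, Abhi and Priya pass after grace marks; Shanthi and Harish fail.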
A: I took a slightly different approach to this problem. If you add two columns to your table, "GraceMarks" (int) and "Pass" (bit), you can do this with a number of sequential logical update queries and a simple select at the end. I had this in a table called "test", but you can of course change the name to "student".
-- Find if students passed or failed the year
-- Assume students are all initially flagged to Pass, and grace marks are zero.
update test SET Pass=1, GraceMarks=0
-- First, any mark <=23 is a fail
update test SET Pass=0 where Mark<=23
-- if a student failed one subject, they fail them all
update test SET Pass=0 where StudentID in (
SELECT StudentID
from dbo.test
where Pass=0)
-- Next work out how many grace marks would be needed to pass each subject
update test SET GraceMarks=30-Mark where Mark<30 and Pass=1
-- If a student used more that a total of 6 grace marks, they failed too
update test SET Pass=0 where StudentID in (
SELECT StudentID
FROM dbo.Test
GROUP BY StudentID
HAVING (SUM(GraceMarks) > 6))
-- If they used grace marks in 3 or more subjects ... fail
update test SET Pass=0 where StudentID in (
SELECT StudentID
FROM dbo.Test
WHERE (GraceMarks > 0)
GROUP BY StudentID
HAVING (COUNT(GraceMarks) > 2))
-- Now show results
select StudentID, StudentName,
sum(case course when 'EC1' then mark end) as EC1,
sum(case course when 'EC2' then mark end) as EC2,
sum(case course when 'EC3' then mark end) as EC3,
sum(case course when 'EC4' then mark end) as EC4,
sum(case course when 'EC5' then mark end) as EC5,
SUM(mark) as totalMark,
CASE Pass WHEN 0 THEN 'Fail' ELSE 'Pass' END AS YearPassorFail
from dbo.Test
group by StudentID, StudentName, CASE Pass WHEN 0 THEN 'Fail' ELSE 'Pass' END
order by StudentName
The output of this code is this table:
3 Abhi 40 38 35 45 37 195 Pass
6 Harish 60 64 26 28 29 207 Fail
4 Priya 60 49 26 29 44 208 Pass
2 Savita 50 55 28 30 60 223 Pass
5 Shanthi 70 19 45 44 50 228 Fail
You could add an additional column called "reason" and change the queries above to include the reason why they failed (Harish because 3 subjects <30 or needing 7 grace marks, Shanthi because one subject below 24).
The advantage of this approach is that you can see what is happening at each step, and you have a record of the Pass/Fail and grace marks used. You could then write queries finding out things like - how many grace marks were used? Who would have failed if the grace marks were only allowed at 3? etc. I also personally like lots of simple steps than one huge complex one, but that might just be me.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31085714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
}
|
Q: Vue.js - Getting data from multiple keys in an object I'm trying to set something up in my app where I can select an option from a list and change the background of the app depending on what's selected.
Let's say I have a list like:
<li v-for="item in items">
<label class="radio">
<input type="radio" value="{{ item.name }}" v-model="itemSelection">
{{ item.name }}
</label>
</li>
items is an array that's stored in my store.js:
items: [
{name: 'item1', img: 'placehold.it/200x200-1'},
{name: 'item2', img: 'placehold.it/200x200-2'},
{name: 'item3', img: 'placehold.it/200x200-3'}
],
So when you select item1, I want to pull not only the name from the selection (which gets passed up to the parent component in itemSelection to display there) but also the img link, to place that in CSS and change the background of the body. I'm not entirely sure how to go about this, as I'm pretty new to Vue and this is basically something I'm building to help me learn!
Thanks!
A: You can do this in several ways, e.g. with a watcher:
watch : {
itemSelection: function(val) { ... }
}
There are some examples; check this fiddle.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39417390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Xamarin Forms Messaging Center to one JSON I want to use Messaging Center for looping through a form to save as a JSON.
I have 2 Accordion views on my MainPage.xaml and inside the view are labels/checkboxes/entry.
When I press the save button in my accordion view, it performs this code, which saves the form as JSON:
public class InspectionSchemeChecks
{
public string InspectionCategory { get; set; }
public List<InsScheme> InsScheme { get; set; }
}
public class InsScheme
{
public string InspectionName { get; set; }
public string Yes { get; set; }
public string No { get; set; }
public string InspectionNotes { get; set; }
}
public void SaveButton_Clicked(object sender, EventArgs e)
{
MessagingCenter.Subscribe<MainPage>(this, "Hi", (JsonSend) => {
// do something whenever the "Hi" message is sent
var SchemeChecks = new InspectionSchemeChecks();
var InspectionList = new List<InsScheme>();
SchemeChecks.InspectionCategory = HeaderLabel.Text;
InspectionList.Add(new InsScheme()
{
InspectionName = TempLabel.Text,
Yes = TempYCheckBox.Checked.ToString(),
No = TempNCheckBox.Checked.ToString(),
InspectionNotes = TempNotes.Text
});
InspectionList.Add(new InsScheme()
{
InspectionName = OilLevelLabel.Text,
Yes = OilLevelYCheckBox.Checked.ToString(),
No = OilLevelNCheckBox.Checked.ToString(),
InspectionNotes = OilLevelNotes.Text
});
InspectionList.Add(new InsScheme()
{
InspectionName = RefrigerantLevelLabel.Text,
Yes = RefrigerantLevelYCheckBox.Checked.ToString(),
No = RefrigerantLevelNCheckBox.Checked.ToString(),
InspectionNotes = RefrigerantLevelNotes.Text
});
InspectionList.Add(new InsScheme()
{
InspectionName = VICLabel.Text,
Yes = VICYCheckBox.Checked.ToString(),
No = VICNCheckBox.Checked.ToString(),
InspectionNotes = VICNotes.Text
});
InspectionList.Add(new InsScheme()
{
InspectionName = CheckEvaporatorLabel.Text,
Yes = CheckEvaporatorYCheckBox.Checked.ToString(),
No = CheckEvaporatorNCheckBox.Checked.ToString(),
InspectionNotes = CheckEvaporatorNotes.Text
});
InspectionList.Add(new InsScheme()
{
InspectionName = SPLabel.Text,
Yes = SPYCheckBox.Checked.ToString(),
No = SPNCheckBox.Checked.ToString(),
InspectionNotes = SPNotes.Text
});
InspectionList.Add(new InsScheme()
{
InspectionName = DPLabel.Text,
Yes = DPYCheckBox.Checked.ToString(),
No = DPNCheckBox.Checked.ToString(),
InspectionNotes = DPNotes.Text
});
SchemeChecks.InsScheme = InspectionList;
var json = JsonConvert.SerializeObject(SchemeChecks, Newtonsoft.Json.Formatting.Indented);
});
}
In my MainPage.xaml.cs I have a submit button with this line of code to send the message: MessagingCenter.Send(this, "Hi");. When I click that, I want to combine the JSON together. Could I use the MessagingCenter for that? If so, could I have some guidance?
A: Something looks weird in your MessagingCenter implementation: you're subscribing every time the Save button is clicked, which is wrong. You usually subscribe only once and unsubscribe when you're no longer interested in receiving messages.
Also, I assume you're converting to JSON because you thought we can only pass strings as messages? If so, this is not the case; we can pass any object as the message argument.
I'm not sure I understand your implementation, so I've created a simple example that shows how to pass an InspectionSchemeChecks object between two classes.
public partial class MainPage : ContentPage
{
public MainPage()
{
InitializeComponent();
}
protected override void OnAppearing()
{
base.OnAppearing();
MessagingCenter.Subscribe<SomeOtherClass, InspectionSchemeChecks>(this, "InspectionSchemeChecks", OnNewMessage);
}
protected override void OnDisappearing()
{
MessagingCenter.Unsubscribe<SomeOtherClass, InspectionSchemeChecks>(this, "InspectionSchemeChecks");
base.OnDisappearing();
}
private void OnNewMessage(SomeOtherClass sender, InspectionSchemeChecks schemeChecks)
{
// Do what you want
}
}
public class SomeOtherClass
{
public void SaveButton_Clicked(object sender, EventArgs e)
{
var SchemeChecks = new InspectionSchemeChecks();
var InspectionList = new List<InsScheme>();
//...
SchemeChecks.InsScheme = InspectionList;
MessagingCenter.Send<SomeOtherClass, InspectionSchemeChecks>(this, "InspectionSchemeChecks", SchemeChecks);
}
}
You can easily modify this example to send a string instead of an InspectionSchemeChecks object as the argument if you prefer to send the JSON as the message.
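A sketch of that string variant (the message name is reused from the example above, and json stands for the string produced with JsonConvert in the question):

```csharp
// Sender side: pass the serialized json as the message argument
MessagingCenter.Send<SomeOtherClass, string>(this, "InspectionSchemeChecks", json);

// Receiver side: subscribe with the matching string type argument
MessagingCenter.Subscribe<SomeOtherClass, string>(this, "InspectionSchemeChecks",
    (sender, payload) => {
        // e.g. JsonConvert.DeserializeObject<InspectionSchemeChecks>(payload)
    });
```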
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54091509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Problem with my code: a variable that cannot be used with < or >
power=input("How much power would you like to have?(power goes from 1 to a 100)")
while power > 100 or power < 0:
if power < 100 or power > 0:
break
else:
power=input("How much power would you like to have?")
When I try to run this part of the code, it keeps showing an error message that looks like: while puissance > 100 or puissance < 0:
TypeError: '>' not supported between instances of 'str' and 'int'
A: The input function returns a string (str). To convert it to an int you need to use the int function:
power = int(input("How much power would you like to have?(power goes from 1 to a 100)"))
Note that int() will raise a ValueError if the string the user inputs isn't one that can be interpreted as an integer.
If you want to repeatedly prompt the user until they provide a valid value, use a loop with a try/except:
while True:
try:
power = int(input(
"How much power would you like to have? (power goes from 1 to 100)"
)) # raises ValueError if not an int
assert 1 <= power <= 100 # raises AssertionError if not in range
except (AssertionError, ValueError):
continue # prompt again
else:
break # continue on with this power value
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61855570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: BASH: Pause and resume a child script I want to control a child script somehow. I am making a master script which spawns many child scripts, and I need to RESUME and PAUSE them on demand.
Child
Do stuff
PAUSE
Cleanup
Parent
sleep 10
RESUME child
Is this possible?
AS PER SUGGESTIONS
Trying to do it with signals while the child runs in the background doesn't seem to work.
script1:
#!/bin/bash
"./script2" &
sleep 1
kill -2 "$!"
sleep 1
script2:
#!/bin/bash
echo "~~ENTRY"
trap 'echo you hit ctrl-c, waking up...' SIGINT
trap 'echo you hit ctrl-\, stopping...; exit' SIGQUIT
while [ 1 ]
do
echo "Waiting for signal.."
sleep 60000
echo "~~EXIT1"
done
echo "~~EXIT2"
Running:
> ./script1
A: One way to control individual process scripts is with signals. If you combine SIGINT (ctrl-c) to resume with SIGQUIT (ctrl-\) to kill, the child process looks like this:
#!/bin/sh
trap 'echo you hit ctrl-c, waking up...' SIGINT
trap 'echo you hit ctrl-\, stopping...; exit' SIGQUIT
while (true)
do
echo "do the work..."
# pause for a very long time...
sleep 600000
done
If you run this script, and hit ctrl-c, the work continues. If you hit ctrl-\, the script stops.
You would want to run this in the background then send kill -2 $pid to resume and kill -3 $pid to stop (or kill -9 would work) where $pid is the child process's process id.
Here is a good bash signals reference: http://www.ibm.com/developerworks/aix/library/au-usingtraps/
-- here is the parent script...
#!/bin/sh
./child.sh &
pid=$!
echo "child running at $pid"
sleep 2
echo "interrupt the child at $pid"
kill -INT $pid # you could also use SIGCONT
sleep 2
echo "kill the child at $pid"
kill -QUIT $pid
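An alternative sketch uses the kernel's stop/continue signals directly, so the child needs no traps at all (here sleep 60 stands in for the child script):

```shell
sleep 60 &               # stands in for the child script
pid=$!

kill -STOP "$pid"        # PAUSE the child (SIGSTOP cannot be caught or ignored)
sleep 1                  # give the kernel a moment before inspecting
state=$(ps -o stat= -p "$pid")   # state contains "T" while stopped

kill -CONT "$pid"        # RESUME the child right where it left off
kill "$pid"              # clean up for this demo
```

Because SIGSTOP/SIGCONT are handled by the kernel, the child resumes exactly at the instruction where it was paused, which matches the PAUSE/RESUME behavior described above.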
A: One way is to create a named pipe per child:
mkfifo pipe0
Then redirect stdin of the child to read from the pipe:
child < pipe0
to stop the child:
read _
(the odd _ is just there for read to have a place to store the empty line it will read).
to resume the child:
echo > pipe0
A simpler approach would be to save the stdin which gets passed to the child in the form of a pure file descriptor, but I don't remember the exact syntax and can't google a good example ATM.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25505613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Module Repeat on selected pages How to repeat HTML module with content on selected pages?
Not on all pages
I know there is an option to display module on all pages under module setting.
A: Use the "Add Existing Module" function to handle this.
First, add the HTML module to a page.
To add that HTML module to another page, use Add Existing Module instead of Add New Module.
From the drop-down you can choose the page on which you added the HTML module.
Then add it to the new page; the content will be placed there as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24908635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: OneToMany mapping not working in Spring Boot + Hibernate I am working on Spring Boot + Hibernate application. I wanted to know how I can save OneToMany relationship including the foreign key as well.
I have 2 classes Employee and Address that has OneToMany relationship.
@Entity
public class Employee {
@Id
@GeneratedValue
private int empId;
private String name;
private String designation;
@OneToMany(cascade = CascadeType.ALL, mappedBy = "employee",fetch=FetchType.EAGER)
private List<Address> addresses = new ArrayList<>();
public List<Address> getAddresses() {
return addresses;
}
public void setAddresses(List<Address> addresses) {
this.addresses = addresses;
}
}
@Entity
public class Address {
@Id
@GeneratedValue
private int addressID;
private String city;
private String country;
@ManyToOne(cascade = CascadeType.ALL,fetch=FetchType.LAZY)
@JoinColumn(name = "empId")
private Employee employee;
public Employee getEmployee() {
return employee;
}
public void setEmployee(Employee employee) {
this.employee = employee;
}
}
Dao Code :
@Repository
public class EmployeeDao {
@Autowired
private SessionFactory sessionFactory;
public Employee save(Employee employee) {
Session session = getSession();
session.persist(employee);
return employee;
}
private Session getSession() {
return sessionFactory.getCurrentSession();
}
}
When I am trying to save data using json Post request as :
{
"name": "Kamal Verma",
"designation": "SAL1",
"addresses": [
{
"city": "Noida",
"country": "India"
}
]
}
It's working fine without any error, but in the database the EMPID column in the Address table is null.
Please let me know what I am doing wrong, as the foreign key in the Address table is not getting saved.
Thanks,
Kamal
A: You have to set the 'employee' field in your Address class.
Update the setter in your Employee class to:
public void setAddresses(List<Address> addresses) {
this.addresses.clear();
for (Address address: addresses) {
address.setEmployee(this);
this.addresses.add(address);
}
}
A: Before saving the Employee object, all Address objects have to set their Employee reference so that Hibernate can populate empId in the Address table.
@Autowired
private SessionFactory sessionFactory;
public Employee save(Employee employee) {
Session session = getSession();
for(Address address: employee.getAddresses()){
address.setEmployee(employee); //Set employee to address.
}
session.persist(employee);
return employee;
}
private Session getSession() {
return sessionFactory.getCurrentSession();
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43617244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Handling token timeout using providers in Ionic 2 I have an Ionic 2 app with two providers, one related with OAuth authentication, and another with App services. When the app token expires, App service calls will return 401 errors
I would like a behavior where 401 errors are handled globally such that the OAuth provider will call the refresh token endpoint and update the token transparently
To achieve the above, I cannot directly handle errors in the App provider invoking the OAuth provider. I was thinking to override ErrorHandler class to handle 401 calls by invoking the OAuth provider and updating the local app token variables
Is that an architecturally sound decision? Would there be a preferred approach to implement the above?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41367265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Deploy front-end angular app with json-server at Cloud9 I'm new to web development and just finished an Angular course at Coursera.
Everything was OK with my course project app until I decided to deploy it to Cloud9. The app doesn't have a back-end and takes data from a simple db.json file, which I was running on my computer with json-server at localhost:3000.
I cloned my git repo to Cloud9, installed all dependencies and thought that the procedure with json-server would be the same and it would serve the json data on the server, but it looks like I was wrong.
I think I missed something and am asking for an explanation of my problem.
Thank you guys.
A: If you're looking to develop on Cloud9, you'll need to make sure you use process.env.IP instead of localhost and process.env.PORT (or port 8080) instead of 3000.
That being said, Cloud9 is not a hosting solution. If you use it as such, your account will be deactivated. Consider something like Heroku for deployment.
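For json-server specifically, that means binding to the Cloud9 host and port explicitly. A sketch, where $IP and $PORT are the environment variables Cloud9 sets for the workspace:

```sh
json-server --watch db.json --host "$IP" --port "$PORT"
```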
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35336848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Executing Stored Procedure in Class in C#, ASP.NET I have one class named e.g. abc.cs. I am using a custom function for search criteria like this:
public System.Data.DataTable myFunction(string SearchText, string ColumnName, string
SearchCriteria)
{
// ColumnName is the name of the column in db table as shown below in
// query e.g a , b , c , d, e
try
{
string strQuery = "SELECT a,b,c,d,e FROM myTable ";
SearchText = SearchText.Trim().Replace("[", "[[]");
SearchText = SearchText.Trim().Replace("'", "''");
SearchText = SearchText.Trim().Replace("%", "[%]");
SearchText = SearchText.Trim().Replace("_", "[_]");
if (SearchText != "")
{
strQuery += " where " + ColumnName + " LIKE ";
if (SearchCriteria == "x")
{
strQuery += "'" + SearchText + "%'";
}
else if (SearchCriteria == "y")
{
strQuery += "'%" + SearchText + "'";
}
else if (SearchCriteria == "z")
{
strQuery += "'%" + SearchText + "%'";
}
else
{
strQuery += "'" + SearchText + "'";
}
}
strQuery += "ORDER BY b";
}
catch (Exception E)
{
}
}
The stored procedure which I have tried so far:
USE [dbName]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[abc]
@SearchText nvarchar(100)
AS
BEGIN
SELECT a,b,c,d,e FROM myTable ;
-- what should be the criteria here.
END
GO
I am stuck on how to use SearchText in the conditions of the stored procedure, and then on how to use SearchText in myFunction.
A: you can use this...
USE [dbName]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[abc]
@SearchText nvarchar(100)
AS
BEGIN
DECLARE @sql nvarchar(4000)
SET @sql = 'SELECT a,b,c,d,e FROM myTable ' + @SearchText
-- what should be the criteria here.
EXEC sp_executesql @sql
END
GO
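Note that concatenating the raw search text into dynamic SQL leaves the procedure open to SQL injection. A parameterized sketch (the procedure name, parameter sizes, and column handling are illustrative) keeps the user text out of the SQL string by passing the LIKE pattern through sp_executesql and quoting the column name with QUOTENAME:

```sql
CREATE PROCEDURE [dbo].[abc_parameterized]
@ColumnName sysname,
@Pattern nvarchar(102) -- SearchText with the %/_ wildcards already applied
AS
BEGIN
DECLARE @sql nvarchar(4000);
-- QUOTENAME guards the column name; the pattern travels as a real parameter
SET @sql = N'SELECT a,b,c,d,e FROM myTable WHERE '
+ QUOTENAME(@ColumnName)
+ N' LIKE @Pattern ORDER BY b';
EXEC sp_executesql @sql, N'@Pattern nvarchar(102)', @Pattern = @Pattern;
END
GO
```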
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21454574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to prevent multiple definitions in C? I'm a C newbie and I was just trying to write a console application with Code::Blocks. Here's the (simplified) code:
main.c:
#include <stdio.h>
#include <stdlib.h>
#include "test.c" // include not necessary for error in Code::Blocks
int main()
{
//t = test(); // calling of method also not necessary
return 0;
}
test.c:
void test() {}
When I try to build this program, it gives the following errors:
*path*\test.c|1|multiple definition of `_ test'|
obj\Debug\main.o:*path*\test.c|1|first defined here|
There is no way that I'm multiply defining test (although I don't know where the underscore is coming from) and it seems highly unlikely that the definition is somehow included twice. This is all the code there is.
I've ruled out that this error is due to some naming conflict with other functions or files being called test or test.c. Note that the multiple and the first definition are on the same line in the same file.
Does anyone know what is causing this and what I can do about it? Thanks!
A: I had a similar problem and solved it the following way:
Function prototype declarations and global variables should go in the test.h file, and you cannot initialize a global variable in a header file.
Function definitions and the use of global variables go in the test.c file.
If you initialize global variables in the header, you will get the following error:
multiple definition of `_ test'|
obj\Debug\main.o:path\test.c|1|first defined here|
Just declaring global variables in the header file, without initialization, should work.
Hope it helps.
Cheers
A: Including the implementation file (test.c) causes it to be prepended to your main.c and compiled there, and then compiled again separately. So the function test has two definitions -- one in the object code of main.c and one in that of test.c, which gives you an ODR violation. You need to create a header file containing the declaration of test and include it in main.c:
/* test.h */
#ifndef TEST_H
#define TEST_H
void test(); /* declaration */
#endif /* TEST_H */
A: If you have added test.c to your Code::Blocks project, the definition will be seen twice - once via the #include and once by the linker. You need to:
*
*remove the #include "test.c"
*create a file test.h which contains the declaration:
void test();
*include the file test.h in main.c
A: If you're using Visual Studio you could also put "#pragma once" at the top of the header file to achieve the same thing as the "#ifndef ..." wrapping. Some other compilers probably support it as well.
However, don't do this :D Stick with the #ifndef wrapping to achieve cross-compiler compatibility. I just wanted to let you know that you could also use #pragma once, since you'll probably meet this statement quite a bit when reading other people's code.
Good luck with it
A: The underscore is put there by the compiler and used by the linker. The basic path is:
main.c
test.h ---> [compiler] ---> main.o --+
|
test.c ---> [compiler] ---> test.o --+--> [linker] ---> main.exe
So, your main program should include the header file for the test module which should consist only of declarations, such as the function prototype:
void test(void);
This lets the compiler know that it exists when main.c is being compiled but the actual code is in test.c, then test.o.
It's the linking phase that joins together the two modules.
By including test.c into main.c, you're defining the test() function in main.o. Presumably, you're then linking main.o and test.o, both of which contain the function test().
A: Ages after this I found another problem that causes the same error, and I did not find the answer anywhere, so I thought I would put it here for reference for other people experiencing the same problem.
I defined a function in a header file and it kept throwing this error. (I know it is not the right way, but I thought I would quickly test it that way.)
The solution was to ONLY put the declaration in the header file and the definition in the cpp file.
The reason is that header files are not compiled on their own; they are pasted into every source file that includes them, so a definition there ends up in multiple translation units.
A: You shouldn't include other source files (*.c) in .c files. I think you want to have a header (.h) file with the DECLARATION of the test function, and have its DEFINITION in a separate .c file.
The error is caused by the multiple definitions of the test function (one in test.c and the other in main.c).
A: You actually compile the source code of test.c twice:
*
*The first time when compiling test.c itself,
*The second time when compiling main.c which includes all the test.c source.
What you need in your main.c in order to use the test() function is a simple declaration, not its definition. This is achieved by including a test.h header file which contains something like:
void test(void);
This informs the compiler that such a function with input parameters and return type exists. What this function does ( everything inside { and } ) is left in your test.c file.
In main.c, replace #include "test.c" by #include "test.h".
A last point: with your programs being more complex, you will be faced to situations when header files may be included several times. To prevent this, header sources are sometimes enclosed by specific macro definitions, like:
#ifndef TEST_H_INCLUDED
#define TEST_H_INCLUDED
void test(void);
#endif
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/672734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
}
|
Q: WPF AvalonDock LayoutChanged and LayoutChanging MVVM I am using the AvalonDock component in my MVVM WPF application. In my XAML I have something like this:
<xcad:DockingManager Name="_dockingManager" Margin="5" Grid.Row="2"
DataContext="{Binding DockingManagerViewModel}"
DocumentsSource="{Binding Documents}"
ActiveContent="{Binding Path=ActiveContent, Mode=TwoWay}"
AnchorablesSource="{Binding Anchorables}">
Now I want to react to layout changes. As shown in the XAML snippet above, I have bound the DockingManager to the "DockingManagerViewModel", so I assume I should also handle the layout changes in my view model. The main problem is that the docking manager offers LayoutChanging and LayoutChanged events, and I have no idea how to handle these in my view model. I guess I cannot bind these events to corresponding commands in my view model? Any idea what's the best approach to handle this?
For better understanding, what I want to achieve is the following: The user shows a "Properties" window and then drags the window from the right side to the left. After that, the user closes the "Properties" window and soon after the user decides to show the properties window again. In this case I want to bring back the window on the left side because this was the last location. So my idea was to store the last location in the view model during the layout change so that I can restore the location when the view is shown again.
A: So you want to react to some events happening in the UI. First things first: if you want, in reaction, to change only your view/layout, you do not need ICommand and a simple event handler will do the trick. If you expect to change the underlying data (your view model) in reaction to that event, you can use an ICommand or an event handler, as described below.
Let's first define a simple view model for our MainWindow:
public class MyViewModel {
/// <summary>
/// Command that performs stuff.
/// </summary>
public ICommand MyCommand { get; private set; }
public MyViewModel() {
//Create the ICommand
MyCommand = new RelayCommand(() => PerformStuff());
}
public void PerformStuff() {
//Do stuff that affects your view model and model.
//Do not do anything here that needs a reference to a view object, as this breaks MVVM.
Console.WriteLine("Stuff performed in ViewModel.");
}
}
This assumes that you have a RelayCommand implementation of the ICommand interface that lets you invoke Action delegates on Execute:
public class RelayCommand : ICommand {
//Saved Action to invoke on Execute
private Action _action;
/// <summary>
/// ICommand that always runs the passed <paramref name="action"/> when executing.
/// </summary>
/// <param name="action"></param>
public RelayCommand(Action action) {
_action = action;
}
#region ICommand
public event EventHandler CanExecuteChanged;
public bool CanExecute(object parameter) => true;
public void Execute(object parameter) => _action.Invoke();
#endregion
}
In your MainWindow.xaml, we define two Border objects: the first one operates on the view model through an event handler, the second one through the ICommand pattern:
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="*"></RowDefinition>
<RowDefinition Height="*"></RowDefinition>
</Grid.RowDefinitions>
<!--Performs action on click through event handler-->
<Border Grid.Row="0" MouseUp="Border_MouseUp" Background="Red"></Border>
<!--Performs action on click through ICommand-->
<Border Grid.Row="1" Background="Blue">
<Border.InputBindings>
<MouseBinding MouseAction="LeftClick" Command="{Binding MyCommand}"></MouseBinding>
</Border.InputBindings>
</Border>
</Grid>
In your MainWindow.xaml.cs, assign a view model object and define the event handler for mouse up event:
public partial class MainWindow : Window {
public MainWindow() {
InitializeComponent();
DataContext = new MyViewModel();
}
//Handles mouse up event on the first Border
private void Border_MouseUp(object sender, System.Windows.Input.MouseButtonEventArgs e) {
//...
//Do stuff that affects only your view here!
//...
//Now the stuff that affects the view model/model:
((MyViewModel)DataContext).PerformStuff();
}
}
Clicking on either of the two Border objects will perform stuff on your view model.
How to apply this to your specific control and event?
*
*You can always use a custom event handler DockingManager_LayoutChanged as shown above.
*If you want to use the ICommand and your event is something else than a mouse or keyboard event, you can achieve the binding by following this instead of using a MouseBinding.
A: For such scenarios I always write attached properties.
For example for the Loaded-Event of a Window I use the following attached property:
internal class WindowExtensions
{
public static readonly DependencyProperty WindowLoadedCommandProperty = DependencyProperty.RegisterAttached(
"WindowLoadedCommand", typeof(ICommand), typeof(WindowExtensions), new PropertyMetadata(default(ICommand), OnWindowLoadedCommandChanged));
private static void OnWindowLoadedCommandChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
Window window = d as Window;
if (window == null)
return;
window.Loaded -= WindowOnLoaded; // avoid double subscription if the command is reassigned
if (e.NewValue is ICommand)
{
window.Loaded += WindowOnLoaded;
}
}
private static void WindowOnLoaded(object sender, RoutedEventArgs e)
{
Window window = sender as Window;
if (window == null)
return;
ICommand command = GetWindowLoadedCommand(window);
command.Execute(null);
}
public static void SetWindowLoadedCommand(DependencyObject element, ICommand value)
{
element.SetValue(WindowLoadedCommandProperty, value);
}
public static ICommand GetWindowLoadedCommand(DependencyObject element)
{
return (ICommand) element.GetValue(WindowLoadedCommandProperty);
}
}
In the viewmodel you have a standard command like:
private ICommand loadedCommand;
public ICommand LoadedCommand
{
get { return loadedCommand ?? (loadedCommand = new RelayCommand(Loaded)); }
}
private void Loaded(object obj)
{
// Logic here
}
And on the Window-Element in the XAML you write:
local:WindowExtensions.WindowLoadedCommand="{Binding LoadedCommand}"
local is the namespace where the WindowExtensions is located
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57987291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to query with 2 or more SQL joins My CouchDB database has 3 types of data: A, B, C.
*
*A has a 'b' attribute being an ID to a B, and a name
*B has a 'c' attribute being an ID to a C, and a name
*C has a name
for instance:
{ _id:"a1", type:"A", name:"aaaaa", b:"b1" }
{ _id:"b1", type:"B", name:"bbbbb", c:"c1" }
{ _id:"c1", type:"C", name:"ccccc" }
I would like to get all the As in one view query, retrieving the names of each A's B and of its B's C (and, for instance, I would like to restrict the result to only the As whose C's name is "cc").
How can I achieve this?
(to get only A and B, the answer is:
map: function (doc) {
if (doc.type == "A") {
emit([doc._id, 0])
emit([doc._id, 1], { _id: doc.b })
}
}
but I have no clue to extend to 2nd relationship)
I am also interested in the answer in case we have 'D' and 'E' classes etc. with more nested relationships.
Many thanks!
A: In a generic way, in CouchDB it's only possible to traverse a graph one level deep. If you need more levels, using a specialized graph database might be the better approach.
There are several ways to achieve what you want in CouchDB, but you must model your documents according to the use case.
*
*If your "C" type is mostly static, you can embed the name in the document itself. Whenever you modify a C document, just batch-update all documents referring to this C.
*In many cases it's not even necessary to have a C type document or a reference from B to C. If C is a tags document, for example, you could just store an array of strings in the B document.
*If you need C from A, you can also store a reference to C in A, best accompanied with the name of C cached in A, so you can use the cached value if C has been deleted.
*If there are only a few instances of one of the document types, you can also embed them directly. Depending on the use case, you can embed B in A, you can embed all As in an array inside of B, or you can even put everything into one document.
With CouchDB, it makes most sense to think of the frequency and distribution of document updates, instead of normalizing data.
This way of thinking is quite different from what you do with SQL databases, but in the typical read-mostly scenarios we have on the web, it's a better trade-off than expensive read queries to model documents like independent entities.
When I model a CouchDB document, I always think of it as a passport or a business letter. It's a single entity that holds valid, correct and complete information, but it's not strictly guaranteed that I am still as tall as in the passport, that I look exactly as in the picture, that I haven't changed my name, or that I have a different address than the one stated on the business letter.
If you provide more information on what you actually want to do with some examples, I will happily elaborate further!
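As a concrete sketch of the caching idea in point 3 (the b_name/c_name fields are illustrative additions, not part of the question's documents): if each A caches the names it needs, one map function can key the view by the C name, so a query with ?key="cc" restricts the result accordingly. In CouchDB the map function takes only (doc) and uses a global emit; emit is passed in here so the sketch can run outside the database.

```javascript
// View map function over "A" documents that cache b_name and c_name
function mapA(doc, emit) {
  if (doc.type === "A") {
    emit(doc.c_name, { name: doc.name, b_name: doc.b_name });
  }
}

// Example document shape this assumes:
var a1 = { _id: "a1", type: "A", name: "aaaaa",
           b: "b1", b_name: "bbbbb",
           c: "c1", c_name: "ccccc" };
```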
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48126469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I test setResult() in an Android Espresso test? Is there any good way to test the result code and data in an Android Espresso test? I am using Espresso 2.0.
Suppose I have an Activity called BarActivity.class, which upon performing some action, calls setResult(int resultCode, Intent data) with the appropriate payload.
I'd like to write a test case to verify the resultCode and data. However, because setResult() is a final method, I can't override it.
Some options I thought about were:
*
*Define a new method like setActivityResult() and just use that so it can be intercepted, etc...
*Write a test-only TestActivity that will call startActivityForResult() on BarActivity and check the result in TestActivity.onActivityResult()
Trying to think what's lesser of the two evils, or if there's any other suggestions on how to test for this. Any suggestions? Thanks!
A: If you are willing to upgrade to 2.1, then take a look at Espresso-Intents:
Using the intending API (cousin of Mockito.when), you can provide a response for activities that are launched with startActivityForResult
This basically means it is possible to build and return any result when a specific activity is launched (in your case the BarActivity class).
Check this example here: https://google.github.io/android-testing-support-library/docs/espresso/intents/index.html#intent-stubbing
And also my answer on a somewhat similar issue (but with the contact picker activity), in which I show how to build a result and send it back to the Activity which called startActivityForResult()
A: If meanwhile you switched to the latest Espresso, version 3.0.1, you can simply use an ActivityTestRule and get the Activity result like this:
assertThat(rule.getActivityResult(), hasResultCode(Activity.RESULT_OK));
assertThat(rule.getActivityResult(), hasResultData(IntentMatchers.hasExtraWithKey(PickActivity.EXTRA_PICKED_NUMBER)));
You can find a working example here.
A: this works for me:
@Test
fun testActivityForResult(){
// Build the result to return when the activity is launched.
val resultData = Intent()
resultData.putExtra(KEY_VALUE_TO_RETURN, true)
// Set up result stubbing when an intent sent to <ActivityB> is seen.
intending(hasComponent("com.xxx.xxxty.ActivityB")) //Path of <ActivityB>
.respondWith(
Instrumentation.ActivityResult(
RESULT_OK,
resultData
)
)
// User action that results in "ActivityB" activity being launched.
onView(withId(R.id.view_id))
.perform(click())
// Assert that the data we set up above is shown.
onView(withId(R.id.another_view_id)).check(matches(isDisplayed()))
}
Assuming a validation like below on onActivityResult(requestCode: Int, resultCode: Int, data: Intent?)
if (requestCode == REQUEST_CODE && resultCode == Activity.RESULT_OK) {
data?.getBooleanExtra(KEY_VALUE_TO_RETURN, false)?.let {showView ->
if (showView) {
another_view_id.visibility = View.VISIBLE
}else{
another_view_id.visibility = View.GONE
}
}
}
I followed this guide as a reference: https://developer.android.com/training/testing/espresso/intents, and I also had to check these links at the end of it: https://github.com/android/testing-samples/tree/master/ui/espresso/IntentsBasicSample
and
https://github.com/android/testing-samples/tree/master/ui/espresso/IntentsAdvancedSample
A: If you're using ActivityScenario (or ActivityScenarioRule) as is the current recommendation in the Android Developers documentation (see the Test your app's activities page), the ActivityScenario class offers a getResult() method which you can assert on as follows:
@Test
fun result_code_is_set() {
val activityScenario = ActivityScenario.launch(BarActivity::class.java)
// TODO: execute some view actions which cause setResult() to be called
// TODO: execute a view action which causes the activity to be finished
val activityResult = activityScenario.result
assertEquals(expectedResultCode, activityResult.resultCode)
assertEquals(expectedResultData, activityResult.resultData)
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30083487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Python SpiDev TypeError when trying to receive and then send data I am trying to receive data on my Raspberry Pi from one micro-controller that is sending a byte, and then send that data off to another micro-controller. However, I get the following error:
resp = spi.xfer(list) TypeError: Non-Int/Long value in arguments: b66deb70.
I have tried changing the list value to hex and populating the list with many more values to see if that would help, but no luck. There is not a ton of info online on how to receive data properly through SPI. Does anyone know how to resolve this error by creating the list properly?
import spidev
import time
spi = spidev.SpiDev()
spi.open(0, 0)
spi.open(0, 1)
spi.max_speed_hz = 1
spi.mode = 0
count = 0
list = [0x00, 0x00]
try:
while True:
list[0] = count # update our count variable (single element in a list)
count = spi.readbytes(1) # read data being received from 1st microcontroller
print(count)
resp = spi.xfer(list) # send the data to the 2nd microcontroller
time.sleep(1)
except KeyboardInterrupt:
spi.close()
A: spidev checks for validity of the values in the input list via PyLong_Check, (see here), which sadly doesn't accept certain things that you might hope it would as valid values. Worse, the error message does not really tell you anything useful.
I think your issue is that with count = spi.readbytes(1), count is being set to a list, not an integer. If you change to
count = spi.readbytes(1)[0]
I would expect your code to work!
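To make the failure mode concrete, here is a plain-Python sketch that mimics spidev's per-element check (the real check lives in C inside the library; the `xfer` function here is purely illustrative and needs no hardware):

```python
def xfer(values):
    # spidev rejects any element that is not a plain int
    for v in values:
        if not isinstance(v, int):
            raise TypeError("Non-Int/Long value in arguments: %r" % (v,))
    return list(values)

buf = [0x00, 0x00]

buf[0] = [7]            # what readbytes(1) returns: a one-element list
try:
    xfer(buf)           # raises TypeError, like the real spi.xfer
except TypeError:
    pass

buf[0] = [7][0]         # index into the list to get the byte itself
assert xfer(buf) == [7, 0]
```

With the real library, `count = spi.readbytes(1)[0]` applies the same fix.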
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58648381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: CSS hover only working with anchors in IE10? I have tried a few really basic CSS hovers which work on all browsers except IE10. My question now is what is going on? It only works on anchor tags. Is there any work-around? I tried specifying a background-color but that doesn't work.
I read a lot on Stack Overflow but none of it seems to be related to my problem.
.block-active {
margin: 0px 0px 0.5% 0.5%;
height: auto;
opacity:0.7;
width: auto;
padding: 1.7% 1.8% 1.7% 1.8%;
transition:opacity 0.5s;
font-size: 210%;
position:relative;
}
.block-active:hover {
opacity:1;
}
A: I finally found out what was wrong! The answer is so simple that I didn't believe it would work, but it did.
Simply add this at the beginning of the page (before the <html> tag):
<!DOCTYPE html>
Yep, Internet Explorer...
Otherwise the :hover works only on <a> and <button> tags.
A: Not sure if this answers your question but the code below works for me (IE10, Win7 on Virtual Machine)
Another option is to use Javascript.
<html>
<head>
<style>
.block-active {
background-color: #0f0;
margin: 0px 0px 0.5% 0.5%;
height: 50px;
opacity:0.5;
width: auto;
padding: 1.7% 1.8% 1.7% 1.8%;
transition:opacity 0.5s;
position:relative;
}
.block-active:hover {
opacity:1;
}
</style>
</head>
<body>
<div class="block-active"></div>
</body>
</html>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27493928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Pausing & Resume Android Repo Sync I am trying to sync the following repo
repo init -u git://github.com/SlimRoms/platform_manifest.git -b jb
The problem is I started the repo sync around 30 hours ago & it's still not complete (I have a 1Mbps connection). I don't want to keep the laptop switched on for so long now & would like to pause the current sync & resume later.
So, I searched a bit, and found out that to pause the current download/sync I could use:
*
*ctrl+C
*ctrl+Z
*just close the terminal (it will resume download next time
automatically)
So I tried using ctrl+c, and the download stopped. Then, to resume, I tried "fg", but it doesn't start again.
The error i get is:
bash: fg: current: no such job
Can anybody help me out here? Can I just shut down & continue the sync later using:
repo sync
A: You can't really pause a repo sync, but if you abort it using Ctrl-C and then run it again later, it will effectively pick up where it left off. Although it will start working through the project list from the beginning again, and may still fetch some new data for projects that have already been processed, it should whizz through these projects, because all of the data that it had previously fetched will still be there in the hidden .repo directory.
See this answer for an excellent description of the way that repo init and repo sync work.
Note that you won't immediately see any of the projects that have been fetched, because repo sync doesn't create and populate your working directories until it has finished cloning all of the git repositories in .repo/projects.
A: If you want to repo sync without hanging up, you can use:
nohup repo sync &
and exit the ssh/telnet/terminal session.
For the disk space increasing issue, just do the following periodically:
cd /path/to/repo
rm -f `find . -name '*tmp_pack*'`
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12555835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Convert to date only when needed I tried converting the strings to date, which works, but:
Original      Converted
10/5/1983     1983-10-5
05/27/87      1987-05-27
3/10/1970     2070-03-10
↑↑↑ This is the error. Somehow some of the 19xx years got converted to 20xx.
I have tried: df['BirthYear'] = pd.to_datetime(df['BirthYear'], format = ' %m-%d-%y').
The error is ValueError: time data '10/7/1986' does not match format ' %m-%d-%y' (match).
A: It is probably because you are using hyphens instead of slashes which is what is in your data. Try
df['Birthyear'] = pd.to_datetime(df['Birthyear'], format='%m/%d/%y')
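A related pitfall: the data mixes two- and four-digit years, so no single format string covers every row. A minimal plain-Python sketch (assuming m/d/y ordering throughout; the helper name is made up) that picks the format per row:

```python
from datetime import datetime

def parse_us_date(s):
    # choose %Y vs %y based on how many digits the year has
    month, day, year = s.split("/")
    fmt = "%m/%d/%Y" if len(year) == 4 else "%m/%d/%y"
    return datetime.strptime(s, fmt).date()

print(parse_us_date("3/10/1970"))   # 1970-03-10, not 2070-03-10
print(parse_us_date("05/27/87"))    # 1987-05-27 (%y pivots 69-99 into the 1900s)
```

With pandas you could apply such a helper via Series.apply before handing the column to the rest of the pipeline.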
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73226041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to hide column of data from highcharts I have a CSV file with this content:
Date,Value
2014-01-12,11286,3031
2014-01-13,11994,2115
How can I hide the last column (with values 3031 and 2115) from appearing in my HighChart?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/29605325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Switch views on the same tab in a tab bar WITHOUT using a navigation controller I am looking for a way to switch the current view in a tab container to another, all within the same tab and not using a navigation controller.
I have tried something like this:
FooViewController *fooViewController = [[FooViewController alloc] initWithNibName:@"FooViewController" bundle:nil];
self.view.window.rootViewController.view.window.rootViewController = fooViewController;
[fooViewController release];
And this:
FooViewController *fooViewController = [[FooViewController alloc] initWithNibName:@"FooViewController" bundle:nil];
[self.view removeFromSuperview];
[self.view addSubview:fooViewController.view];
[fooViewController release];
To no avail.
Any ideas?
A: The method I used was to create a subclass of UIViewController that I used as the root view of 3 child view controllers. Notable properties of the root controller were:
*
*viewControllers - an NSArray of view controllers that I switched between
*selectedIndex - index of the selected view controller that was set to 0 on viewLoad. This is nonatomic, so when the setSelectedIndex was called it did all the logic to put that child view controller in place.
*selectedViewController - a readonly property so that other classes could determine what was currently being shown
In the setSelectedIndex method you need to use logic similar to:
[self addChildViewController: selectedViewController];
[[self view] addSubview: [selectedViewController view]];
[[self view] setNeedsDisplay];
This worked really well, but because I wanted to use a single navigation controller for the entire application, I decided to use a different approach.
I forgot to mention you will want to clear child view controllers every time you add one, so that you don't stack up a ton of them and waste memory. Before the block above call:
for (UIViewController *viewController in [self childViewControllers])
[viewController removeFromParentViewController];
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7164819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Uncaught TypeError: Cannot read properties of undefined (reading 'window') I have a problem in my Angular 8 project: everything runs fine when served with ng, but when I run ng build --prod the project compiles without errors; after deploying it to the server it presents this error. What could be causing this?
main-es2016.9481fe84ed9e8ee610e7.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'window')
at main-es2016.9481fe84ed9e8ee610e7.js:1
at main-es2016.9481fe84ed9e8ee610e7.js:1
at Object.<anonymous> (main-es2016.9481fe84ed9e8ee610e7.js:1)
at Object.uki+ (main-es2016.9481fe84ed9e8ee610e7.js:1)
at i (runtime-es2016.f781f2a4125c40732329.js:1)
at Module.zUnb (main-es2016.9481fe84ed9e8ee610e7.js:1)
at i (runtime-es2016.f781f2a4125c40732329.js:1)
at Object.0 (main-es2016.9481fe84ed9e8ee610e7.js:1)
at i (runtime-es2016.f781f2a4125c40732329.js:1)
at t (runtime-es2016.f781f2a4125c40732329.js:1)
A: I don't know how, but I deleted the node_module folder and the package-lock.json file and ran npm install and everything started working again. Thank you all.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69910136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: WCF tracing of ONLY failed requests? I want to save trace information into .svclog files but only for failed requests. Is this possible? If so, how precisely?
I have a WCF service that's called hundreds of times per minute. On rare occasions clients will get an error 500 that occurs outside of the boundaries of my code running inside WCF (usually security issues). I'd like to know exactly why those errors are happening and what's causing them.
I would also really like to use the Trace Viewer tool to examine the .svclog files.
As far as I can tell, I have two options:
1) Instrument FREB (failed request) tracing by logging failed requests via the system.webServer\tracing settings. Unfortunately, I really don't like the interface of the IE trace-viewer, nor do I get enough information from the trace-logs to figure out why an error outside of my code has occurred.
2) turn on the global tracing under system.diagnostics\trace section. This section produces great trace-logs with everything captured that I could ever want. However, I cannot find a way to only capture the information for failed requests. This section captures trace information for ALL requests. My trace logs quickly fill up!
My Errors 500 are intermittent and rare. Ultimately, I want to always have my .svclog tracing ON but only have it kick in when failed requests occur.
Please advise if this is possible.
Thank you!
Edit:
Graham,
I've followed your advice and I'm not seeing the logs I expect. Here are relevant sections from the web.config:
<system.diagnostics>
<trace>
<listeners>
<add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
<filter type="" />
</add>
</listeners>
</trace>
<sources>
<source name="System.ServiceModel" switchValue="Error">
<listeners>
<add name="wcfTracing"
type="System.Diagnostics.XmlWriterTraceListener"
initializeData="Traces1.svclog"/>
<add name="log4netTracing"
type="AzureWatch.Model.Service.Log4netTraceListener,AzureWatch.Model.Service"/>
</listeners>
</source>
<source name="System.ServiceModel.MessageLogging" switchValue="Error">
<listeners>
<add name="wcfTracing"
type="System.Diagnostics.XmlWriterTraceListener"
initializeData="Traces2.svclog"/>
<!--<add name="log4netTracing"
type="AzureWatch.Model.Service.Log4netTraceListener,AzureWatch.Model.Service"/>-->
</listeners>
</source>
</sources>
</system.diagnostics>
<!-- ... -->
<diagnostics wmiProviderEnabled="true">
<messageLogging
logEntireMessage="true"
logMalformedMessages="true"
logMessagesAtServiceLevel="true"
logMessagesAtTransportLevel="true"
maxSizeOfMessageToLog="1000000"
maxMessagesToLog="-1" />
</diagnostics>
Here is the WCF's client error:
<Exception>
<Type>System.Net.Sockets.SocketException</Type>
<Message>An existing connection was forcibly closed by the remote host</Message>
<StackTrace>
<Frame>at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)</Frame>
</StackTrace>
</Exception>
Unfortunately there is NOTHING that's logged by either of the trace-listeners.
Failed Request log contains this:
-GENERAL_READ_ENTITY_END
BytesReceived 0
ErrorCode 2147943395
ErrorCode The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)
Warning
-MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName ManagedPipelineHandler
Notification 128
HttpStatus 400
HttpReason Bad Request
HttpSubStatus 0
ErrorCode 0
ConfigExceptionInfo
Notification EXECUTE_REQUEST_HANDLER
ErrorCode The operation completed successfully. (0x0)
0 msInformational
A: I've tried putting in the following config for my WCF service, and hit the service with valid and invalid credentials. Only the requests with invalid credentials caused anything to appear in the service trace file. My service uses a custom UserNamePasswordValidator class, and this was present in the stack trace. The important parts are the switchValue="Error" and propagateActivity="false" in the <source> element. Not sure if this is exactly what you want, but it at least seems close...
<system.diagnostics>
<sources>
<source name="System.ServiceModel" switchValue="Error"
propagateActivity="false">
<listeners>
<add type="System.Diagnostics.DefaultTraceListener" name="Default">
<filter type="" />
</add>
<add name="ServiceModelTraceListener">
<filter type="" />
</add>
</listeners>
</source>
</sources>
<sharedListeners>
<add initializeData="C:\Path-to-log-file\Web_tracelog.svclog"
type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
name="ServiceModelTraceListener"
traceOutputOptions="DateTime, Timestamp, Callstack">
<filter type="" />
</add>
</sharedListeners>
<trace autoflush="true" />
</system.diagnostics>
A: Alternatively it's possible to specify EventTypeFilter as listener's filter
<listeners>
<add name="console"
type="System.Diagnostics.ConsoleTraceListener" >
<filter type="System.Diagnostics.EventTypeFilter"
initializeData="Error" />
</add>
</listeners>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4222023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to communicate processes of a tree without external mechanisms I need to build a process tree in C using fork() with the following shape:
I have to send signals between them, so I would also like to know if there is any way to store the PIDs of the processes in an array or something else, so each process has the PIDs of the others. The issue is I have some restrictions, like not using pipes, files, or other external mechanisms to share data between processes. Sleep and exec can't be used either.
This is how I have to send signals between them:
A:
So I just have to create them in a certain order so at the moment they are created, the PID of the process they will send a signal is already created?
Right - in particular H4 has to be forked before H3/N3, so that H4 is known to N3. Demo:
#include <signal.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/wait.h>
void handler(int signum, siginfo_t *si, void *u)
{
printf("%d received signal from %d %s\n", getpid(), si->si_pid,
si->si_value.sival_ptr);
}
int main(void)
{
// for demo, defer signal delivery until process unmasks the signal
sigset_t set, oldset;
sigemptyset(&set);
sigaddset(&set, SIGRTMIN);
sigprocmask(SIG_BLOCK, &set, &oldset);
sigaction(SIGRTMIN, &(struct sigaction){ .sa_sigaction = handler,
.sa_flags = SA_SIGINFO }, NULL);
pid_t P = getpid();
pid_t H1 = fork(); if (H1 < 0) perror("H1"), exit(1);
if (H1 == 0)
{
// use sigqueue() instead of kill(), so can pass sender ID
sigqueue(P, SIGRTMIN, (union sigval){.sival_ptr = "H1"});
sigsuspend(&oldset);
exit(0);
}
pid_t H2 = fork(); if (H2 < 0) perror("H2"), exit(1);
if (H2 == 0)
{
pid_t N2 = fork(); if (N2 < 0) perror("N2"), exit(1);
if (N2 == 0)
{
sigqueue(H1, SIGRTMIN, (union sigval){.sival_ptr = "N2"});
sigsuspend(&oldset);
exit(0);
}
sigqueue(N2, SIGRTMIN, (union sigval){.sival_ptr = "H2"});
sigsuspend(&oldset);
exit(0);
}
sigqueue(H2, SIGRTMIN, (union sigval){.sival_ptr = "P"});
pid_t H4 = fork(); if (H4 < 0) perror("H4"), exit(1);
if (H4 == 0)
{
sigqueue(P, SIGRTMIN, (union sigval){.sival_ptr = "H4"});
sigsuspend(&oldset);
exit(0);
}
pid_t H3 = fork(); if (H3 < 0) perror("H3"), exit(1);
if (H3 == 0)
{
pid_t N3 = fork(); if (N3 < 0) perror("N3"), exit(1);
if (N3 == 0)
{
sigqueue(H4, SIGRTMIN, (union sigval){.sival_ptr = "N3"});
sigsuspend(&oldset);
exit(0);
}
sigqueue(N3, SIGRTMIN, (union sigval){.sival_ptr = "H3"});
sigsuspend(&oldset);
exit(0);
}
sigqueue(H3, SIGRTMIN, (union sigval){.sival_ptr = "P"});
sigprocmask(SIG_UNBLOCK, &set, NULL);
do ; while (wait(NULL) > 0 || errno != ECHILD);
}
Example output:
1074 received signal from 1072 P
1072 received signal from 1073 H1
1072 received signal from 1076 H4
1075 received signal from 1074 H2
1073 received signal from 1075 N2
1077 received signal from 1072 P
1076 received signal from 1078 N3
1078 received signal from 1077 H3
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41244286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Sending String From JavaScript to Servlet I know this is a heavily asked question, but no matter how much I google it, I still can't seem to get it to work.
I simply want to send several parameters to a servlet, "/gamesave".
The way I'm currently trying to do this is by clicking a button which calls this saveFunction() and then another button to actually redirect to the servlet.
It's pretty ghetto, but just trying to test this functionality.
Can anyone help me get this working? Do I need to import anything special to use the jquery?
var selection = "Happy";
$.ajax({
url: '/gamesave',
data: { stringParameter: selection },
type: 'POST'
});
A: Try this:
*
*Open your web console.
*Read the message in console tab, the last one.
*If there is a message saying that $ is not defined, then add jQuery to your HTML file:
<script src="http://code.jquery.com/jquery-2.1.0.min.js"></script>
That's downloading jQuery from the internet when you load your page, or you can download that and put it with a local path:
<script src="/jquery-2.1.0.min.js"></script>
or
<script src="some/directory/jquery-2.1.0.min.js"></script>
If you need to understand what to put in src check this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23465061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Rails Where Clause PG::SyntaxError: ERROR: subquery must return only one column I want to take advantage of Rails scoping with a custom SQL query. You can see this post for what I'm trying to do, especially in ActiveAdmin:
How to use a find_by_sql query for an ActiveAdmin index page?
Basically I'm trying to do something like this
class YourModel < ActiveRecord::Base
scope :my_scope, where('some custom SQL')
scope :my_other_scope, where('some other custom SQL')
end
This is my query I tried out first in Rails C
LesleyGrade.where('select distinct STC_TERM_GPA,
TERM,
last,
first
from lesley_grades
order by first, term ASC')
Essentially it's a subquery in a where clause. I only want to return a distinct STC_TERM_GPA per person per row.
These are my attributes in a lesley_grades table, which might help in understand the information I'm trying to retrieve.
id
user_id
lesley_id
last
first
active
site
cohort
section
sections_title
faculty
completed_term_cred
term
sec_start_date
sec_end_date
grade
stc_cred
active_program
most_recent_program
intent_filed
stc_term_gpa
sta_cum_gpa
start_term
prog_status
last_change_date
created_at
updated_at
So I wanted to see if this would work in Rails C first and this is what I got.
LesleyGrade Load (51.1ms) SELECT "lesley_grades".* FROM "lesley_grades" WHERE (select distinct STC_TERM_GPA,
TERM,
last,
first
from lesley_grades
order by first, term ASC)
PG::SyntaxError: ERROR: subquery must return only one column
LINE 1: ...ECT "lesley_grades".* FROM "lesley_grades" WHERE (select di...
^
: SELECT "lesley_grades".* FROM "lesley_grades" WHERE (select distinct STC_TERM_GPA,
TERM,
last,
first
from lesley_grades
order by first, term ASC)
=> #<LesleyGrade::ActiveRecord_Relation:0x3fcb44d60944>
I'm not sure how to fix
PG::SyntaxError: ERROR: subquery must return only one column
A: Not exactly sure what you're trying to achieve, but something like this should at least resolve the Postgres error you're seeing:
LesleyGrade.where('STC_TERM_GPA IN
(SELECT STC_TERM_GPA FROM
(SELECT DISTINCT STC_TERM_GPA, TERM, last, first
FROM lesley_grades
order by first, term ASC) AS results
)'
)
There's a few levels there:
The deepest level says "get me all rows which have a distinct / unique combination of the columns STC_TERM_GPA, TERM, last, and first".
The next layer says "of those results, get rid of the other columns and just give me the STC_TERM_GPA column".
The outer-most layer says "give me all rows where the STC_TERM_GPA value is in the set of STC_TERM_GPA we just selected".
EDIT: it sounds like you don't want a WHERE clause at all. You want something like:
LesleyGrade.select('STC_TERM_GPA, TERM, last, first').group('STC_TERM_GPA').order('first, term ASC')
That's untested. But to select multiple columns, but restrict to distinct values of one column, you'll want to use SQL's GROUP BY clause, which is available in Rails' ActiveRecord through the group method.
BTW, select distinct STC_TERM_GPA, TERM, last, first from lesley_grades order by first, term ASC will give you results which are unique on the COMBINATION of STC_TERM_GPA, TERM, last, first, not on JUST STC_TERM_GPA. I've assumed that while you want all those columns, you only want DISTINCT on the STC_TERM_GPA.
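The IN-subquery shape from the first suggestion can be checked outside Rails with plain SQL; here is a sketch using Python's sqlite3 with a made-up three-row table (column names follow the question, but the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lesley_grades "
             "(stc_term_gpa REAL, term TEXT, last TEXT, first TEXT)")
conn.executemany("INSERT INTO lesley_grades VALUES (?, ?, ?, ?)", [
    (3.5, "FA14", "Smith", "Ann"),
    (3.5, "FA14", "Smith", "Ann"),   # exact duplicate, collapsed by DISTINCT
    (2.9, "FA14", "Jones", "Bo"),
])
rows = conn.execute("""
    SELECT * FROM lesley_grades WHERE stc_term_gpa IN
      (SELECT stc_term_gpa FROM
        (SELECT DISTINCT stc_term_gpa, term, last, first
         FROM lesley_grades) AS results)
""").fetchall()
```

Because the middle layer projects the subquery down to the single stc_term_gpa column, the "subquery must return only one column" error disappears.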
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30065709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: generate data from rails console Let's say I have User(name, password, email, zip code) table and I want to generate User objects with random data.
Is there a command( to use on rails console), which does that? something like User.generate.
A: After adding the Faker gem to your gemfile, add this to your user.rb file.
def self.generate_new
name = Faker::Name.name
password = "foobar"
email = Faker::Internet.email
zip = Faker::Address.zip
User.create!(name: name,
passowrd: password,
email: email,
zip: zip)
end
After restarting your console, the command User.generate_new should run this command and generate a new user with random inputs.
A: Take a look at this gem: https://github.com/stympy/faker
It creates fake data for objects.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31252052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Struggling with 2 methods writing this java class in NetBeans IDE 8.0 - probably quite basic I'm trying to create a class that runs this input file:
package tester.test5;
/**
* @author cf0rd
*/
public class RunAddress {
public static void main(String[] args) {
Person p1 = new Person("Teri", "Politician");
Person p2 = new Person("Matt", "Teacher");
Person p3 = new Person("Ruby", "Electrician");
Person p4 = new Person("Jon", "Archivist");
Address a1 = new Address(56, "BS22 1YY");
Address a2 = new Address(101, "ZA10 9XX");
a1.setNumber(a1.getNumber() + 30);
a1.addPerson(p1);
a1.addPerson(p2);
a1.addPerson(p3);
a2.addPerson(p4);
p1.setJob("Wheel Tapper");
System.out.print("Address: " + a1);
for (Person p : a1.getPeople()) {
System.out.printf(": %s", p);
}
System.out.println("");
System.out.print("Address: " + a2);
for (Person p : a2.getPeople()) {
System.out.printf(": %s", p);
}
System.out.println("");
p3.setName("Maz");
System.out.printf("P3 name is %s and job is %s\n", p3.getName(), p3.getJob());
} //main
} //class
and outputs something like this:
Address: 86, BS22 1YY(3): Teri(Wheel Tapper): Matt(Teacher): Ruby(Electrician)
Key: Address: [House no], [Postcode]([Number of people]): Person 1
The class is based on this UML:
Address
====================
- Number : int
- Postcode : String
- People : int
====================
+ addPerson
+ getPeople
+ toString() : String
This is the class so far, most seems OK but what I'm struggling with are the methods addPerson (which adds a person to an address) and getPeople (which returns the list of people) - The bottom 2 methods.
/**
*
* @author cf0rd
*/
public class Address {
private int Number;
private String Postcode;
private int People;
public int getNumber() {
return Number;
}
public void setNumber(int Number) {
this.Number = Number;
}
public String getPostcode() {
return Postcode;
}
public void setPostcode(String Postcode) {
this.Postcode = Postcode;
}
public String Address;
public String Address(){
return "Address{" + "Number=" + Number + ", Postcode=" + Postcode + ", People=" + People + '}';
}
public Array getPeople();
Array[] Person = {p1, p2, p3, p4}
return Person;
}
public String addPerson(Address);
this.Person = Person;
}
}
Sorry this is so long-winded, as you can probably tell I'm quite new to this and appreciate any help!
Thanks!
Edit: the second class (person) - forgot to post it!
public class Person {
private String Name;
private String Job;
public String getJob() {
return Job;
}
public void setJob(String Job) {
this.Job = Job;
}
public String getName() {
return Name;
}
public void setName(String Name) {
this.Name = Name;
}
public String Person;
public String Person() {
return "Person{" + "Name=" + Name + ", Job=" + Job + '}';
}
}
A: // requires: import java.util.ArrayList;
ArrayList<Person> people = new ArrayList<Person>();
public ArrayList<Person> getPeople(){
return people;
}
public void addPerson(Person p){
people.add(p);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/29103935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamic name variable on django template I'm having a problem with django. I have a dict with all my site texts for translations. For example:
term = {"level_1": "Noob",
"level_2": "Noob 2"}
The problem is, how can I access this key in a Django template?
I have
img src="/images/level_{{player.level.id}}.jpg"
title="{{term.level??????? }}"
I tried:
title="{{term.level{{player.level.id}}}}
but of course this didn't work.
A: Django's template language is (by design) pretty dumb/restricted. In his comment, David Wolever points at Accessing a dict by variable in Django templates?, where an answer suggests making a custom template tag.
I think that in your case, it is best to handle it in your view code. Instead of only passing along a player into your context, pass both the level ID and the level name.
Possibly you can even directly pass the image url and the level name? Not constructing the URL in your template makes it more readable.
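A minimal sketch of that approach (the helper and key names are illustrative; the term dict comes from the question):

```python
term = {"level_1": "Noob", "level_2": "Noob 2"}

def player_context(level_id):
    # build the composed dict key in Python, not in the template
    key = "level_%s" % level_id
    return {
        "level_name": term.get(key, "Unknown"),
        "level_image": "/images/level_%s.jpg" % level_id,
    }

ctx = player_context(2)
```

The template then only needs {{ level_name }} and {{ level_image }}, with no dynamic key lookup at all.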
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13387885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can't apply background image in CSS (Flask template) I want to apply a background image in internal CSS. I'm using a Flask template. This is my directory:
src - templates - image - img.jpg
src- templates - upload.html
I want to apply the background image in upload.html, but I got a 404 error. Can someone help me please?
my code:
background-image: url("/image/img.jpg");
error message:
get /image/img.jpg HTTP/1.1 404
A: I think you are giving the wrong path and that's why you are getting the 404 error. You can use the following code:
background-image: url("image/img.jpg");
A: There could be multiple issues like :
*
*Incorrect directory-URL: Try specifying the whole URL - instead of "/image/img.jpg" try "/templates/image/img.jpg".
*Misspelled directory-name - check it properly.
*Reassure yourself that the specified image is in the correct format (.jpg).
A: The (Flask) standard way of doing this, which you'll see done in examples and tutorials, is to put static files in a static folder at the root of the project. Assuming your CSS is also in static,
background-image: url('/static/img.jpg');
does what you want. But if you're embedding styles in templates,
background-image: url({{ url_for('static', filename='img.jpg') }});
is preferred, though the former will work just fine.
If things get more complicated and you start using blueprints, consult the documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56326136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to run a jQuery function on only one element using a class name, when more elements use the same class I have the following code in my gallery page to show the galleries I have.
<div id="gallery">
<?php
if (!empty($gallery)) {
foreach ($gallery as $row) {
?>
<div class="album_holders">
<div class="image_holder">
<img src="images/gallery/<?php echo $row['img_name']; ?>" width="150" height="75"/>
</div>
<div class="title_holder">
<?php echo $row['title']; ?>
</div>
</div>
</div>
<?php
}
}
?>
</div>
So I need to add a mouseover animation to all album_holders: when the mouse comes over an album, I need to show the album title under it using a jQuery function.
this is my jquery code
$(".image_holder").on("mouseover", function ()
{
$(".title_holder").animate({
"opacity": 1
},1500,$easing2);
});
But the problem is that when I move the mouse over one album, every album shows its name. How can I stop this? I searched the internet and found I must use the .on function, but even though I'm using it, everything is still showing its name.
A: You can get the relative .album_holders using this, and then find the .title_holder within it.
$(".image_holder").on("mouseover", function ()
{
$(this).closest('.album_holders').find('.title_holder').animate({
"opacity": 1
},1500,$easing2);
});
A: you can do this
$(".image_holder").on("mouseover", function ()
{
$(this).next(".title_holder").animate({
"opacity": 1
},1500,$easing2);
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23324544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Displaying an image based on a variable from a PLC (programmable logic controller) I am trying to display an image based on variables from a PLC (programmable logic controller). I want to retrieve the variable from the PLC and then load the images from my computer, because the PLC doesn't have enough memory to store the images.
Here is my HTML code that is put into the PLC:
<script src="http://172.16.0.10:8080//PLCdemo.js" type="text/javascript">
window.onload=function(){
document.getElementById("demo").src = cars[:="variable":];
}
</script>
<img height="800" width="1200" id="demo"></img>
The :="variable": tag is the variable from the PLC
Here is my external javascript file (PLCdemo.js) on my computer with images:
<script type="text/javascript">
var cars = [
"transmission.jpg",
"High-tensile-steel-plates.jpg",
"image_306.jpg"
];
</script>
A: You should declare cars like this:
var cars = {
"1": "transmission.jpg",
"2": "High-tensile-steel-plates.jpg",
"3": "image_306.jpg"
};
I take it that :="variable": is a pre-processor directive that will get replaced with the value of the PLC variable named variable.
Calling cars[:="variable":] will then use the value of variable as a key for the associative array. When variable has the value of 1 then cars[:="variable":] will return transmission.jpg.
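To make the lookup concrete, here is a small stand-alone sketch; the value 2 below is a hypothetical stand-in for whatever :="variable": expands to on the PLC:

```javascript
// cars maps PLC variable values to image file names.
var cars = {
    "1": "transmission.jpg",
    "2": "High-tensile-steel-plates.jpg",
    "3": "image_306.jpg"
};

// Hypothetical substituted value; on the PLC this would come from :="variable":
var variable = 2;

// Numeric keys are coerced to strings, so cars[2] and cars["2"] are the same lookup.
var src = cars[variable];
console.log(src); // "High-tensile-steel-plates.jpg"
```

In the page itself, that src value would then be assigned to document.getElementById("demo").src as in the question.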
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24250175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Running Laravel 5.1 Task Scheduler in Plesk I have uploaded my Laravel 5.1 project to a Plesk server.
I wanted to run the task scheduler in Plesk. I have seen many answers on the internet on how to do so, but nothing seems to be working for me.
My Plesk Task Scheduling Interface
I am running schedule:run command like this
php /var/www/vhosts/websitename.com/httpdocs/artisan schedule:run 1
and in the cron style field I am adding this
* * * * *
so that my cron runs every minute
When I click on the Run Now button I get an error:
$kernel = $app->make(Illuminate\Contracts\Console\Kernel::class);
I searched the internet and found many solutions saying that it's a PHP version issue (it will throw an error if the PHP version is 5.4 or lower), but my current PHP version is 5.6.30.
I am unable to figure out what the exact problem is.
Help is appreciated
Note: I haven't added any code yet in Kernel.php file
A: This way of using command works for me fine
/opt/plesk/php/5.6/bin/php /var/www/vhosts/websitename.com/httpdocs/artisan schedule:run
This is working properly in Plesk
A: Instead of 'php' try to use command '/opt/plesk/php/5.6/bin/php'
A: It is quite an old question, but for Google visitors, here is a solution with Plesk in 2022
A: Try
/opt/plesk/php/7.3.14/bin/php httpdocs/artisan
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44014239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Excel-like behaviour of Grids in Ext JS I'm trying to figure out a way to have Excel-like behavior with the grids in Ext JS.
Here is the sample grid I am working with. So far we can already navigate through the cells with the arrow keys, but only in edit mode.
However, what I am trying to achieve is navigation with the arrow, Tab and Enter keys outside of edit mode, just like Excel.
I tried to integrate this piece of code, which overrides the Editor class, hoping that it would change the behavior of the cells, but it doesn't change a thing.
I believe this is the most important part: it overrides the Editor class and tries to handle the key input:
Ext.override(Ext.Editor, {
startEdit: function (el, value) {
var me = this,
field = me.field;
me.completeEdit();
me.boundEl = Ext.get(el);
value = Ext.isDefined(value) ? value : me.boundEl.dom.innerHTML;
if (!me.rendered) {
me.render(me.parentEl || document.body);
}
if (me.fireEvent('beforestartedit', me, me.boundEl, value) !== false) {
me.startValue = value;
me.show();
field.reset();
if (deleteGridCellValue) {
field.setValue('');
me.editing = true;
me.completeEdit();
deleteGridCellValue = false; // reset global variable
}
else {
if (newGridCellValue == '') {
// default behaviour of Ext.Editor (see source if needed)
field.setValue(value);
}
else {
// custom behaviour to handle an alphanumeric key press from non-edit mode
field.setRawValue(newGridCellValue);
newGridCellValue = ''; // reset global variable
if (field instanceof Ext.form.field.ComboBox) {
// force the combo box's filtered dropdown list to be displayed (some browsers need this)
field.doQueryTask.delay(field.queryDelay);
}
}
me.realign(true);
field.focus(false, 10);
if (field.autoSize) {
field.autoSize();
}
me.editing = true;
}
}
}
});
This is the first time I am working on a project outside of comp-sci classes, so any help would be very much appreciated. Thanks!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38187456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Query SQL to return value I want to write a query to display a value in a MessageBox, but it does not work:
SqlDataReader myReader = null;
SqlCommand myCommand = new SqlCommand("select BillNumber from BillData", cn);
cn.Open();
myReader = myCommand.ExecuteReader();
MessageBox.Show(myReader.ToString());
cn.Close();
A: You would need to do this:
myReader.GetString(0);
However, there is a bit more that needs to be done here. You need to leverage the ADO.NET objects properly (note that the connection must be opened before executing the reader):
var sql = "select BillNumber from BillData";
using (SqlConnection cn = new SqlConnection(cString))
using (SqlCommand cmd = new SqlCommand(sql, cn))
{
    cn.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        rdr.Read();
        MessageBox.Show(rdr.GetString(0));
    }
}
A: SqlDataReader myReader = null;
SqlCommand myCommand = new SqlCommand("select BillNumber from BillData", cn);
cn.Open();
myReader = myCommand.ExecuteReader();
myReader.Read();
MessageBox.Show(myReader["BillNumber"].ToString());
cn.Close();
A: When you just want one return value, you can use ExecuteScalar() like this:
SqlCommand myCommand = new SqlCommand("select BillNumber from BillData", cn);
cn.Open();
string return_value = myCommand.ExecuteScalar().ToString();
MessageBox.Show(return_value);
cn.Close();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20073793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .htaccess: Rewrite dynamic url with static url I want to enter http://localhost:81/admin/dashboard in my browser but the request should be http://localhost:81/admin/index.php?page=dashboard.
The mod_rewrite module is enabled and I tried this in the .htaccess, but it didn't work. The .htaccess is located at htdocs/admin/.htaccess:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]+)/$ index.php?page=$1 [NC]
A: You can match the trailing slash optionally by adding a ? next to it in the pattern :
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]+)/?$ index.php?page=$1 [NC]
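The effect of the optional trailing slash can be checked with any regex engine; here is a quick sketch using Python's re module as a stand-in for Apache's regex support:

```python
import re

# ^([^/]+)/?$ -- one path segment with an optional trailing slash
pattern = re.compile(r'^([^/]+)/?$')

# Both forms match now; the original pattern ^([^/]+)/$ required the slash.
for path in ('dashboard', 'dashboard/'):
    match = pattern.match(path)
    print(path, '->', match.group(1))  # captures 'dashboard' either way
```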
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36190029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Check stored procedure, if it is valid I have a problem in my program and I can't find the error in my code. I think it's because of the stored procedure. Can anybody check my stored procedure to see if it is valid?
ALTER PROCEDURE dbo.Update_FeatureUsers
(
@FeatureID int,
@UserID nvarchar (MAX),
@CreatedUserID int
)
AS
SET NOCOUNT ON
Declare @ParmDefinition nvarchar (400)
Declare @Query nvarchar(MAX)
SET @Query = N''
SET @Query = @Query + N'DELETE FeatureUsers '
SET @Query = @Query + N'FROM FeatureUsers '
SET @Query = @Query + N'WHERE FeatureUsers.FeatureID = @FeatureID '
SET @Query = @Query + N' AND FeatureUsers.UserID NOT IN ('+ @UserID +')'
EXECUTE sp_executesql @Query
SET @Query = N''
SET @Query = @Query + N'INSERT INTO FeatureUsers (FeatureID, UserID, CreatedUserID) '
SET @Query = @Query + N'SELECT @FeatureID, Usager.UserID, @CreatedUserID '
SET @Query = @Query + N'FROM Usager WITH (NOLOCK) '
SET @Query = @Query + N'WHERE Usager.UserID IN ('+ @UserID +') '
SET @Query = @Query + N' AND Usager.UserID NOT IN '
SET @Query = @Query + N' ( '
SET @Query = @Query + N' SELECT FeatureUsers.UserID '
SET @Query = @Query + N' FROM FeatureUsers WITH (NOLOCK) '
SET @Query = @Query + N' WHERE FeatureUsers.UserID IN ('+ @UserID +') '
SET @Query = @Query + N' AND FeatureUsers.FeatureID = @FeatureID '
SET @Query = @Query + N' ) '
SET @ParmDefinition = N'@FeatureID int '
EXECUTE sp_executesql @Query, @ParmDefinition, @FeatureID = @FeatureID
SET NOCOUNT OFF
RETURN
A: Your first dynamic SQL query also wants to access @FeatureID, but you're not passing it.
So move:
SET @ParmDefinition = N'@FeatureID int '
Up to the top of the proc and then call
EXECUTE sp_executesql @Query,@ParmDefinition,@FeatureID = @FeatureID
for both pieces of dynamic SQL.
For the general strategy - it would be far better if you made the stored proc accept a table-valued parameter for @Users and then you wouldn't need to use dynamic SQL at all.
Actually, on second reading, your second query also references @CreatedUserID, so you'll need to pass that across as a parameter to the second query. So you need to change the parameter definition between the two, or just add it to the parameters and pass it (pointlessly) to the first query.
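A sketch of what the combined parameter definition could look like, assuming @CreatedUserID is simply added and both values are passed to each call:

```sql
SET @ParmDefinition = N'@FeatureID int, @CreatedUserID int'
EXECUTE sp_executesql @Query, @ParmDefinition,
    @FeatureID = @FeatureID, @CreatedUserID = @CreatedUserID
```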
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14607599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to start ActiveMQ when tomcat starts? How do I configure my J2EE application so that the ActiveMQ service starts along with the Tomcat server? I am aware of the embedded broker; here I am asking how to start ActiveMQ whenever I start Tomcat.
Current code (works fine):
Now I want to remove the main() method and have the code run when Tomcat runs.
public class JMSService {
public void produceJMS() throws NamingException, JMSException {
ConnectionFactory connFactory = new ActiveMQConnectionFactory(ActiveMQConnection.DEFAULT_BROKER_URL);
Connection conn = connFactory.createConnection();
conn.start();
Session session = conn.createSession(false,Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue("testQueue");
MessageProducer producer = session.createProducer(destination);
producer.setDeliveryMode(DeliveryMode.PERSISTENT);
TextMessage message = session.createTextMessage("Test Message ");
// send the message
producer.send(message);
System.out.println("sent: " + message);
}}
Here is my consumer :
public class JMSReceiver implements MessageListener,ExceptionListener {
public static void main(String args[]) throws Exception {
JMSReceiver re = new JMSReceiver();
re.receiveJMS();
}
public void receiveJMS() throws NamingException, JMSException {
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(ActiveMQConnection.DEFAULT_BROKER_URL);
Connection connection = connectionFactory.createConnection();
connection.start();
Session session = connection.createSession(false,Session.AUTO_ACKNOWLEDGE);
// Getting the queue 'testQueue'
Destination destination = session.createQueue("testQueue");
MessageConsumer consumer = session.createConsumer(destination);
// set an asynchronous message listener
JMSReceiver asyncReceiver = new JMSReceiver();
consumer.setMessageListener(asyncReceiver);
connection.setExceptionListener(asyncReceiver);
}
@Override
public void onMessage(Message message) {
System.out.println("Received message : " +message);
}
}
A: You aren't giving the consumer application any time to actually receive a message, you create it, then you close it. You either need to use a timed receive call to do an sync receive of the message from the Queue or you need to add some sort of wait in the main method such as a CountDownLatch etc to allow the async onMessage call to trigger shutdown once processing of the message is complete.
A: What @Tim Bish said is correct. You either need a timer (say, the receiver should listen for 1 hour) or you need to keep it alive until the program terminates. Either way, you need to start your consumer program once.
Change your receiveJMS method as follows:
public void receiveJMS() throws NamingException, JMSException, InterruptedException {
    ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(ActiveMQConnection.DEFAULT_BROKER_URL);
    Connection connection = connectionFactory.createConnection();
    try {
        connection.start(); // it's the start point
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Getting the queue 'testQueue'
        Destination destination = session.createQueue("testQueue");
        MessageConsumer consumer = session.createConsumer(destination);
        // set an asynchronous message listener; no need to create another object, reuse this one
        consumer.setMessageListener(this);
        connection.setExceptionListener(this);
        // do not call connection.close() here; once closed, the consumer is no longer active
        Thread.sleep(60 * 60 * 1000); // receive messages for 1 hour
    } finally {
        connection.close(); // after 1 hour, close it
    }
}
The above program will listen for up to 1 hour. If you want it to listen for as long as the program runs, remove the finally block. But the recommended way is to close the connection somehow. Since your application seems to be standalone, you can check the Java runtime shutdown hook, where you can specify how to release such resources while the program terminates.
If your consumer is a web application, you can close it in a ServletContextListener.
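A CountDownLatch-based sketch of the alternative @Tim Bish mentioned (no broker involved here; the worker thread stands in for the async onMessage callback, and all names are my own):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        // One count: released when the "message" has been processed.
        final CountDownLatch done = new CountDownLatch(1);

        // Stand-in for the consumer's onMessage callback.
        Thread listener = new Thread(new Runnable() {
            public void run() {
                // ... process the message here ...
                done.countDown(); // signal the main thread
            }
        });
        listener.start();

        // Main thread blocks (with a timeout) instead of exiting immediately.
        boolean received = done.await(10, TimeUnit.SECONDS);
        System.out.println(received ? "message processed" : "timed out");
    }
}
```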
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/32945265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: cassandra - not able to insert decimal numbers using ruby client in a composite column I have a composite column family with the following schema
CREATE TABLE employees (
name varchar,
month int,
date int,
salary decimal,
PRIMARY KEY(name,month,date)
);
This works fine when executed from CQL.
#INSERT INTO employees (name,month,date,salary) VALUES ('joe',1,1,1000);
However, this fails when tried using the Ruby client.
client = Cassandra.new('keyspace_name', '127.0.0.1:9160')
colkey = Cassandra::Composite.new([4].pack('N'), [5].pack('N'), 'salary')
client.insert(:employees, 'nick', {colkey=> [2000].pack("D")}) - This is failing
client.insert(:employees, 'nick', {colkey=> "2000"}) - This works fine
Here is the error:
CassandraThrift::InvalidRequestException: CassandraThrift::InvalidRequestException
from /usr/local/lib/ruby/gems/1.9.1/gems/cassandra-0.16.0/vendor/0.8/gen-rb/cassandra.rb:252:in `recv_batch_mutate'
from /usr/local/lib/ruby/gems/1.9.1/gems/cassandra-0.16.0/vendor/0.8/gen-rb/cassandra.rb:243:in `batch_mutate'
from /usr/local/lib/ruby/gems/1.9.1/gems/thrift_client-0.8.2/lib/thrift_client/abstract_thrift_client.rb:148:in `handled_proxy'
from /usr/local/lib/ruby/gems/1.9.1/gems/thrift_client-0.8.2/lib/thrift_client/abstract_thrift_client.rb:51:in `batch_mutate'
from /usr/local/lib/ruby/gems/1.9.1/gems/cassandra-0.16.0/lib/cassandra/protocol.rb:7:in `_mutate'
from /usr/local/lib/ruby/gems/1.9.1/gems/cassandra-0.16.0/lib/cassandra/cassandra.rb:463:in `insert'
from (irb):41
from /usr/local/lib/ruby/gems/1.9.1/gems/railties-3.2.8/lib/rails/commands/console.rb:47:in `start'
from /usr/local/lib/ruby/gems/1.9.1/gems/railties-3.2.8/lib/rails/commands/console.rb:8:in `start'
from /usr/local/lib/ruby/gems/1.9.1/gems/railties-3.2.8/lib/rails/commands.rb:41:in `<top (required)>'
from script/rails:6:in `require'
from script/rails:6:in `<main>'
What is it that I am doing wrong? Or is it a bug?
regards,
madhu
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12949217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: AWS PHP SDK ContentType not being set When uploading to S3 via the PHP SDK, even after setting the correct ContentType, S3 still shows application/octet-stream.
$file = Aws::putObject([
'Bucket' => 'types-jpeg',
'Key' => 'current.jpg',
'SourceFile' => 'uploads/current.jpg',
'ContentType' => 'image/jpeg',
'ACL' => 'public-read',
]);
The file uploads successfully, except that viewing the file's metadata shows application/octet-stream instead of image/jpeg as its Content-Type.
Any ideas?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53147886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: sas: when category variable has more than three levels, can I just select two levels data to run two sample t-test As the title says, I have a category variable with three levels: A, B and C. I want to select just two levels, such as A and B, since C is not relevant, and run a two-sample t-test.
proc ttest data=ABC plots(shownull)=interval;
class var3 ###please add your code here###;
var var23;
title ' two samples t-test A&B';
run;
A: You can always filter your dataset to include only two levels.
Where var3 ne 'C';
Usually you would use an ANOVA instead when you have 3 levels and then you could do pairwise comparisons but you need to correct for multiple testing. PROC ANOVA incorporates options for this type of analysis.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40733739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Custom highlight UIView I have a custom view extending UIView
@interface AttachmentView : UIView
@property (nonatomic, getter=isHighlighted) BOOL highlighted;
@property (weak, nonatomic) IBOutlet UIImageView *imageFileType;
@property (weak, nonatomic) IBOutlet UILabel *lbName;
@property (weak, nonatomic) IBOutlet UILabel *lbSize;
- (void)initComponent;
@end
I override the method
- (void)setHighlighted:(BOOL)highlighted {
if(highlighted) {
} else {
}
}
But when I touch the view, the setHighlighted method is not called. How do I fix it?
A: UIView is not like UILabel and UIImageView, which have a highlighted state.
You should do it yourself in - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
self.highlighted = !self.highlighted ;
}
Or add UITapGestureRecognizer or UILongPressGestureRecognizer(if you want detect long pressed gesture) to the view.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21035443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Adding input nodes on an intermediate layer I am trying to implement a ConvNet binary classifier on a set of images (2D arrays). I realize that the first convolutional layers are essential for feature extraction. I, however, have additional input parameters which could help in classification. The idea is to append additional nodes to the first fully connected layer so that I may use a feed-forward neural network for the eventual classification. Is this in any way possible with the Keras API? I would also like to know if there is a way of pulling the output of an intermediate layer through the Sequential model architecture. The following code defines the model:
new = Sequential([
Conv2D(8, [3,3], activation='relu', padding ='same'),
MaxPool2D([2,2], 2, padding='valid'),
Conv2D(16, [3,3], activation='relu', padding='same'),
MaxPool2D([2,2], 2, padding='valid'),
Flatten(),
Dense(256),
Dense(64),
Dense(1, activation='sigmoid')
])
A:
I realize that the first convolutional layers are essential for feature extraction. I, however, have additional input parameters which could help in classification. The idea is to append additional nodes to the first fully connected layer so that I may use a feed-forward neural network for the eventual classification. Is this in any way possible with the Keras API?
Yes, it's possible. Please refer to the following sample code, which handles an intermediate input in the same network using the Keras functional API.
from keras.layers import Dense, Input, Conv2D, MaxPooling2D, Flatten, concatenate
from keras.models import Model
# feature extraction from gray scale image
inputs = Input(shape = (28,28,1))
conv1 = Conv2D(16, (3,3), activation = 'relu', padding = "SAME")(inputs)
pool1 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv1)
conv2 = Conv2D(32, (3,3), activation = 'relu', padding = "SAME")(pool1)
pool2 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv2)
flat_1 = Flatten()(pool2)
# feature extraction from RGB image
inputs_2 = Input(shape = (28,28,3))
conv1_2 = Conv2D(16, (3,3), activation = 'relu', padding = "SAME")(inputs_2)
pool1_2 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv1_2)
conv2_2 = Conv2D(32, (3,3), activation = 'relu', padding = "SAME")(pool1_2)
pool2_2 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv2_2)
flat_2 = Flatten()(pool2_2)
# concatenate both feature layers and define output layer after some dense layers
concat = concatenate([flat_1,flat_2])
dense1 = Dense(512, activation = 'relu')(concat)
dense2 = Dense(128, activation = 'relu')(dense1)
dense3 = Dense(32, activation = 'relu')(dense2)
output = Dense(10, activation = 'softmax')(dense3)
# create model with two inputs
model = Model([inputs,inputs_2], output)
I would also like to know if there is a way of pulling the output of an
intermediate layer through the Sequential model architecture.
Yes. For any operation to be carried out on the layers of a Keras model, we first need to access the list of keras.layers objects which a model holds:
model_layers = model.layers
Each Layer object in this list has its own input and output tensors (if you're using the TensorFlow backend):
input_tensor = model.layers[ layer_index ].input
output_tensor = model.layers[ layer_index ].output
Below I am showing how to pull the output of an intermediate layer from a Sequential network:
model = Sequential()
model.add(Conv2D(8, [3,3], input_shape=(28,28,1), activation='relu', padding ='same'))
model.add(MaxPool2D([2,2], 2, padding='valid'))
model.add(Conv2D(16, [3,3], activation='relu', padding='same'))
model.add(MaxPool2D([2,2], 2, padding='valid'))
model.add(Flatten())
model.add(Dense(256))
model.add(Dense(64))
model.add(Dense(1, activation='sigmoid'))
model.summary()
To pull the output of the 4th layer,
output_tensor = model.layers[3].output
print(output_tensor)
Output:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 28, 28, 8) 80
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 8) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 14, 14, 16) 1168
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 16) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 784) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 200960
_________________________________________________________________
dense_2 (Dense) (None, 64) 16448
_________________________________________________________________
dense_3 (Dense) (None, 1) 65
=================================================================
Total params: 218,721
Trainable params: 218,721
Non-trainable params: 0
_________________________________________________________________
Tensor("max_pooling2d_2/MaxPool:0", shape=(None, 7, 7, 16), dtype=float32)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56953211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Ring Buffer on C I just started learning the C language. Now my task is to write a simple ring buffer.
I wrote the code but it doesn't work, and I can't solve the problem; obviously I used the wrong parameters in the push and pop functions. It needs to use the head, the tail and the size of the buffer (I think the problem is in the tail, but I can't figure it out properly).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct ringBuffer
{
void *bufferData;
int head;
int tail;
int size;
int numElements;
};
void bufferInitialization(struct ringBuffer *buffer, int size)
{
buffer->size = size;
buffer->head = 0;
buffer->tail = 0;
buffer->numElements = 0;
buffer->bufferData = (void*)malloc(sizeof(int)*size);
}
void bufferFree(struct ringBuffer *buffer)
{
free(buffer->bufferData);
}
int pushBack(struct ringBuffer *buffer, int *data)
{
/* int i;
i = buffer->head + buffer->tail + 1;
if (i >= buffer->size)
{
i = 0;
}
buffer->bufferData[i] = data;*/
memcpy((void*)buffer->head, data, buffer->size);
buffer->head = buffer->head + buffer->size;
if (buffer->head == buffer->tail)
{
buffer->head = (int)buffer->bufferData; //error?
}
buffer->numElements++;
return 0;
}
int popFront(struct ringBuffer *buffer, void *data)
{
//void * bufferData;
/*bufferData = buffer->bufferData[buffer->head];
buffer->head++;
buffer->tail--;
if (buffer->head == buffer->size)
{
buffer->head = 0;
}
//return bufferData;*/
memcpy(data, (void*)buffer->tail, buffer->size); //error?
buffer->tail = buffer->tail + buffer->size;
if ((void*)buffer->tail == buffer->bufferData)
{
buffer->tail = (int)buffer->bufferData; //error?
}
buffer->numElements--;
return 0;
}
int main()
{
struct ringBuffer buffer;
int size = 5;
//*buffer->size = 6;
bufferInitialization(&buffer, size);
char *data[] = { "1" , "2", "3", "4" , "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
for (int i = 0; i < size; i++)
{
printf("Push: data[%d] = %s\n", i, *data[i]);
pushBack(&buffer, (int*)data[i]);
}
printf("\n");
for (int i = 0; i < size; i++)
{
printf("PushBack: queue[%d] = %s\n", i, (ringBuffer*)popFront(&buffer, (void*)data[i])); // !!!
}
printf("\n");
for (int i = 0; i < size; i++)
{
printf("PopFront: data[%d] = %s\n", i, *data[i]);
pushBack(&buffer, (int*)data[i]);
}
printf("\n");
system("pause");
return 0;
}
Thanks for any help and advices!
A: Well, I decided to use only 4 fields in the buffer structure, and it seems to work well. At the end of the code there is a dataBuffer check.
Now I need to write a printBuffer function to show the values in dataBuffer from HEAD to TAIL, but I noticed a problem: every time I write values into the buffer, the difference between HEAD and TAIL is always 1. As I understand it, when the buffer is empty, size = 8 and there are only 6 values in data[], it should show bufferData[0] = 1 ... bufferData[5] = 6, but in fact it works incorrectly. Could you please explain how to bring the printBuffer function into an acceptable form?
Thanks. Here is my code (it works and there are checks everywhere):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct ringBuffer
{
int *bufferData;
int head;
int tail;
int size;
};
void bufferFree(struct ringBuffer *buffer)
{
free(buffer->bufferData);
}
void bufferInitialization(struct ringBuffer *buffer, int size)
{
buffer->size = size;
buffer->head = 0;
buffer->tail = 0;
buffer->bufferData = (int*)malloc(sizeof(int) * size);
}
void printBuffer(struct ringBuffer *buffer, int i, int size)
{
printf("Values from HEAD to TAIL: ");
if (buffer->head == buffer->tail)
{
printf("Head and tail are equals\n");
}
else
{
printf("bufferData[%d] = %d\n", i, buffer->bufferData[i]);
}
}
int pushBack(struct ringBuffer *buffer, int data)
{
buffer->bufferData[buffer->tail++] = data;
if (buffer->tail == buffer->size)
{
buffer->tail = 0;
}
return 0;
}
int popFront(struct ringBuffer *buffer)
{
if (buffer->head != buffer->tail)
{
buffer->head++;
if (buffer->head == buffer->size)
{
buffer->head = 0;
}
}
return 0;
}
int main(int argc, char* argv[])
{
struct ringBuffer buffer;
int size = 8;
int data[] = { 11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30 }; // 20 values
int dataSize = sizeof(data)/sizeof(data[0]);
/* Test implementation with 1 element in dataBuffer */
bufferInitialization(&buffer, size);
printf("Head : %d - Tail: %d\n", buffer.head, buffer.tail);
pushBack(&buffer, 5);
printf("Head : %d - Tail: %d\n", buffer.head, buffer.tail);
popFront(&buffer);
printf("Head : %d - Tail: %d\n", buffer.head, buffer.tail);
printf("\nnumElements in data = %d : bufferSize = %d\n\n", dataSize, size);
bufferFree(&buffer);
/* Implementation with data[] */
printf("INITIALIZATION\n");
bufferInitialization(&buffer, size);
printf("Head : %d - Tail: %d\n", buffer.head, buffer.tail);
/* pushBack call */
printf("\nPUSHBACK\n\n");
for (int i = 0; i < dataSize; i++)
{
pushBack(&buffer, data[i]);
printf("Head : %d - Tail : %d :: Data = %d (data[%d]) \n", buffer.head, buffer.tail, data[i], i);
/*for (int k = buffer.head; k<=buffer.tail; k++)
{
// Print methode from head to tail
printBuffer((ringBuffer*)buffer.bufferData, i, size);
}*/
popFront(&buffer);
}
popFront(&buffer);
printf("Head : %d - Tail : %d :: (popFront)\n", buffer.head, buffer.tail);
/* bufferData check */
printf("\nbufferData check:\n");
for (int i = 0; i < size; i++)
{
printf("[%d] = %d ", i, buffer.bufferData[i]);
}
printf("\nHead : %d - Tail : %d\n", buffer.head, buffer.tail);
bufferFree(&buffer);
system("pause");
return 0;
}
A: #include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct ringBuffer
{
    char *bufferData;
    char *bufferEnd;
    char *head;
    char *tail;
    int size;
    int used;
    int capacity;
};
void bufferInitialization(struct ringBuffer *buffer, int capacity, int size)
{
    buffer->bufferData = (char*)malloc(capacity * size);
    if (buffer->bufferData == NULL)
    {
        return; // allocation failed
    }
    buffer->bufferEnd = buffer->bufferData + capacity * size;
    buffer->capacity = capacity;
    buffer->used = 0;
    buffer->size = size;
    buffer->head = buffer->bufferData;
    buffer->tail = buffer->bufferData;
}
void bufferFree(struct ringBuffer *buffer)
{
    free(buffer->bufferData);
}
int pushBack(struct ringBuffer *buffer, char *data)
{
    if (buffer->used == buffer->capacity)
    {
        printf("Capacity error\n");
        return -1;
    }
    memcpy(buffer->head, data, buffer->size);
    buffer->head = buffer->head + buffer->size;
    if (buffer->head == buffer->bufferEnd)
    {
        buffer->head = buffer->bufferData;
    }
    buffer->used++;
    return 0;
}
int popFront(struct ringBuffer *buffer, char *data)
{
    if (buffer->used == 0)
    {
        printf("Buffer is clear\n");
        return -1;
    }
    memcpy(data, buffer->tail, buffer->size);
    buffer->tail = buffer->tail + buffer->size;
    if (buffer->tail == buffer->bufferEnd)
    {
        buffer->tail = buffer->bufferData;
    }
    buffer->used--;
    return 0;
}
int main()
{
    struct ringBuffer buffer;
    int size = 6; // element size in bytes
    int capacity = 10;
    bufferInitialization(&buffer, capacity, size);
    char *data[] = { "1" , "2", "3", "4" , "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" };
    char item[6];
    char out[6];
    for (int i = 0; i < size; i++)
    {
        memset(item, 0, sizeof(item));
        strncpy(item, data[i], sizeof(item) - 1); // pad each string to a fixed-size element
        printf("Push: data[%d] = %s\n", i, data[i]);
        pushBack(&buffer, item);
    }
    printf("\n");
    for (int i = 0; i < size; i++)
    {
        popFront(&buffer, out);
        printf("Pop: queue[%d] = %s\n", i, out);
    }
    printf("\n");
    bufferFree(&buffer);
    system("pause");
    return 0;
}
A: This is a sample without memcpy, so you can see the problem with the buffer end while copying. It's not tested but shows how you can go on. It's a char ring buffer, so you have to write chars. There is no check for \0, so your size must include it if you read and write "strings".
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct ringBuffer
{
char *bufferData;
char *head;
char *tail;
char *end;
int size;
int used;
};
void bufferInitialization(struct ringBuffer *buffer, int size)
{
buffer->size = size;
buffer->used = 0;
buffer->head =
buffer->tail =
buffer->bufferData = malloc(size);
buffer->end = buffer->bufferData + size;
}
int pushBack(struct ringBuffer *buffer, char *data, int size)
{
if(size > buffer->size - buffer->used) return -1;
for( ; size>0 && buffer->used < buffer->size; buffer->used++, size--) {
*buffer->head = *data;
buffer->head++;
data++;
if(buffer->head == buffer->end) buffer->head = buffer->bufferData;
}
return 0;
}
int popFront(struct ringBuffer *buffer, char *data, int size)
{
if(size > buffer->used) return -1;
for( ; size>0 && buffer->used > 0; buffer->used--, size--) {
*data = *buffer->tail;
buffer->tail++;
data++;
if(buffer->tail == buffer->end) buffer->tail = buffer->bufferData;
}
return 0;
}
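The same head/tail/used bookkeeping can be sketched in Python (illustrative only, not from the original post) to make the wrap-around condition easy to verify:

```python
class RingBuffer:
    """Fixed-capacity byte ring buffer mirroring the C head/tail/used logic."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0   # next write position
        self.tail = 0   # next read position
        self.used = 0   # bytes currently stored

    def push_back(self, data):
        if len(data) > self.capacity - self.used:
            return -1  # not enough free space
        for b in data:
            self.buf[self.head] = b
            self.head = (self.head + 1) % self.capacity  # wrap at the end
            self.used += 1
        return 0

    def pop_front(self, size):
        if size > self.used:
            return None  # not enough data stored
        out = bytearray()
        for _ in range(size):
            out.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % self.capacity  # wrap at the end
            self.used -= 1
        return bytes(out)
```

The modulo takes the place of the explicit `if (buffer->head == buffer->end) buffer->head = buffer->bufferData;` comparison in the C version.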
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42903600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: No rollback with quarkus, mutiny and reactive postgresql I am trying to get 3 inserts executed within the same transaction, but I am not able to get the transaction rolled back when one of the inserts fails.
I am new in the reactive world and this is my very first reactive application.
Here is a simplification of the database model:
EntityA 1---N EntityB
EntityA 1---N EntityC
I want to execute the following inserts within the same transaction:
INSERT INTO A
INSERT INTO B --(failing query)
INSERT INTO C
But, when the second insert fails, the first insert does not rollback.
I've got the following classes:
*
*Processor: receives a message from kafka and triggers the inserts through a Service
*Service: runs the 3 inserts by using 3 DAOs
*EntityADao: runs the insert of the entity A
*EntityBDao: runs the insert of the entity B
*EntityCDao: runs the insert of the entity C
@ApplicationScoped
public class Processor {
private final Service service;
public Processor(final Service service) {
this.service = service;
}
@Incoming("input-channel")
@Outgoing("output-channel")
public Uni<Message<RequestMessage>> process(final Message<RequestMessage> message) {
final RequestMessage rm = message.getPayload();
return service.saveEntities(rm)
.onFailure()
.recoverWithItem(e -> {
final String errorMessage = "There was an unexpected error while saving entities";
LOG.error(errorMessage, e);
return Result.KO;
})
.flatMap(result -> {
rm.setResult(result);
return Uni.createFrom()
.item(Message.of(rm), message::ack))
});
}
}
@ApplicationScoped
public class WorkerService {
private final EntityADao entityADao;
private final EntityBDao entityBDao;
private final EntityCDao entityCDao;
public WorkerService(final EntityADao entityADao,
final EntityBDao entityBDao,
final EntityCDao entityCDao) {
this.entityADao = entityADao;
this.entityBDao = entityBDao;
this.entityCDao = entityCDao;
}
@Transactional(TxType.REQUIRED)
public Uni<Result> saveEntities(final RequestMessage requestMessage) {
return Uni.createFrom().item(Result.OK)
// Save Entity A
.flatMap(result -> {
LOG.debug("(1) Saving EntityA ...");
return entityADao.save(requestMessage.getEntityAData());
})
// Save Entity B
.flatMap(result -> {
LOG.debug("(2) Saving EntityB ...");
return entityBDao.save(requestMessage.getEntityBData());
})
// Save Entity C
.flatMap(result -> {
LOG.debug("(3) Saving EntityC ...");
return entityCDao.save(requestMessage.getEntityCData());
})
// Return OK
.flatMap(result -> Uni.createFrom().item(Result.OK));
}
}
@ApplicationScoped
public class EntityADao {
private final PgPool client;
public EntityADao(final PgPool client) {
this.client = client;
}
@Transactional(TxType.MANDATORY)
public Uni<Result> save(final EntityAData entityAData) {
return client
.preparedQuery(
"INSERT INTO A(col1, col2, col3) " +
"VALUES ($1, $2, $3)")
.execute(Tuple.of(entityAData.col1(), entityAData.col2(), entityAData.col3()))
.flatMap(pgRowSet -> {
LOG.debug("Inserted EntityA!");
return Result.OK;
});
}
}
EntityBDao and EntityCDao are like EntityADao.
I have already added the following dependencies to pom.xml:
*
*quarkus-smallrye-context-propagation
*quarkus-narayana-jta
Why, when the INSERT B query in EntityBDao fails, does the previously executed query (INSERT A) not roll back? What am I missing? What would I have to change in order to get this working?
A: This paragraph recently added to our Quarkus documentation should help you with this: https://quarkus.io/guides/reactive-sql-clients#transactions .
It specifically explains how to deal with transactions when using the Reactive SQL clients.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62248606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can I run both express server and webpack together I'm new to webpack, and I'm currently creating a Node Express server application with EJS. I want to use webpack to bundle my project.
Is there a way I can bundle up all my project files and make them appear in the dist folder?
How can I make the Express server run with webpack (because when I set Express as the entry point it doesn't run the server)?
Thanks in advance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70512507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to deploy a MeteorJS app to Windows Azure? How does one deploy a production MeteorJS app to Windows Azure?
A: Yes it is. See http://www.meteorpedia.com/read/Deploying_to_a_PaaS
In most cases this is as simple as using "meteor bundle",
demeteorizer, and then uploading the resulting files with your PaaS
provider's CLI deploy tool.
Demeteorizer wraps and extends Meteor’s bundle command by creating
something that more closely resembles a standard looking Node.js
application, complete with a package.json file for dependency
management.
$ cd /my/meteor/app
$ demeteorizer -o /my/node/app
$ cd /my/node/app
$ npm install
$ export MONGO_URL='mongodb://user:password@host:port/databasename?autoReconnect=true&connectTimeout=60000'
$ export PORT=8080
$ forever start main.js
Forever keeps your app running after a disconnect or crash, but not a reboot unless you manually add a boot entry.
The whole deploy is much easier using Meteor Up instead. Or maybe mups, though that doesn't even have updated docs.
To run a Meteor app in an Azure web app:
Azure Web App
Python 2.7
Websockets ON (optional)
WEBSITE_NODE_DEFAULT_VERSION 0.10.32 (default)
ROOT_URL http://webapp.azurewebsites.net
MONGO_URL mongodb://username:password@instance.mongolab.com:36648/dbname (For advanced apps. Request log should say if you need it.)
Dev Machine
Install Visual Studio Community 2015
Install Node 0.12.6
Install Meteor MSI
app> demeteorizer -o ..\app-dem
app-dem\programs\server\packages\webapp.js change .PORT line to "var localPort = process.env.PORT"
app-dem\package.json change "node": "0.10.36" to "node": "0.12.6"
app-dem> npm install
app-dem> git init
app-dem> git add -A .
app-dem> git commit -m "version 1.0 demeteorized Meteor + tweaks"
app-dem> git remote add azure https://username@webapp-slot.scm.azurewebsites.net:443/webapp.git
app-dem> git config http.postBuffer 52428800
app-dem> git push azure master
Instead of demeteorizer -o, perhaps you could use meteor build and create a package.json in the output root:
{
"name": "App name",
"version": "0.0.1",
"main": "main.js",
"scripts": {
"start": "node main.js"
},
"engines": {
"node": "0.12.6"
}
}
If bcrypt doesn't compile, make sure to use a more recent version:
"dependencies": {
"bcrypt": "https://registry.npmjs.org/bcrypt/-/bcrypt-0.8.4.tgz"
}
A: Before starting, make sure you have installed a 32-bit version of Node.js and have run "npm -g install fibers" on your Windows build machine. By default, Node.js on Azure runs 32-bit only!
Note: this will not work if you're using, for example, the spiderable package, which relies on PhantomJS. PhantomJS cannot be executed in a web app on Azure.
*
*In your project "meteor build ..\buildOut" and extract the .tar.gz file located in "..\buildOut".
*Place/create in "..\buildOut\bundle" a "package.json" containing:
{
"name": "AppName",
"version": "0.0.1",
"main": "main.js",
"scripts": {
"start": "node main.js"
},
"engines": {
"node": "0.12.6"
}
}
Note: Make sure "name" doesn't contain spaces, the deploy on azure will fail.
*On your favorite shell, go to "..\buildOut\bundle\programs\server" and run "npm install". This will pre-download all the requirements and build them.
*Now open the file "..\buildOut\bundle\programs\server\packages\webapp.js" and search for "process.env.PORT".
it looks like this:
var localPort = parseInt(process.env.PORT) || 0;
alter this line into:
var localPort = process.env.PORT || 0;
This is needed so your Meteor project can accept a named socket as soon as it runs in Node. The function "parseInt" will not let a string go through, and the named socket is a string located in your web app's environment. This may be done for a reason, so a warning here! Now save this change and we are almost done...
*Solve the bcrypt issue: Download this file and extract it somewhere: https://registry.npmjs.org/bcrypt/-/bcrypt-0.8.4.tgz
Now replace the files located at "..\buildOut\bundle\programs\server\npm\npm-bcrypt\node_modules\bcrypt*"
with the directories and files extracted to ".\bcrypt-0.8.4\package*"
Now go on the shell in the directory "..\buildOut\bundle\programs\server\npm\npm-bcrypt\node_modules\bcrypt\" and make sure you remove the "node_modules" directory. If the node_modules directory is not removed npm will not build the package for some reason.
Run on the shell "npm install".
Make sure you set the "Environment" variables "MONGO_URL" and "ROOT_URL" in the portal for your web app.
If everything worked without an error, you can deploy your app to the git repository on the deployment slot for your web app. Go to "..\buildOut\bundle" and commit the files there to the deployment slot's repository. This will cause the deploy on the deployment slot and create the needed IIS configuration file(s).
Now wait a little and your app should fire up after some time... Your app should be running and you can access it on *.azurewebsites.net
Thanks to all that made this possible.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14266386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: iPad modal view's rect within the main window I am presenting a modal view (with UIModalPresentationFormSheet) on top of a UISplitViewController. I want to get the exact rect that the modal view will take relative to the UISplitViewController (which is basically the whole window). i.e. the modal view is at x,y coordinate and the size.
How would I find this? I looked at UIView's "convertRect:fromView:" method, but couldn't figure out what combination would work.
Thanks.
A: [splitView convertRect:modallyPresentedVC.view.bounds fromView:modallyPresentedVC.view] should do the trick. Make sure to call it in the completion block of the presentation (after all animation has finished).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18668803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Primefaces update destroys Umlauts edit: tl;dr: When saving umlauts, they get corrupted (ä turns into Ã¤). The rest of my question didn't really have anything to do with the problem, as I now realize.
.
I'm building a webapp via JBoss, Hibernate, Infinispan Cache, derby, Maven and Primefaces.
I display a page that fetches data from a database, which has correct data in it (with umlauts). It is displayed correctly in a
<p:dataTable id="dt1" var="as" value="#{aSBean.elementList}" ...>
There is a dialog popping up when one selects an entry from the table. The main part of the dialog code is
<p:dialog header="AS Detail" widgetVar="asDialog" resizable="false" id="asDlg"
showEffect="fade" hideEffect="fade" modal="true" styleClass="detailDialog" >
<h:panelGrid id="display" >
<h:outputText value="Bemerkung" />
<h:inputText value="#{aSBean.selectedElement.bemerkungTxt}" />
<h:outputText value="Bearbeiter" />
<h:outputText value="#{fehlerBean.selectedElement.bearbeiterNr}" />
</h:panelGrid>
<h:panelGrid id="diaBtnDisplay">
<p:commandButton value="Speichern" update=":form1:dt1" id="save" validateClient="true" actionListener="#{aSBean.save}"/>
<p:commandButton value="Abbrechen" id="cancel">
<f:ajax event="click" onevent="asDlg.hide()" />
</p:commandButton>
</h:panelGrid>
</p:dialog>
Now, there isn't even an update attribute in the Abbrechen-CommandButton but still the dataTable gets updated when I press this button. It does not if I leave the dialog via the X in the upper right corner.
But the moment I press Abbrechen, the dataTable gets updated and my ä turns into Ã¤. But it will only do so for the selected element. Here is some piece of my backing bean code:
public Arbeitsschluessel selectedElement = new Arbeitsschluessel();
public Arbeitsschluessel newElement = new Arbeitsschluessel();
public Arbeitsschluessel getSelectedElement() {
return selectedElement;
}
public void setSelectedElement(Arbeitsschluessel selectedValue) {
if (selectedValue != null) {
this.selectedElement = selectedValue;
}
}
public List<Arbeitsschluessel> getElementList() {
return elementList;
}
so definitely nothing special. My HTML page starts with <?xml version="1.0" encoding="UTF-8"?> and I also had the following included: <meta http-equiv="content-type" content="text/html; charset=utf-8" />
I debugged the update process after pressing the Abbrechen button and for my n-th element, the content of the as var was wrong. The callstack looks the same every time, so I cannot say at what exact point the value gets corrupted.
If I reload the datatable via a button (dao.findAll from database), everything is again displayed correctly, except ofc when I saved a wrong value into the database. So it is not that the database values are corrupted. Any help appreciated!
Edit: Code to opening the dialog:
<p:commandButton id="selectButton" update=":form1:display" oncomplete="PF('asDialog').show()" icon="" title="View">
<f:setPropertyActionListener value="#{as}" target="#{aSBean.selectedElement}" />
</p:commandButton>
A: I found the answer. One has to use a CharacterEncodingFilter
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
public class CharacterEncodingFilter implements Filter {
@Override
public void init(FilterConfig filterConfig) throws ServletException { }
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
throws IOException, ServletException {
servletRequest.setCharacterEncoding("UTF-8");
servletResponse.setContentType("text/html; charset=UTF-8");
filterChain.doFilter(servletRequest, servletResponse);
}
@Override
public void destroy() { }
}
and then add the following lines to the web.xml in the WEB-INF folder:
<filter>
<filter-name>CharacterEncodingFilter</filter-name>
<filter-class>your.package.CharacterEncodingFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>CharacterEncodingFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
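The corruption pattern described in the question (ä becoming Ã¤) is the classic symptom of UTF-8 bytes being re-decoded as ISO-8859-1 (Latin-1) somewhere in the request chain. It can be demonstrated outside JSF entirely (Python used here purely for illustration):

```python
# "ä" encoded as UTF-8 is the two bytes 0xC3 0xA4.
utf8_bytes = "ä".encode("utf-8")

# A component that wrongly assumes Latin-1 turns those two bytes
# into two separate characters: Ã (0xC3) and ¤ (0xA4).
corrupted = utf8_bytes.decode("latin-1")
print(corrupted)  # Ã¤
```

The filter above prevents exactly this by forcing the request and response encoding to UTF-8 before any parameter is read.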
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20687593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Converting an iPhone app to an iPad app in XCode 4 I had created an iPhone app long time back. Now I want to kind of convert the same into an iPad app (the code would more or less remain the same, I only want to redesign the xib for iPad size).
Now I am using XCode 4 and after opening the app, I changed the devices to iPad. But my .xib are still showing iPhone size.
It created an iPad folder (just like Classes/Resources/Products...).
Also to add, the app now opens an iPad simulator, but let's say I have a UIWebView which just stretches to iPhone size (as in IB) and not the complete iPad size...
How and where do I redesign the xib for iPad ? What are the updates to be made when we just change the devices from iPhone to iPad ? Also I guess an iPhone app would work on iPad (using 2x), but the reverse is not true.
A: You should look into creating a Universal app which basically has shared code and libraries for the iPhone and iPad but different view layers (views or XIBs).
In that model you have different interfaces for both which you should. The paradigms are different - in iPhone you have small real estate so you have navigators that drill in and pop out. In iPad, you have more real estate so you have master/detail splitter views with pop-over controls. As you pointed out, even the web views are different sizes at a minimum.
If you start fresh you can create a universal app and get a feel for how it's laid out. File, new project, iOS, pick app type, next. For device family select Universal.
If you are converting, there's some resources out there. Here's some:
http://useyourloaf.com/blog/2010/4/7/converting-to-a-universal-app-part-i.html
How to convert iPhone app to universal in xcode4, if I previously converted, but deleted MainWindow-iPad?
Convert simple iPhone app to Universal app
A: I find it easiest to open the xibs in the separate Interface Builder that came with previous (3.2?) Xcode and use its "convert to iPad using autoresize masks" option. Then I include the new separate XIBs in the project and do conditional loading (e.g. use the UI_USER_INTERFACE_IDIOM() macro to load one xib or another).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8321054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Caesar encryption I'm trying to write a function that, given an original message and an offset, computes and returns the encrypted message:
def cesar_encryption (message, offset = 1):
encrypted_message = ""
for i in range(len(message)):
char = message[i]
if (char.islower()):
encrypted_message += chr((ord(char) + offset)
return encrypted_message
print (cesar_encryption("I LOVE NATURE", 1))
I have a problem in my code. How can I solve it?
A: def cesar_encryption (message, offset = 1):
encrypted_message = ""
for char in message:
encrypted_message += chr(ord(char) + offset)
return encrypted_message
print (cesar_encryption("I LOVE NATURE", 1)) # J!MPWF!OBUVSF
Just remove the .islower() check (and the extra opening parenthesis), because you don't need it.
More about this can be learned at - https://www.geeksforgeeks.org/caesar-cipher-in-cryptography/
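Note that the answer above shifts every character, including spaces (hence the ! characters in the output). For a classical Caesar cipher that rotates only letters and wraps around the alphabet, a sketch (not part of the original answer) could look like this:

```python
def caesar_encrypt(message, offset=1):
    """Rotate letters by `offset`, leaving other characters untouched."""
    result = []
    for char in message:
        if char.isalpha():
            base = ord("a") if char.islower() else ord("A")
            # wrap within the 26-letter alphabet
            result.append(chr((ord(char) - base + offset) % 26 + base))
        else:
            result.append(char)
    return "".join(result)

print(caesar_encrypt("I LOVE NATURE", 1))  # J MPWF OBUVSF
```

The modulo-26 arithmetic also handles wrap-around (z with offset 1 becomes a), which the plain `chr(ord(char) + offset)` version does not.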
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69764525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: PHP Thread compilation not working I am attempting to use pthreads with Apache FPM.
Step 1.
After installing and recompiling php according to:
https://blog.programster.org/ubuntu16-04-compile-php-7-2-with-pthreads
The server works as expected and I can run pthreads from CLI.
Step 2.
Then I need to run threads from a web server so I followed the instructions from:
https://antrecu.com/blog/run-php7-fpm-apache-mpmevent-ubuntu-1604
After sudo service apache2 restart && sudo service php7.0-fpm restart:
Job for apache2.service failed because the control process exited with error code.
See "systemctl status apache2.service" and "journalctl -xe" for details.
$ systemctl status apache2.service
apache2.service - LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Mon 2018-03-12 17:09:45 PDT; 3min 35s ago
Docs: man:systemd-sysv-generator(8)
Process: 30818 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
Process: 32443 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)
Mar 12 17:09:45 ubuntu apache2[32443]: * The apache2 configtest failed.
Mar 12 17:09:45 ubuntu apache2[32443]: Output of config test was:
Mar 12 17:09:45 ubuntu apache2[32443]: [Mon Mar 12 17:09:45.084452 2018] [:crit] [pid 32454:tid 139629110323072]
Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You n.. (output cut off in SSH client)
Mar 12 17:09:45 ubuntu apache2[32443]: AH00013: Pre-configuration failed
Mar 12 17:09:45 ubuntu apache2[32443]: Action 'configtest' failed.
Mar 12 17:09:45 ubuntu apache2[32443]: The Apache error log may have more information.
Mar 12 17:09:45 ubuntu systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 12 17:09:45 ubuntu systemd[1]: Failed to start LSB: Apache2 web server.
Mar 12 17:09:45 ubuntu systemd[1]: apache2.service: Unit entered failed state.
Needless to say I am a newbie when it comes to compiling Linux packages.
Any suggestions?
A:
I am attempting to use pthreads with Apache FPM.
You can't. Find a way to work without them.
The pthreads extension cannot be used in a web server environment. Threading in PHP is therefore restricted to CLI-based applications only.
-- http://php.net/manual/en/intro.pthreads.php
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49246475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: WOW.js does not work on page reload I am using WOW.js and skrollr to reveal some animations on my website.
I have a navbar which is located 370px from the top of the viewport initially; while the page is being scrolled down, when the bottom of the navbar is 70px from the top of the viewport it stays in a fixed position:
<div class="col-xs-12 navbar nopad wow slideInLeft animated" data-370-top="opacity:0.65;position:relative;" data-70-top-bottom="opacity:1;position:fixed;top:0px">
I also use the slideInLeft class of WOW.js to slide the navbar in.
The problem is that when the scrollbar is 370px from the top of the screen and the page is reloaded, the WOW animation does not occur.
Can anyone help me? What am I doing wrong?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34815196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: IE renders inline-block div differently than FF/Chrome First, I'm using IE9, FF 29.0.1 and Chrome 35.0.1916.153 m. I have done quite a bit of searching, tried spans instead of divs, float instead of inline-block, different DOCTYPEs, adding a z-order, making sure each div had a unique ID, etc., etc. The problem becomes clear when you hit F12 in IE to look under the covers, IE simply renders the HTML/CSS differently than FF or Chrome. Here's the page. I have reduced this to its lowest common denominator.
http://iamix.net/p/ie-problem.html
The problem (as you will see) is the button text appears below the buttons on FF/Chrome, as it should, but on top of the buttons in IE9. When you look at how IE has rendered the page, it takes the first button_block div and makes it some kind of quasi-parent of the other three. FF (using Firebug) shows that it renders the HTML/CSS as expected, with each of the four button_block divs siblings.
Here's the IE & FF rendering:
http://iamix.net/p/rendering.html (guess I need a rep to directly post images... ooo, guess I need a rep to post more than two links, so I've put both images here)
The overall goal here is to have a layout that adjusts well to different screen sizes, including smartphones. The original has some media queries in it to adjust for screen size but I have removed everything I could with the problem still being present. The reason for loading the images in CSS is because different image sizes are used based on the screen size (which you would see if the media queries were still present). The full-blown HTML/CSS works well in FF/Chrome on the PC and on Android phones (which is the main target). IE is the stickler (as usual). Even as I've been typing this up I've tried about eight other things because I really don't want it to be something silly that I've overlooked.
Here's the basic code I'm working from where the problem still exists (from the first link above):
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
.button_bar {
padding-bottom: 0.5em;
text-align:center;
}
.button_bar button {
border: none;
height: 250px;
width: 270px;
}
#first_button {
background: url(http://iamix.net/p/first-button.png) 0px/250px no-repeat;
background-position: center center;
}
#second_button {
background: url(http://iamix.net/p/second-button.png) 0px/250px no-repeat;
background-position: center center;
}
#third_button {
background: url(http://iamix.net/p/third-button.png) 0px/250px no-repeat;
background-position: center center;
}
#fourth_button {
background: url(http://iamix.net/p/fourth-button.png) 0px/250px no-repeat;
background-position: center center;
}
.button_block {
display: inline-block;
padding: 0.5em 0 1em 0;
vertical-align: top;
}
.button_text {
background: #FFFF80;
color: black;
display: inline-block;
font-weight: bolder;
text-align: center;
width: 240px;
word-wrap: break-all;
}
</style>
</head>
<body>
<div>
<div class="button_bar">
<div class="button_block">
<div>
<button id="first_button" type="button" />
</div>
<div>
<span class="button_text">This is the First Button</span>
</div>
</div>
<div class="button_block">
<div>
<button id="second_button" type="button" />
</div>
<div>
<span class="button_text">And this is the Second Button</span>
</div>
</div>
<div class="button_block">
<div>
<button id="third_button" type="button" />
</div>
<div>
<span class="button_text">Which would make this the Third Button</span>
</div>
</div>
<div class="button_block">
<div>
<button id="fourth_button" type="button" />
</div>
<div>
<span class="button_text">And this the Fourth and Final Button</span>
</div>
</div>
</div>
</div>
</body>
</html>
A: In HTML, button is not a "self-closing" tag, therefore, IE is actually doing it correctly. Just appending the /> to a tag does not automatically close it, as it does in XML. You need to do this:
<button id="fourth_button" type="button"><span>Button Text</span></button>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24210773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I ignore files in mercurial which has already been tracked? So what I really wish to do is to ignore some files that have already been added committed and pushed in the repository in mercurial .I have a project with the following directory structure.
Project X
Project X a
bin
res
src
.hg
.hgignore
Now all the files in the Project X has been tracked by mercurial . Now I want to ignore /bin and /res folder
Here is my glob syntax to ignore these directories in .hgignore file.
syntax: glob
/bin
/gen
Also i executed the following command to tell mercurial to forget the previously tracked unwanted files
hg forget /bin
hg forget /res
However, Mercurial is still tracking both of these folders. I am sort of lost here. Any suggestions would be appreciated.
A: You are right: the easiest way is to tell Mercurial to forget the files (by using hg forget).
However Mercurial is not tracking directories, only files. You cannot add a directory and thus cannot forget it either. You probably have files under bin and res that have been added to the list of tracked files: those are the ones you need to forget.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24473472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can anyone help to print hello world? Using basic instructions only. Here is what I have:
.data
.asciiz "c"
.asciiz "hello world\n"
.globl main
.text
main:
lui $a0, 0x1002 # set $a0 to start of string
addi $v0, $0, 4 # set command to print
syscall
A: Use a label (e.g, mylabel:) to let the assembler know the address of the string you want to print, and then reference it with la pseudoinstruction:
.data
.asciiz "c"
mylabel:
.asciiz "hello world\n"
.globl main
.text
main:
la $a0, mylabel
addi $v0, $0, 4 # set command to print
syscall
Otherwise you need to know the address where your string is located.
If you want to know how the assembler translates la, you can look at the generated code and translate it yourself (it should be a lui followed by an ori). The MARS simulator lets you see how la is translated.
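The lui/ori split works because la loads the upper 16 bits of the 32-bit address with lui and then ORs in the lower 16 bits. The arithmetic can be checked quickly (Python used only for the calculation; 0x10010000 is the typical MARS/SPIM .data segment start, assumed here for illustration):

```python
addr = 0x10010000  # typical MARS/SPIM .data segment start (assumed)
upper = (addr >> 16) & 0xFFFF   # operand for lui
lower = addr & 0xFFFF           # operand for ori
# recombining the halves must give the original address back
assert (upper << 16) | lower == addr
print(hex(upper), hex(lower))  # 0x1001 0x0
```

This is why the question's `lui $a0, 0x1002` only sets the upper half of the address: without the lower half (or a label for the assembler to resolve), $a0 may not point at the intended string.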
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30378421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: ViewStub with custom View attributes for target layout Is it possible to pass custom attributes through a ViewStub to the target layout's root element? Like so:
<ViewStub
android:layout="@layout/custom_view"
app:customAttr="12345"
/>
Where custom_view.xml is:
<blah.CustomView ...>
...
</blah.CustomView>
When I try to do that, CustomView.java does not get "app:customAttr" in the AttributeSet.
When I use CustomView directly, without ViewStub
<blah.Custom app:customAttr="12345"/>
The attribute gets into the AttributeSet ok.
But it's not lazy anymore.
Any solutions?
Thank you,
Yuri.
A: Like <include> the only attributes that ViewStub lets you override are the layout attributes and which id the child view will have after inflation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3109843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Fade In With data on AngularJS I want to show my post->id with a fade-in. I tried to search but I can't find it. Please help me!!!
*Forgive me if my English is bad.
A: In AngularJS, to start working with basic animation you can use CSS transitions together with ng-class.
Basic fade-out example:
HTML:
<div ng-controller="myCtrl">
<button ng-click="hideStuff()">Click me!</button>
<div class="default" ng-hide="hidden" ng-class="{fade:
startFade}">This will get hidden!</div>
</div>
CSS:
.default{
opacity: 1;
}
.fade{
-webkit-transition: opacity 2s; /* For Safari 3.1 to 6.0 */
transition: opacity 2s;
opacity: 0;
}
Angular:
var myApp = angular.module('myApp',[]);
myApp.controller("myCtrl", function($scope, $timeout){
$scope.hideStuff = function () {
$scope.startFade = true;
$timeout(function(){
$scope.hidden = true;
}, 2000);
};
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47324797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there a way to combine two variables? I have been trying to get my code to dynamically allocate class objects to file to later read but having trouble with getting user input to save into each different object.
I'm trying to have the user input their names, ages and phone numbers and have them saved to a file where they can be read later, hopefully using the same method to run through the file.
I tried using arrays, but they can't save all three fields of the object. Is there a dynamic container that can be used?
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cassert>
using namespace std;
string mName, mID, mPhoneNumber;
int id = 0;
class Student
{
public:
string mName;
string mId;
string mPhoneNumber;
Student(string id = "", string name = "", string phone = "") : mId(id), mName(name), mPhoneNumber(phone)
{}
bool operator==(const Student& obj)
{
return (mId == obj.mId) && (mName == obj.mName) && (mPhoneNumber == obj.mPhoneNumber);
}
/*
* Write the member variables to stream objects
*/
friend ostream& operator << (ostream& out, const Student& obj)
{
out << obj.mId << "\n" << obj.mName << "\n" << obj.mPhoneNumber << endl;
return out;
}
/*
* Read data from stream object and fill it in member variables
*/
friend istream& operator >> (istream& in, Student& obj)
{
in >> obj.mId;
in >> obj.mName;
in >> obj.mPhoneNumber;
return in;
}
};
int main()
{
cin >> id;
Student stud1("1", "Jack", "4445554455");
Student stud2("4", "Riti", "4445511111");
Student stud3("6", "Aadi", "4040404011");
// open the File
ofstream out("students.txt");
// Write objects to file (targets to cout)
out << stud1;
out << stud2;
out << stud3;
out.close();
// Open the File
ifstream in("students.txt");
Student student1;
Student student2;
Student student3;
// Read objects from file and fill in data
in >> student1;
in >> student2;
in >> student3;
in.close();
// Compare the Objects
assert(stud1 == student1);
assert(stud2 == student2);
assert(stud3 == student3);
cout << stud1 << endl;
cout << stud2 << endl;
cout << stud3 << endl;
return 0;
}
A: You can make use of std::vector in the following manner:
std::vector<Student> my_students;
for (std::size_t i = 0; i < 3; i++) {
Student tmp;
in >> tmp;
my_students.push_back(tmp);
}
A: std::vector<Student> aVectOfStudents;
aVectOfStudents.emplace_back("","Jack", "4445554455");
aVectOfStudents.emplace_back("","Riti", "4445511111");
aVectOfStudents.emplace_back("","Aadi", "4040404011");
ofstream out("students.txt");
for(auto studIter = aVectOfStudents.begin(); studIter != aVectOfStudents.end(); ++studIter)
{
std::cout << "Insert Id for student: " << studIter->mName << "\n";
std::cin >> studIter->mId;
out<<*studIter;
}
out.close();
A: You could use a std::vector to store the Students and iterate through it for the file output/input.
#include <vector>
int main()
{
// open the file for reading and writing; trunc creates/empties it
// (the default in|out mode fails if the file does not exist)
std::fstream file{ "students.txt", std::ios::in | std::ios::out | std::ios::trunc };
// vector of students
std::vector<Student> students{
{"1", "Jack", "4445554455"},
{ "4", "Riti", "4445511111"},
{"6", "Aadi", "4040404011"}
};
// iterate through the objects (i.e. students) and write them to the file
for(const auto& student: students)
file << student;
// reset the stream to the file begin
file.clear();
file.seekg(0, std::ios::beg);
// clear the vector and resize to the number of objects in the file
students.clear();
students.resize(3);
// read objects from file and fill in vector
for (Student& student : students)
file >> student;
file.close();
return 0;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61559974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Firebase: Transaction with async/await I'm trying to use async/await with transaction.
But getting error "Argument "updateFunction" is not a valid function."
var docRef = admin.firestore().collection("docs").doc(docId);
let transaction = admin.firestore().runTransaction();
let doc = await transaction.get(docRef);
if (!doc.exists) {throw ("doc not found");}
var newLikes = doc.data().likes + 1;
await transaction.update(docRef, { likes: newLikes });
A: If you look at the docs, you'll see that the function passed to runTransaction is a function returning a promise (the result of transaction.get().then()). Since an async function is just a function returning a promise, you might as well write db.runTransaction(async transaction => {})
You only need to return something from this function if you want to pass data out of the transaction. For example if you only perform updates you won't return anything. Also note that the update function returns the transaction itself so you can chain them:
try {
await db.runTransaction(async transaction => {
transaction
.update(
db.collection("col1").doc(id1),
dataFor1
)
.update(
db.collection("col2").doc(id2),
dataFor2
);
});
} catch (err) {
throw new Error(`Failed transaction: ${err.message}`);
}
A: IMPORTANT: As noted by a couple of the users, this solution doesn't use the transaction properly. It just gets the doc using a transaction, but the update runs outside of it.
Check alsky's answer. https://stackoverflow.com/a/52452831/683157
Take a look at the documentation: runTransaction must receive the updateFunction function as a parameter. (https://firebase.google.com/docs/reference/js/firebase.firestore.Firestore#runTransaction)
Try this
var docRef = admin.firestore().collection("docs").doc(docId);
let doc = await admin.firestore().runTransaction(t => t.get(docRef));
if (!doc.exists) {throw ("doc not found");}
var newLikes = doc.data().likes + 1;
await doc.ref.update({ likes: newLikes });
A: The above did not work for me and resulted in this error: "[Error: Every document read in a transaction must also be written.]".
The below code makes use of async/await and works fine.
try{
await db.runTransaction(async transaction => {
const doc = await transaction.get(ref);
if(!doc.exists){
throw "Document does not exist";
}
const newCount = doc.data().count + 1;
transaction.update(ref, {
count: newCount,
});
})
} catch(e){
console.log('transaction failed', e);
}
A: In my case, the only way I could get to run my transaction was:
const firestore = admin.firestore();
const txRes = await firestore.runTransaction(async (tx) => {
const docRef = await tx.get( firestore.collection('posts').doc( context.params.postId ) );
if(!docRef.exists) {
throw new Error('Error - onWrite: docRef does not exist');
}
const totalComments = docRef.data().comments + 1;
return tx.update(docRef.ref, { comments: totalComments }, {});
});
I needed to pass my 'collection().doc()' to tx.get directly, and when calling tx.update I needed to use 'docRef.ref'; without '.ref' it was not working...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49644614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Consider using varargs for methods or constructors I have a function :
public static String bytesToHex(byte[] bytes) {
...
}
And a tool about code style advise :
Consider using varargs for methods or constructors which take an array as the last parameter
How can I edit it?
A:
The docs say: "As an API designer, you should use them sparingly, only when
the benefit is truly compelling."
A vararg is written with three dots (...), which is just not going to look good with byte, at least IMHO. I suggest you stick with byte[]: in most cases you will already have a byte[] rather than individual byte elements, so you won't gain anything from varargs in this particular case.
public static String bytesToHex(byte... bytes) {
}
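For completeness, here is a small sketch of how the varargs signature behaves for callers. The hex-conversion body is illustrative (not from the question); the point is that byte... accepts both individual bytes and an existing array:

```java
public class HexDemo {
    // byte... lets callers pass individual bytes or an existing byte[]
    public static String bytesToHex(byte... bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes)
            sb.append(String.format("%02x", b & 0xff)); // mask keeps negative bytes two digits
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(bytesToHex((byte) 0x0f, (byte) 0xa0)); // individual bytes: 0fa0
        System.out.println(bytesToHex(new byte[] { 1, 2 }));      // an array still works: 0102
    }
}
```

So switching to varargs is source-compatible for existing byte[] callers; whether the extra calling convention is worth it is the judgment call the docs describe.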
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31700140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Apache Benchmark - Randomized querystrings? I need to benchmark a site, and was thinking of using ab (Apache Benchmark) to do it.
We need to hammer it quite hard, and we're interested more in how our app will cope, as opposed to the network bandwidth, hence we're doing it from localhost.
The other thing is, we need to pass in a random list of different query strings:
i.e. http://search.site.com/?q=search_term
Is there any way to pass this in to ab somehow, or an alternative HTTP benchmarker that can do that?
Or will we have to write a script to start up multiple instances of ab with different strings? I'd rather have it all run from the same instance of ab, if possible, rather than start up 10,000 instances of ab.
Cheers,
Victor
A: JMeter has a random variable configuration element for HTTP Request sampling.
A: You can create redirect.php which will contain anything you want. Remember, redirect.php itself will create additional load.
<?php
$queries = array('query1', 'query2');
$query = $queries[rand(0, count($queries) - 1)];
header('Location: http://search.site.com/?q=' . urlencode($query));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1666842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: why does Electron "new Set([large iterable])" produce an empty set? So a simple thing like an 80-element array passed to new Set() produces an equally large set
new Set(Array(80).fill(0).map(Number.call, Number))
Set {0, 1, 2, 3, 4…}
However, when the array is larger (from any source, not just this quick filler function):
new Set(Array(5000).fill(0).map(Number.call, Number))
Set {}
An empty set is returned in Electron 1.4.15
Is there some limitation I can't find in the docs?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43334666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Best approach for binding native query parameters in Java when the total number of parameters differs I have a Spring REST API and I am using JPA to create a native query to fetch some data. But the query is not always the same: the number of parameters to be bound differs based on conditions. Here is the code that I am using:
public List<CustomObject> getMyData(String param1, String param2, String param3, int param4, .......) {
StringBuilder sql = new StringBuilder();
sql.append("SELECT //some columns")
.append(" FROM //some tables")
.append(" WHERE //joins and default conditions");
int counter = 0;
Map<Integer, Object> params = new LinkedHashMap<>();
if (!param1.isEmpty()) {
sql.append("AND column1 = ?");
counter++;
params.put(counter, param1);
}
if (!param2.isEmpty()) {
sql.append("AND column2 = ?");
counter++;
params.put(counter, param2);
}
// .... the same all remaining params
try {
Query q = emORA.createNativeQuery(sql.toString());
params.entrySet().forEach(entry -> {
q.setParameter(entry.getKey(), entry.getValue());
});
//execute query and fetch results
} catch () {
//handle exeptions
}
}
My question is:
Because the total number of parameters varies, I use a LinkedHashMap to store the position and the actual value to be bound. Is there a better approach than using a LinkedHashMap for this situation?
Thank you all in advance.
A: This seems like a frail solution (but you know your constraints). For example, right now this SQL will be invalid because when you append the parameter strings you forgot to put a leading whitespace. And maybe you'll need to have different comparators in the future, not only =. The Criteria API seems like a better tool for the job.
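If you do stay with native SQL, one alternative to the position-keyed LinkedHashMap is a plain List<Object>: the list index already encodes the (1-based) JDBC position, so the map bookkeeping disappears. A self-contained sketch of that idea (table and column names are made up, and the JPA binding step is shown only as a comment):

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicQuery {
    // Builds the SQL and the ordered parameter list in one pass;
    // params.get(i) belongs to JDBC position i + 1.
    public static String build(List<Object> params, String param1, String param2) {
        StringBuilder sql = new StringBuilder("SELECT c FROM t WHERE 1 = 1");
        if (param1 != null && !param1.isEmpty()) {
            sql.append(" AND column1 = ?");
            params.add(param1);
        }
        if (param2 != null && !param2.isEmpty()) {
            sql.append(" AND column2 = ?");
            params.add(param2);
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        List<Object> params = new ArrayList<>();
        String sql = build(params, "foo", "");
        System.out.println(sql);    // only the column1 condition was appended
        System.out.println(params); // [foo]
        // binding against a JPA Query would then be:
        // for (int i = 0; i < params.size(); i++) q.setParameter(i + 1, params.get(i));
    }
}
```

The list also makes the leading-whitespace mistake harder to repeat, since each condition fragment is appended together with its parameter in one place.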
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67161253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to properly display HTML entities in Perl I was writing a web crawler using Perl, and I noticed some weird behavior when I tried to display strings using HTML::Entities::decode_entities.
I was handling strings that contain Chinese characters and strings like Jìngyè.
I used HTML::Entities::decode_entities to decode Chinese characters, which works well. However, when the string contains no Chinese characters, the string is displayed weirdly (J�ngy�).
I wrote a small code to test different behaviors on 2 strings.
String 1 is "No. 22, J�ngy� 3rd Road, Jhongshan District, Taipei City, Taiwan 10466" and string 2 is "104 Taiwan Taipei City Jhongshan District J�ngy� 3rd Road 20號".
Below is my code:
print "before: $1\n";
my $decoded = HTML::Entities::decode_entities($1."號");#I add the last character just for testing
print "decoded $decoded\n";
my $chopped = substr($decoded, 0, -1);
print "chopped: $chopped\n";
These are my results:
before: No. 22, J�ngy� 3rd Road, Jhongshan District, Taipei City, Taiwan 10466
decoded No. 22, Jìngyè 3rd Road, Jhongshan District, Taipei City, Taiwan 10466號 (correct)
chopped: No. 22, J�ngy� 3rd Road, Jhongshan District, Taipei City, Taiwan 10466 (incorrect)
before: 104 Taiwan Taipei City Jhongshan District J�ngy� 3rd Road 20號
decoded 104 Taiwan Taipei City Jhongshan District Jìngyè 3rd Road 20號號 (correct)
chopped: 104 Taiwan Taipei City Jhongshan District Jìngyè 3rd Road 20號 (correct)
Can someone please explain me why was this happening? And how to solve this so that my String will display properly.
Thank you very much.
Sorry, I did not make my question clear. Below is the code I wrote, where the URL is http://maps.google.com/maps/place?cid=10931902633578573013:
sub getInfoURLs {
my ($url) = @_;
unless (defined $url){
print 'URL was not defined when extracting info\n';
return 0;
}
my $contain_request = LWP::UserAgent->new->get($url);
if($contain_request -> is_success){
my $contain_content = $contain_request -> decoded_content;
#store address
if ($contain_content =~ m/$address_pattern/i){
print "before: $1\n";
my $decoded = HTML::Entities::decode_entities($1."號");
print "decoded $decoded\n";
my $chopped = substr($decoded, 0, -1);
print "chopped: $chopped\n";
#unicode conversion
#store in database
}
}
}
A: First, always use use strict; use warnings;!!!
The problem is that you're not encoding your output. File handles can only transmit bytes, but you're passing decoded text.
Perl will output UTF-8 (-ish) when you pass something that's obviously wrong. chr(0x865F) is obviously not a byte, so:
$ perl -we'print "\xE8\x{865F}\n"'
Wide character in print at -e line 1.
è號
But it's not always obvious that something is wrong. chr(0xE8) could be a byte, so:
$ perl -we'print "\xE8\n"'
�
The process of converting a value into a series of bytes is called "serialization". The specific case of serializing text is known as character encoding.
Encode's encode is used to provide character encoding. You can also have encode called automatically using the open module.
$ perl -we'use open ":std", ":locale"; print "\xE8\x{865F}\n"'
è號
$ perl -we'use open ":std", ":locale"; print "\xE8\n"'
è
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7386125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do you make a function similar to jar -uf in Java? Using the standard Java API (i.e. the user doesn't need the JDK to run it), how do you make a function similar to jar -uf in Java? I have an idea that it will use the Zip library, unzip the .jar file, change the contents, and then rezip it, but I'm a newbie at Java, so I cannot really make it. Could you give an example or a function?
Thanks!
Edit: I think I found the answer here: How to use JarOutputStream to create a JAR file?
I'll see if it works.
A: Take a look at JarInputStream, JarOutputStream, ZipInputStream, and ZipOutputStream. They are part of the standard class library (java.util.jar and java.util.zip), so external libraries are not required. Pay attention that, unlike other streams, these require you to keep track of the current entry. You can find a lot of examples of how to use the Java zip API.
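A rough, self-contained sketch of what jar -uf does, using only those standard-library classes (entry names and helper methods here are illustrative): copy every entry from the old jar except the one being updated, then append the new bytes.

```java
import java.io.*;
import java.util.*;
import java.util.jar.*;

public class JarUpdateDemo {
    // Rewrites src into dest, replacing (or adding) entryName --
    // a minimal stand-in for `jar -uf`.
    static void updateJar(File src, File dest, String entryName, byte[] content) {
        try (JarInputStream in = new JarInputStream(new FileInputStream(src));
             JarOutputStream out = new JarOutputStream(new FileOutputStream(dest))) {
            JarEntry je;
            while ((je = in.getNextJarEntry()) != null) {
                if (je.getName().equals(entryName)) continue; // drop the old copy
                out.putNextEntry(new JarEntry(je.getName()));
                in.transferTo(out); // copy the current entry's bytes
                out.closeEntry();
            }
            out.putNextEntry(new JarEntry(entryName)); // append the updated entry
            out.write(content);
            out.closeEntry();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Lists the entry names of a jar (for checking the result)
    static List<String> entryNames(File jar) {
        List<String> names = new ArrayList<>();
        try (JarInputStream in = new JarInputStream(new FileInputStream(jar))) {
            JarEntry je;
            while ((je = in.getNextJarEntry()) != null) names.add(je.getName());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return names;
    }

    // Creates a throwaway jar with a single entry, for the demo below
    static File makeJar(String entryName, byte[] content) {
        try {
            File f = File.createTempFile("demo", ".jar");
            try (JarOutputStream out = new JarOutputStream(new FileOutputStream(f))) {
                out.putNextEntry(new JarEntry(entryName));
                out.write(content);
                out.closeEntry();
            }
            return f;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        File src = makeJar("old.txt", "v1".getBytes());
        File dest = new File(src.getParent(), "updated.jar");
        updateJar(src, dest, "new.txt", "v2".getBytes());
        System.out.println(entryNames(dest)); // [old.txt, new.txt]
    }
}
```

Note that a zip/jar cannot be modified in place with these streams; the rewrite-and-replace pattern above is the usual workaround.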
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9280670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I keep Cognos from splitting my subgroups across pages? I am reporting off of a cube into a crosstab with the results being summarized. For example:
| Val 1 | Val 2
----------------------------------------------------
Division 1 | Region 1 | Location 1 | 5.00 | 3.00
| |-----------------------------
| | Location 2 | 2.00 | 6.00
| |-----------------------------
| | Reg Sum | 7.00 | 9.00
|----------------------------------------
| Region 2 | Location 1 | 4.00 | 3.00
| |-----------------------------
| | Location 2 | 12.80 | 7.40
| |-----------------------------
| | Location 3 | 5.00 | 11.00
| |-----------------------------
| | Reg Sum | 21.80 | 21.40
|----------------------------------------
| Div Sum | 28.80 | 30.40
----------------------------------------------------
Division 2 | Region 1 | Location 1 | 3.00 | 12.85
etc...
Currently, Cognos is breaking the page in the middle of the Division 1/Region 2 subgroup. If the whole region subgroup (including summary) doesn't fit on the remainder of the page, I want it to go to the next page.
I've tried pagination at the region and location level. That has not given me the results I want. Any ideas as to how I can achieve this?
EDIT: Forgot to mention that this is Cognos 8.4.
A: Well, a little bit outdated, but nevertheless here is what worked for me:
*
*unlock
*place a 1x1 table object inside the Region's cell
*place the region's name inside the table object (just drag it inside the table)
*in the table's pagination properties, deselect the option "Allow row contents to break across pages"
You might also need to specify the height of the table object.
A: You can use Page breaks to achieve this.
In your example, click on the Region data item. From the Structure menu, click "Set Page Break using Master/Detail".
That will do the trick.
Please note that for a crosstab or chart, Report Studio creates the page break using a Master/Detail relationship. If you were using a List or repeater, then you could use the option of creating a page break without a Master/Detail relationship.
Please refer to IBM Infocenter for more information on Page Breaks and Page Sets. http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/index.jsp?topic=/com.ibm.swg.im.cognos.c8bi.doc/welcome.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8759411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can't perform a React state update on an unmounted component. This is a no-op That's the warning in the console,
Warning: Can't perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application. To fix, cancel all subscriptions and asynchronous tasks in a useEffect cleanup function.
Here is my code
const [index, setIndex] = useState(0);
const [refreshing, setRefreshing] = useState(false);
const refContainer: any = useRef();
const [selectedIndex, setSelectedIndex] = useState(0);
const navigation = useNavigation();
useEffect(() => {
refContainer.current.scrollToIndex({animated: true, index});
}, [index]);
const theNext = (index: number) => {
if (index < departments.length - 1) {
setIndex(index + 1);
setSelectedIndex(index + 1);
}
};
setTimeout(() => {
theNext(index);
if (index === departments.length - 1) {
setIndex(0);
setSelectedIndex(0);
}
}, 4000);
const onRefresh = () => {
if (refreshing === false) {
setRefreshing(true);
setTimeout(() => {
setRefreshing(false);
}, 2000);
}
};
What should I do to clean things up?
I have tried many things but the warning doesn't disappear.
A: setTimeout needs to be used inside useEffect instead, and the timeout should be cleared in the cleanup function returned from it:
useEffect(() => {
const timeOut = setTimeout(() => {
theNext(index);
if (index === departments.length - 1) {
setIndex(0);
setSelectedIndex(0);
}
}, 4000);
return () => {
if (timeOut) {
clearTimeout(timeOut);
}
};
}, []);
A: Here is a simple solution. First of all, you have to clear all the timers like this:
useEffect(() => {
    return () => { /* clear timers here, e.g. clearTimeout(timerId) */ };
}, []);
and then use a ref to track whether the component is still mounted:
import React, { useEffect,useRef, useState } from 'react'
const Example = () => {
const isScreenMounted = useRef(true)
useEffect(() => {
isScreenMounted.current = true
return () => isScreenMounted.current = false
},[])
const somefunction = () => {
// put this statement before every state update and you will never get that warning
if(!isScreenMounted.current) return;
/// put here state update function
}
return null
}
export default Example;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68724254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: TypeError: Cannot read property 'Symbol(asyncId)' of null I have a Node.js-based Express server which should handle some database operations. Currently, I am trying to GET some information from my SQL database, based on a provided email. While executing the operation below, I run into a strange error.
app.get("/character/:account_email", jsonparser, (req, res) => {
database.query("SELECT account_email, character_name, creation_date, blood, money, experience_points, age, is_alive, place_name FROM vmpr_character WHERE account_email = ?", [req.params.account_email], (error, response, fields) => {
res.set({ "Content-Type": "application/json" });
if (!error) {
res.status(201);
res.write(JSON.stringify(response[0]));
database.query("SELECT attribute_name, level FROM character_attributes JOIN vmpr_character USING (account_email) WHERE character_attributes.account_email = ?", [req.params.account_email], (error2, response2, fields2) => {
if (!error2) {
res.write(JSON.stringify(response2));
} else {
res.status(502);
}
});
} else {
res.status(404);
res.write(JSON.stringify(dummy));
}
res.end();
});
});
TypeError: Cannot read property 'Symbol(asyncId)' of null
at write_ (_http_outgoing.js:636:24)
at ServerResponse.write (_http_outgoing.js:630:10)
at Query.database.query [as _callback] (/Users/nightmare/vmpr/components/database_server/NodeJS/server.js:103:10)
at Query.Sequence.end (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/protocol/sequences/Sequence.js:86:24)
at Query._handleFinalResultPacket (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/protocol/sequences/Query.js:137:8)
at Query.EofPacket (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/protocol/sequences/Query.js:121:8)
at Protocol._parsePacket (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/protocol/Protocol.js:280:23)
at Parser.write (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/protocol/Parser.js:75:12)
at Protocol.write (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/protocol/Protocol.js:39:16)
at Socket.<anonymous> (/Users/nightmare/vmpr/components/database_server/NodeJS/node_modules/mysql/lib/Connection.js:103:28)
The same error occurs while POSTing some data into the same DB. The server crashes on both POST and GET, even though the data provided is still saved normally. Executing these queries in MySQL directly gets me the correct results. All logs are fine.
EDIT:
* With the provided update, the error is gone, the new error is much better to handle.
* Everything is fine now. Thank you!
A: This will be fixed in node v8.2.1 which should be landing today.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45224093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: MVC4 ASP.NET - Microsoft.AspNet.WebPages.OAuth gives Runtime error I'm pretty new to ASP.NET and Visual Studio, but when I installed the Microsoft.AspNet.WebPages.OAuth package with the NuGet Package Manager, the project now gets a Runtime Error. I was following this answer from an earlier Stack Overflow question: How to add ASP.NET Membership Provider in a Empty MVC 4 Project Template?
Anyone know why this happens?
I would like to give you some code, but I'm not quite sure what code to provide. The following lines were at least added to the config file after installing the package; might those have broken the project?
<configSections>
<section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
<sectionGroup name="dotNetOpenAuth" type="DotNetOpenAuth.Configuration.DotNetOpenAuthSection, DotNetOpenAuth.Core">
<section name="messaging" type="DotNetOpenAuth.Configuration.MessagingElement, DotNetOpenAuth.Core" requirePermission="false" allowLocation="true" />
<section name="reporting" type="DotNetOpenAuth.Configuration.ReportingElement, DotNetOpenAuth.Core" requirePermission="false" allowLocation="true" />
<section name="openid" type="DotNetOpenAuth.Configuration.OpenIdElement, DotNetOpenAuth.OpenId" requirePermission="false" allowLocation="true" />
<section name="oauth" type="DotNetOpenAuth.Configuration.OAuthElement, DotNetOpenAuth.OAuth" requirePermission="false" allowLocation="true" />
</sectionGroup>
</configSections>
In advance, thanks for the help.
Updated:
Error message: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
A: There are at least two separate problems here.
The first is the inclusion of the automatically generated
<sectionGroup name="dotNetOpenAuth" type="DotNetOpenAuth.Configuration.DotNetOpenAuthSection, DotNetOpenAuth.Core">
<section name="messaging" type="DotNetOpenAuth.Configuration.MessagingElement, DotNetOpenAuth.Core" requirePermission="false" allowLocation="true" />
<section name="reporting" type="DotNetOpenAuth.Configuration.ReportingElement, DotNetOpenAuth.Core" requirePermission="false" allowLocation="true" />
<section name="openid" type="DotNetOpenAuth.Configuration.OpenIdElement, DotNetOpenAuth.OpenId" requirePermission="false" allowLocation="true" />
<section name="oauth" type="DotNetOpenAuth.Configuration.OAuthElement, DotNetOpenAuth.OAuth" requirePermission="false" allowLocation="true" />
</sectionGroup>
Apparently in earlier versions of MVC this code was necessary. However, it is currently redundant and, worse, in conflict with an identical block contained in machine.config in the .NET 4.5 framework. By carefully removing these blocks in an editor outside VS2013 (at least in my setup), this problem will be resolved.
I say an external editor because some catch-22 in VS2013 prevents the editing, AND SAVING, of this block from the web.config sections in the solution. I use Notepad++.
The second problem may have to do with the references to System.Web.WebPages.Razor in your entire solution. If the version number is not 3.0.0.0 (e.g. 2.0.0.0), then update it and make sure NuGet has loaded 3.0.0.0.
The entire dotNetOpenAuth package has self-induced problems with earlier versions of MVC solutions.
I am trying to use examples from the book ASP.NET Web API Security. It contains brilliant code, but huge problems are encountered when attempting to run the solutions that include dotNetOpenAuth code. The problems all have to do with MVC levels and dotNetOpenAuth, not the examples themselves.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27817814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Got code 422 error when converting cURL to Google Apps Script Here is what the documentation says:
Use the code parameter value to make the following request to the OAuth token endpoint in the API with the authorization_code grant type:
curl --location --request POST 'https://api.deliverr.com/oauth/v1/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'code={received_code_value}' \
--data-urlencode 'grant_type=authorization_code'
And I tried to use Google Apps Script and wrote the code like below:
function testGetToken(){
var url = "https://api.staging.deliverr.com/oauth/v1/token"
/*const payload = {
'code':'this is code',
'grant-type':'authorization_code'
};*/
var headers = {
"code": "this is code",
"grant_type": "authorization_code",
"Content-Type": "application/x-www-form-urlencoded"
};
const options = {
'method': 'POST',
'header': headers
//'payload': payload
};
var response = UrlFetchApp.fetch(url, options);
Logger.log(response.getContentText());
}
No matter whether I put code and grant_type in the payload or the header,
they all return the same message:
Exception: Request failed for https://api.staging.deliverr.com returned code 422.
Truncated server response:
{"code":422,"message":"{"fields":{"request.grant_type":
{"message":"'grant_type' is required"}}}\n
Please refer to Deliverr API documentation...
(use muteHttpExceptions option to examine full response)
What is going on with my code? Is it a urlencode problem or something else? How can I make it work?
A: I believe your goal is as follows.
*
*You want to convert the following curl command to Google Apps Script.
curl --location --request POST 'https://api.deliverr.com/oauth/v1/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'code={received_code_value}' \
--data-urlencode 'grant_type=authorization_code'
*You have already confirmed that your curl command worked fine.
In this case, how about the following modification?
Modified script:
function testGetToken() {
const url = "https://api.deliverr.com/oauth/v1/token";
const payload = {
'code': '{received_code_value}',
'grant_type': 'authorization_code'
};
const options = { payload };
const response = UrlFetchApp.fetch(url, options);
Logger.log(response.getContentText());
}
*
*When payload is used with UrlFetchApp, the POST method is automatically used.
*The default content type of the request header is application/x-www-form-urlencoded.
*In your curl command, the data is sent as the form.
Note:
*
*I think that the request of the above sample script is the same as your curl command. But, if an error occurs, please confirm your '{received_code_value}' and 'authorization_code' again.
Reference:
*
*fetch(url, params)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74523661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to write unit tests for a function which is accessing AWS resources? I have a function which is accessing multiple AWS resources, and now I need to test this function, but I don't know how to mock these resources.
I have tried following the GitHub page of aws-sdk-mock, but didn't get much there.
function someData(event, configuration, callback) {
// sts set-up
var sts = new AWS.STS(configuration.STS_CONFIG);
sts.assumeRole({
DurationSeconds: 3600,
RoleArn: process.env.CROSS_ACCOUNT_ROLE,
RoleSessionName: configuration.ROLE_NAME
}, function(err, data) {
if (err) {
// an error occurred
console.log(err, err.stack);
} else {
// successful response
// resolving static credential
var creds = new AWS.Credentials({
accessKeyId: data.Credentials.AccessKeyId,
secretAccessKey: data.Credentials.SecretAccessKey,
sessionToken: data.Credentials.SessionToken
});
// Query function
var dynamodb = new AWS.DynamoDB({apiVersion: configuration.API_VERSION, credentials: creds, region: configuration.REGION});
var docClient = new AWS.DynamoDB.DocumentClient({apiVersion: configuration.API_VERSION, region: configuration.REGION, endpoint: configuration.DDB_ENDPOINT, service: dynamodb });
// extract params
var ID = event.queryStringParameters.Id;
console.log('metrics of id ' + ID);
var params = {
TableName: configuration.TABLE_NAME,
ProjectionExpression: configuration.PROJECTION_ATTR,
KeyConditionExpression: '#ID = :ID',
ExpressionAttributeNames: {
'#ID': configuration.ID
},
ExpressionAttributeValues: {
':ID': ID
}
};
queryDynamoDB(params, docClient).then((response) => {
console.log('Params: ' + JSON.stringify(params));
// if the query is Successful
if( typeof(response[0]) !== 'undefined'){
response[0]['Steps'] = process.env.STEPS;
response[0]['PageName'] = process.env.STEPS_NAME;
}
console.log('The response you get', response);
var success = {
statusCode: HTTP_RESPONSE_CONSTANTS.SUCCESS.statusCode,
body: JSON.stringify(response),
headers: {
'Content-Type': 'application/json'
},
isBase64Encoded: false
};
return callback(null, success);
}, (err) => {
// return internal server error
return callback(null, HTTP_RESPONSE_CONSTANTS.BAD_REQUEST);
});
}
});
}
This is the Lambda function which I need to test; there are also some env variables being used here.
Now, I tried writing a unit test for the above function using aws-sdk-mock, but I am still not able to figure out how to actually do it. Any help will be appreciated. Below is my test code:
describe('test getMetrics', function() {
var expectedOnInvalid = HTTP_RESPONSE_CONSTANTS.BAD_REQUEST;
it('should assume role ', function(done){
var event = {
queryStringParameters : {
Id: '123456'
}
};
AWS.mock('STS', 'assumeRole', 'roleAssumed');
AWS.restore('STS');
AWS.mock('Credentials', 'credentials');
AWS.restore('Credentials');
AWS.mock('DynamoDB.DocumentClient', 'get', 'message');
AWS.mock('DynamoDB', 'describeTable', 'message');
AWS.restore('DynamoDB');
AWS.restore('DynamoDB.DocumentClient');
someData(event, configuration, (err, response) => {
expect(response).to.deep.equal(expectedOnInvalid);
done();
});
});
});
I am getting the following error :
{ MultipleValidationErrors: There were 2 validation errors:
* MissingRequiredParameter: Missing required key 'RoleArn' in params
* MissingRequiredParameter: Missing required key 'RoleSessionName' in params
A: I strongly disagree with @ttulka's answer, so I have decided to add my own as well.
Given you received an event in your Lambda function, it's very likely you'll process the event and then invoke some other service. It could be a call to S3, DynamoDB, SQS, SNS, Kinesis...you name it. What is there to be asserted at this point?
Correct arguments!
Consider the following event:
{
"data": "some-data",
"user": "some-user",
"additionalInfo": "additionalInfo"
}
Now imagine you want to invoke documentClient.put and you want to make sure that the arguments you're passing are correct. Let's also say that you DON'T want the additionalInfo attribute to be persisted, so, somewhere in your code, you'd have this to get rid of this attribute
delete event.additionalInfo
right?
You can now create a unit test to assert that the correct arguments were passed into documentClient.put, meaning the final object should look like this:
{
"data": "some-data",
"user": "some-user"
}
Your test must assert that documentClient.put was invoked with a JSON which deep equals the JSON above.
If you or any other developer now, for some reason, removes the delete event.additionalInfo line, tests will start failing.
And this is very powerful! If you make sure that your code works the way you expect, you basically don't have to worry about creating integration tests at all.
Now, if a SQS consumer Lambda expects the body of the message to contain some field, the producer Lambda should always take care of it to make sure the right arguments are being persisted in the Queue. I think by now you get the idea, right?
I always tell my colleagues that if we can create proper unit tests, we should be good to go in 95% of cases, leaving integration tests out. Of course it's better to have both, but considering the time spent on integration tests (setting up environments, credentials, sometimes even separate accounts), they are often not worth it. But that's just MY opinion. Both you and @ttulka are more than welcome to disagree.
Now, back to your question:
You can use Sinon to mock and assert arguments in your Lambda functions. If you need to mock a 3rd-party service (like DynamoDB, SQS, etc), you can create a mock object and replace it in your file under test using Rewire. This usually is the road I ride and it has been great so far.
A: Try setting aws-sdk module explicitly.
Project structures that don't include the aws-sdk at the top level node_modules project folder will not be properly mocked. An example of this would be installing the aws-sdk in a nested project directory. You can get around this by explicitly setting the path to a nested aws-sdk module using setSDK().
const AWSMock = require('aws-sdk-mock');
const AWS = require('aws-sdk');
AWSMock.setSDKInstance(AWS);
For more details on this : Read aws-sdk-mock documentation, they have explained it even better.
A: I see unit testing as a way to check if your domain (business) rules are met.
As long as your Lambda contains nothing but integration of AWS services, it doesn't make much sense to write a unit test for it.
To mock all the resources means, your test will be testing only communication among those mocks - such a test has no value.
External resources mean input/output, this is what integration testing focuses on.
Write integration tests and run them as a part of your integration pipeline against real deployed resources.
A: This is how we can mock STS in nodeJs.
import { STS } from 'aws-sdk';
export default class GetCredential {
constructor(public sts: STS) { }
public async getCredentials(role: string) {
console.info('Retrieving credential...', { role });
const apiRole = await this.sts
.assumeRole({
RoleArn: role,
RoleSessionName: 'test-api',
})
.promise();
if (!apiRole?.Credentials) {
throw new Error(`Credentials for ${role} could not be retrieved`);
}
return apiRole.Credentials;
}
}
Mock for the above function
import { STS } from 'aws-sdk';
import GetCredential from './GetCredential';
const sts = new STS();
let testService: GetCredential;
beforeEach(() => {
testService = new GetCredential(sts);
});
describe('Given getCredentials has been called', () => {
it('The method returns a credential', async () => {
const credential = {
AccessKeyId: 'AccessKeyId',
SecretAccessKey: 'SecretAccessKey',
SessionToken: 'SessionToken'
};
const mockGetCredentials = jest.fn().mockReturnValue({
promise: () => Promise.resolve({ Credentials: credential }),
});
testService.sts.assumeRole = mockGetCredentials;
const result = await testService.getCredentials('fakeRole');
expect(result).toEqual(credential);
});
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55350667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: microcontroller output to python cgi script I bought this temperature sensor logger kit: http://quozl.netrek.org/ts/. It works great with the supplied C code, but I like Python for its simplicity, so I wrote a script in Python that displays the output from the microcontroller. I only have one temperature sensor hooked up to the kit. I want the temperature to be displayed on a web page, but can't seem to figure it out. I'm pretty sure it has something to do with the micro's output having \r\n DOS line endings, which the Linux web server doesn't interpret properly. The book I have says "Depending on the web server you are using, you might need to make configuration changes to understand how to serve CGI files." I am using Debian and Apache2, and basic CGI scripts work fine.
Here is my code for just displaying the sensor to the console (this works fine):
import serial

ser = serial.Serial('/dev/ttyS0', 2400)
while 1:
    result = ser.readline()
    if result:
        print result
Here is my test.cgi script that works:
#!/usr/bin/python
print "Content-type: text/html\n"
print "<title>CGI Text</title>\n"
print "<h1>cgi works!</h1>"
Here is the cgi script I have started to display temp (doesn't work - 500 internal server error):
#!/usr/bin/python
import sys, serial
sys.stderr = sys.stdout
ser = serial.Serial('/dev/ttyS0', 2400)
print "Content-type: text/html\n"
print """
<title>Real Time Temperature</title>
<h1>Real Time Temperature:</h1>
"""
#result = ser.readline()
#if result:
print ser.readline()
If i run python rtt.cgi in the console it outputs the correct html and temperature, I know this will not be real time and that the page will have to be reloaded every time that the user wants to see the temperature, but that stuff is coming in the future.. From my apache2 error log it says:
malformed header from script. Bad header= File "/usr/lib/cgi-bin/rtt.c: rtt.cgi
A: I'm guessing that the execution context under which your CGI is running is unable to complete the read() from the serial port.
Incidentally the Python standard libraries have MUCH better ways for writing CGI scripts than what you're doing here; and even the basic string handling offers a better way to interpolate your results (assuming you code has the necessary permissions to read() them) into the HTML.
At least I'd recommend something like:
#!/usr/bin/python
import sys, serial
sys.stderr = sys.stdout
ser = serial.Serial('/dev/ttyS0', 2400)
html = """Content-type: text/html
<html><head><title>Real Time Temperature</title></head><body>
<h1>Real Time Temperature:</h1>
<p>%s</p>
</body></html>
""" % ser.readline() # should be cgi.escape(ser.readline())!
ser.close()
sys.exit(0)
Notice we just interpolate the results of ser.readline() into our string using the
% string operator. (Incidentally your HTML was missing <html>, <head>, <body>, and <p> (paragraph) tags).
There are still problems with this. For example, we really should at least import cgi and wrap the foreign data in cgi.escape() to ensure that HTML entities are properly substituted for any reserved characters, etc.
I'd suggest further reading: [Python Docs]: http://docs.python.org/library/cgi.html
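On modern Python 3, cgi.escape is gone and html.escape is its replacement. A minimal sketch of the escaping step, with the serial reading replaced by a fixed string since no port is available here:

```python
import html

# Stand-in for ser.readline(); a real reading could contain anything,
# including characters that are reserved in HTML.
reading = '21.5 C <sensor-1>'

page = """Content-type: text/html

<html><head><title>Real Time Temperature</title></head><body>
<h1>Real Time Temperature:</h1>
<p>%s</p>
</body></html>
""" % html.escape(reading)

print(page)
```

The angle brackets in the reading come out as `&lt;` and `&gt;`, so a stray character from the sensor can never break (or inject into) the page markup.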
A: one more time:
# Added to allow cgi-bin to execute cgi, python and perl scripts
ScriptAlias /cgi-bin/ /var/www/cgi-bin/
AddHandler cgi-script .cgi .py .pl
<Directory /var/www>
Options +ExecCGI
AddHandler cgi-script .cgi .py .pl
</Directory>
A: Michael,
It looks like the issue is definitely permissions; however, you shouldn't try to give your script the permissions of /dev/ttyS0. What you will probably need to do is spawn another process which first changes its group to the group of the /dev/ttyS0 device. On my box that's 'dialout'; yours may be different.
You'll need to import the os package; look in the docs for Process Parameters, where you will find functions that let you change your group. You will also need one of the functions in Process Management (also in the os package) that spawns a process and returns its output; the subprocess package may be better for this.
The reason you need to spawn another process is that the CGI script need to run under the Apache process and the spawn process needs to access the serial port.
If I get a chance in the next few days I'll try to put something together for you, but give it a try, don't wait for me.
Also, one other thing: the HTTP header section must end with two CRLF sequences. So your header should be:
print "Content-type: text/html\r\n\r\n"
If you don't do this your browser may not know when the header ends and the entity data begins. Read RFC-2616
~Carl
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1291624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Select top N after aggregating by key and another field in pyspark RDD assuming a data set like this:
(teamId, player, minutesPlayed)
(1, 'A', 33)
(1, 'A', 12)
(1, 'B', 5)
(2, 'C', 22)
(2, 'C', 15)
(2, 'C', 33)
(2, 'D', 0)
......
So every tuple represents how many minutes player played in a game per team.
So I would like to aggregate data per player per team and select top 10 most playing players per team. Assuming a much larger data set of course.
So let's assume team 1 has 15 players, we want to get top 10 by minutesPlayed
Resulting data set would be:
(1, 'A', 350)
(1, 'B', 330)
#... rest 8 players of team 1
(2, 'C', 500)
(2, 'D', 330)
(2, 'E', 250)
#... rest 7 players of team 2
#.... rest of team with 10 players with most minutes
def map_players(data):
    teamId = data[0]
    player = data[1]
    minutesPlayed = data[2]
    # key on the (team, player) combination (assuming this is the way to do it)
    return ((teamId, player), minutesPlayed)

def reduce_players(p1, p2):
    # really not sure what to do here
    # p1 and p2 are just the minutes played (int)

result:
player_data_set.map(map_players).reduceByKey(reduce_players).collect() # take(10)?
(1, 'A', 350)
(1, 'B', 330)
#... rest 8 players of team 1
(2, 'C', 500)
(2, 'D', 330)
(2, 'E', 250)
#... rest 7 players of team 2
#.... rest of team with 10 players with most minutes
I would like to do everything within reduceByKey reducer and use only .map method for mapping.
A: If you convert your data into a pyspark Dataframe you could do it like this:
from pyspark.sql import functions as F, Window
(
df
.groupBy('teamId', 'player')
.agg(F.sum('minutesPlayed').alias('minutesPlayedTotal'))
    .withColumn('rank', F.row_number().over(Window.partitionBy('teamId').orderBy(F.desc('minutesPlayedTotal'))))
.where('rank <= 10')
.show()
)
A: If you want to use RDD then you can do something like this:

* reduce using teamId + player as key to calculate the total minutes played by each player
* reduce again, this time using only teamId as key, to get the list of players with their minutes-played totals for each team
* flatMap the values to sort each list of (player, count) in descending order and take the first 10 values

Example:
from operator import add

rdd = spark.sparkContext.parallelize([
    (1, 'A', 33), (1, 'A', 12), (1, 'B', 5), (2, 'C', 22),
    (2, 'C', 15), (2, 'C', 33), (2, 'D', 0)
])

rdd1 = rdd.map(lambda x: ((x[0], x[1]), x[2])) \
    .reduceByKey(add) \
    .map(lambda x: (x[0][0], [(x[0][1], x[1])])) \
    .reduceByKey(add) \
    .flatMapValues(lambda x: sorted(x, key=lambda a: a[1], reverse=True)[:10])

for p in rdd1.collect():
    print(p)
#(1, ('A', 45))
#(1, ('B', 5))
#(2, ('C', 70))
#(2, ('D', 0))
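The same aggregate-then-top-N logic can be written in plain Python, which is handy for checking the RDD version against small samples (this is a reference sketch, not a Spark API):

```python
from collections import defaultdict
from heapq import nlargest

# Sample rows: (teamId, player, minutesPlayed), as in the question.
rows = [
    (1, 'A', 33), (1, 'A', 12), (1, 'B', 5), (2, 'C', 22),
    (2, 'C', 15), (2, 'C', 33), (2, 'D', 0),
]

# Step 1: total minutes per (team, player) -- the first reduceByKey.
totals = defaultdict(int)
for team, player, minutes in rows:
    totals[(team, player)] += minutes

# Step 2: group players under their team -- the second reduceByKey.
by_team = defaultdict(list)
for (team, player), total in totals.items():
    by_team[team].append((player, total))

# Step 3: top 10 per team by minutes -- the flatMapValues + sort.
top10 = {team: nlargest(10, players, key=lambda p: p[1])
         for team, players in by_team.items()}

print(top10)  # {1: [('A', 45), ('B', 5)], 2: [('C', 70), ('D', 0)]}
```

Each step mirrors one stage of the RDD chain, so if the two disagree on a sample you know which stage to look at.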
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70710059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Updating state in a class based component and re-rendering it First I define some state in the class and render it using props; then I try to update the state when a button is clicked by adding an event handler to the button. But
when I console.log the state after changeNameHandler runs, I see the previous state instead of the updated one.
this is my App.js file
import React, { Component } from 'react';
import Person from './Person/Person';
import './App.css';
class App extends Component {
state = {
persons: [
{ name: 'sagar', age: 22 },
{ name: 'nitin', age: 18 },
{ name: 'ankita', age: 21 },
],
otherData: 'some other value',
};
changeNameHandler = () => {
this.setState({
persons: [
{ name: 'sagar', age: 22 },
{ name: 'nitin', age: 18 },
{ name: 'ankita-nanda', age: 91 },
],
});
console.log(this.state);
};
render() {
return (
<div className='App'>
<h1>Hello world!!!!</h1>
<Person
name={this.state.persons[0].name}
age={this.state.persons[0].age}
/>
<Person
name={this.state.persons[1].name}
age={this.state.persons[1].age}
>
i'm going to college
</Person>
<Person
name={this.state.persons[2].name}
age={this.state.persons[2].age}
/>
<button onClick={this.changeNameHandler}>Switch Names</button>
</div>
);
// return React.createElement(
// 'div',
// { className: 'App' },
// React.createElement('h1', null, 'hope this works!!')
// );
}
}
export default App;
and this is my Person.js file
import React from 'react';
const person = (props) => {
return (
<div>
<p>
        Hey, my name is {props.name} and I'm {props.age} years old!
</p>
<p>
{props.children}--{props.name}
</p>
</div>
);
};
export default person;
this is the screenshot of the console output
A: setState is async, so if you console.log right after calling it in your click handler you will still see the old state.
If you want to check the updated state, console.log it at the top of render, which runs again after the state updates.
render() {
console.log(this.state)
return (
<div className='App'>
<h1>Hello world!!!!</h1>
<Person
name={this.state.persons[0].name}
age={this.state.persons[0].age}
/>
<Person
name={this.state.persons[1].name}
age={this.state.persons[1].age}
>
i'm going to college
</Person>
<Person
name={this.state.persons[2].name}
age={this.state.persons[2].age}
/>
<button onClick={this.changeNameHandler}>Switch Names</button>
</div>
);
// return React.createElement(
// 'div',
// { className: 'App' },
// React.createElement('h1', null, 'hope this works!!')
// );
}
A: try this:
changeNameHandler = () => {
this.setState({
persons: [
{ name: 'sagar', age: 22 },
{ name: 'nitin', age: 18 },
{ name: 'ankita-nanda', age: 91 }
]
});
setTimeout(() => {
console.log(this.state);
}, 0);
};
setState() is asynchronous, so the next line of code runs before the state update has been applied.
Since the state is not updated yet, you get the previous state.
setTimeout() does the trick here because its callback runs after the state has been updated.
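A tiny framework-free simulation of why the inline log sees stale state (FakeComponent is a made-up stand-in here, not React itself, which batches updates in its own way):

```javascript
// A made-up component that defers state updates, as React may do.
class FakeComponent {
  constructor() {
    this.state = { name: 'ankita' };
  }
  setState(next, callback) {
    setTimeout(() => {
      this.state = { ...this.state, ...next };
      if (callback) callback(); // runs only after the update has landed
    }, 0);
  }
}

const c = new FakeComponent();
c.setState({ name: 'ankita-nanda' }, () => {
  console.log('in callback:', c.state.name); // 'ankita-nanda'
});
console.log('right after setState:', c.state.name); // still 'ankita'
```

In real React code the equivalent check is the documented second argument of setState, e.g. `this.setState({...}, () => console.log(this.state))`, which avoids guessing a setTimeout delay.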
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63763645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|