Why do you need the + between variables in javascript?
Why does this line work
$('#body-image').css("background-image", 'url('+ backgroundimage +')');
but not this one
$('#body-image').css("background-image", 'url('backgroundimage')');
or this one
$('#body-image').css("background-image", 'url(backgroundimage)');
This is not related to jQuery, but just to basic JavaScript (so the jQuery tag and title are inappropriate). In this particular case, + is the string concatenation operator.
Look at the syntax highlighting that SO provides. It's pretty telling.
Thanks for all the comments. I think I get it. You have to concatenate the variable into an existing string even though it's undefined. Makes sense, and yeah, I understand why the last example doesn't work now. Very helpful comments.
backgroundimage is a JavaScript variable. The concatenation operator in JavaScript is +, so to put a string together with a variable, you do 'some string ' + someVariable. Without the +'s, JavaScript wouldn't know what to do with your variable (and in your third example, wouldn't even know that it was a variable).
You need to concat the string with the variable backgroundimage. So you use "+" for this.
That's why this doesn't work.
$('#body-image').css("background-image", 'url('backgroundimage')');
And the second doesn't work because the browser would look for an image literally called 'backgroundimage'.
$('#body-image').css("background-image", 'url(backgroundimage)');
Because you are building a string. You are missing the line where backgroundimage gets a value:
var backgroundimage = "someimage.gif";
$('#body-image').css("background-image", 'url('+ backgroundimage +')');
becomes:
$('#body-image').css("background-image", 'url(someimage.gif)');
it's concatenating the string.
let's say backgroundimage is 'foo.jpg', then
'url('+backgroundimage+')' = 'url(foo.jpg)'
In JavaScript, a string literal (i.e., "I am a string") is actually treated like a String object (though, strictly speaking, it isn't - see the MDC documentation - but we can ignore the difference at this level). The following two lines are equivalent:
var letters = "ABC", numbers = "123";
var letters = new String("ABC"), numbers = new String("123");
Strings are concatenated using either the + operator or the String.concat method, either of which join 2 or more strings in a left-to-right order and return the result. So in order to get "ABC123", we can do any of the following:
"ABC" + "123"
"ABC" + numbers
letters + "123"
letters + numbers
"ABC".concat("123")
"ABC".concat(numbers)
letters.concat("123")
letters.concat(numbers)
but not:
letters"123"
"ABC"numbers
lettersnumbers
"lettersnumbers"
which are all, effectively, the same thing that you were trying to do in your examples.
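In modern JavaScript (ES2015+), template literals are an alternative to + that avoids this kind of mistake entirely. A small sketch (the variable name is taken from the question):

```javascript
// backgroundimage is the variable from the question
const backgroundimage = "someimage.gif";

// Concatenation with + joins the pieces explicitly
const withConcat = "url(" + backgroundimage + ")";

// A template literal interpolates the variable inside ${...}
const withTemplate = `url(${backgroundimage})`;

console.log(withConcat);   // url(someimage.gif)
console.log(withTemplate); // url(someimage.gif)
```

Either string can then be passed to .css("background-image", ...).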
How to make Delphi Prism indexed properties visible to C# when properties are not default
I have several Delphi Prism classes with indexed properties that I use a lot on my C# web applications (we are migrating a big Delphi Win32 system to ASP.Net). My problem is that it seems that C# can't see the indexed properties if they aren't the default properties of their classes. Maybe I'm doing something wrong, but I'm completely lost.
I know that this question looks a lot like a bug report, but I need to know if someone else knows how to solve this before I report a bug.
If I have a class like this:
TMyClass = public class
private
    ...
    method get_IndexedBool(index: Integer): boolean;
    method set_IndexedBool(index: Integer; value: boolean);
public
    property IndexedBool[index: Integer]: boolean
        read get_IndexedBool
        write set_IndexedBool; default; // make IndexedBool the default property
end;
I can use this class in C# like this:
var myObj = new TMyClass();
myObj[0] = true;
However, if TMyClass is defined like this:
TMyClass = public class
private
    ...
    method get_IndexedBool(index: Integer): boolean;
    method set_IndexedBool(index: Integer; value: boolean);
public
    property IndexedBool[index: Integer]: boolean
        read get_IndexedBool
        write set_IndexedBool; // IndexedBool is not the default property anymore
end;
Then the IndexedBool property becomes invisible in C#. The only way I can use it is doing this:
var myObj = new TMyClass();
myObj.set_IndexedBool(0, true);
I don't know if I'm missing something, but I can't see the IndexedBool property if I remove the default in the property declaration. Besides that, I'm pretty sure that it is wrong to have direct access to a private method of a class instance.
Any ideas?
I believe that C# 4.0 will support indexed properties, but anything before that sadly will not.
Unfortunately, what you are asking for is a limitation of C# and not of Delphi Prism. From the Delphi Prism documentation wiki page on Delphi Prism vs C#:
C# can only access the default indexed properties. In Delphi Prism, you can define and use other indexed properties using their name.
This page also outlines other areas where Delphi Prism code includes unique or extended features over C# which might be useful in your port.
Yes, I use that to implement indexed properties on some C# classes that are used in my applications, but the problem is with my Delphi Prism classes. I thought it was a Delphi Prism limitation, but as it works in VB, I think C# is the one to blame.
Yes, it is a C# limitation rather than a Delphi Prism limitation. You're having to work around the C# support rather than the Prism support.
Updated my answer to reflect.
I should have looked at the wiki page before asking here. Thank you for your answer.
It's unfortunate but that's the only way C# lets you access index properties. It's a C# compiler limitation (vb.net should do it fine).
Not beautiful or clean, but as a fast workaround you could make the indexed property accessors public. That way you can use them from C# to access the values.
how do I send my date from datepickerfragment to another fragment
I am new to Android development, learning through the Big Nerd Ranch Android (4e) book. This is an example from there, but the methods used in the book are deprecated, so how do I send the date to another fragment?
They are using the targetFragment technique, which is deprecated, so I am now stuck at this problem.
This is my datepickerfragment.kt file:
private const val ARG_DATE = "date"
private const val RESULT_DATE_KEY = "resultDate"
private const val ARG_REQUEST_CODE = "requestCode"
class DatePickerFragment : DialogFragment() {

    override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {
        val dateListener = DatePickerDialog.OnDateSetListener { _: DatePicker, year: Int, month: Int, day: Int ->
            val resultDate: Date = GregorianCalendar(year, month, day).time
            val result = Bundle().apply {
                putSerializable(RESULT_DATE_KEY, resultDate)
            }
            val resultRequestCode = requireArguments().getString(ARG_REQUEST_CODE, "")
        }

        val date = arguments?.getSerializable(ARG_DATE) as Date
        val calendar = Calendar.getInstance()
        calendar.time = date
        val initialYear = calendar.get(Calendar.YEAR)
        val initialMonth = calendar.get(Calendar.MONTH)
        val initialDay = calendar.get(Calendar.DAY_OF_MONTH)

        return DatePickerDialog(
            requireContext(),
            dateListener,
            initialYear,
            initialMonth,
            initialDay
        )
    }

    interface Callbacks {
        fun onDateSelected(date: Date)
    }

    companion object {
        fun getSelectedDate(result: Bundle) = result.getSerializable(RESULT_DATE_KEY) as Date

        fun newInstance(date: Date, requestCode: String): DatePickerFragment {
            val args = Bundle().apply {
                putSerializable(ARG_DATE, date)
                putString(ARG_REQUEST_CODE, requestCode)
            }
            return DatePickerFragment().apply {
                arguments = args
            }
        }
    }
}
This is my crimefragment.kt class:
private const val ARG_CRIME_ID = "crime_id"
private const val TAG = "CrimeFragment"
private const val REQUEST_DATE = "DialogDate"
class CrimeFragment : Fragment(), DatePickerFragment.Callbacks, FragmentResultListener {

    private lateinit var crime: Crime
    private lateinit var titleField: EditText
    private lateinit var dateButton: Button
    private lateinit var solvedCheckBox: CheckBox

    private val crimeDetailViewModel: CrimeDetailViewModel by lazy {
        ViewModelProvider(this).get(CrimeDetailViewModel::class.java)
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        crime = Crime()
        val crimeId: UUID = arguments?.getSerializable(ARG_CRIME_ID) as UUID
        Log.d(TAG, "args bundle crime Id:$crimeId")
        crimeDetailViewModel.loadCrime(crimeId)
    }

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val view = inflater.inflate(R.layout.fragment_crime, container, false)
        titleField = view.findViewById(R.id.crime_title) as EditText
        solvedCheckBox = view.findViewById(R.id.crime_solved1) as CheckBox
        dateButton = view.findViewById(R.id.crime_date)
        return view
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        crimeDetailViewModel.crimeLiveData.observe(
            viewLifecycleOwner,
            Observer { crime ->
                crime?.let {
                    this.crime = crime
                    updateUI()
                }
            }
        )
        childFragmentManager.setFragmentResultListener(REQUEST_DATE, viewLifecycleOwner, this)
    }

    override fun onStart() {
        super.onStart()
        val titleWatcher = object : TextWatcher {
            override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {
            }

            override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {
                crime.title = s.toString()
            }

            override fun afterTextChanged(s: Editable?) {
            }
        }
        titleField.addTextChangedListener(titleWatcher)

        dateButton.setOnClickListener {
            DatePickerFragment
                .newInstance(crime.date, REQUEST_DATE)
                .show(childFragmentManager, REQUEST_DATE)
        }

        solvedCheckBox.apply {
            setOnCheckedChangeListener { _, isChecked ->
                crime.isSolved = isChecked
            }
        }
    }

    private fun updateUI() {
        titleField.setText(crime.title)
        dateButton.text = crime.date.toString()
        solvedCheckBox.apply {
            isChecked = crime.isSolved
            jumpDrawablesToCurrentState()
        }
    }

    companion object {
        fun newInstance(crimeId: UUID): CrimeFragment {
            val args = Bundle().apply {
                putSerializable(ARG_CRIME_ID, crimeId)
            }
            return CrimeFragment().apply { arguments = args }
        }
    }

    override fun onStop() {
        super.onStop()
        crimeDetailViewModel.saveCrime(crime)
    }

    override fun onDateSelected(date: Date) {
        crime.date = date
    }

    override fun onFragmentResult(requestCode: String, result: Bundle) {
        when (requestCode) {
            REQUEST_DATE -> {
                Log.d(TAG, "received result for $requestCode")
                crime.date = DatePickerFragment.getSelectedDate(result)
                updateUI()
            }
        }
    }
}
You seem to be using the Fragment Result API, which requires two steps: first, you call setFragmentResultListener in the fragment where you want to receive the result (CrimeFragment in your case); second, you call setFragmentResult from the fragment that produces the result. You are missing the second step. To solve this, update OnDateSetListener as follows:
val dateListener = DatePickerDialog.OnDateSetListener { _: DatePicker, year: Int, month: Int, day: Int ->
    val resultDate: Date = GregorianCalendar(year, month, day).time
    val result = Bundle().apply {
        putSerializable("DATE", resultDate)
    }
    // Set the fragment result; this will invoke the `onFragmentResult` of CrimeFragment
    parentFragmentManager.setFragmentResult("requestKey", result)
}
Now in CrimeFragment register FragmentResultListener as
// use the same value of requestKey as specified in setFragmentResult
childFragmentManager.setFragmentResultListener("requestKey", viewLifecycleOwner, this)
Now update onFragmentResult to
override fun onFragmentResult(requestKey: String, result: Bundle) {
    when (requestKey) {
        "requestKey" -> {
            // get the date from the result bundle (it was stored with putSerializable)
            val date = result.getSerializable("DATE") as Date
            // do something with date
        }
    }
}
Apart from this, there is another approach you can take: use a shared ViewModel. Since both fragments share the activity, you can get the ViewModel associated with the activity.
get ViewModel in CrimeFragment and DatePickerFragment as
// Get view model associated with activity
private val crimeDetailViewModel: CrimeDetailViewModel by activityViewModels()
Now in OnDateSetListener of DatePickerFragment simply store the date in some ViewModel property
crimeDetailViewModel.date = // updated date from OnDateSetListener
after this you can access the crimeDetailViewModel.date inside CrimeFragment
Done. I also tried it with a ViewModel between posting and your response. Thank you very much.
If you use the navigation component from the jetpack components you can use the getPreviousBackStackEntry() to populate, from your picker dialog, the launcher fragment's bundle with the result of the picker, then when you return to your launcher fragment you read the result from the current backstack entry using getCurrentBackStackEntry.
You can reference this video for examples
If you do not use the navigation component, then you could use a shared viewmodel scoped to the Activity. You would get a reference to this shared viewmodel in both of your fragments. Then the date picker would populate some field in your shared viewmodel and the launcher fragment would read that field when it gets resumed.
thanks, but I am new and I will use the navigation component in the future. thanks for your response
One way is to use events, e.g. an event bus or Rx.
Can I test code contained in module files?
Generally, a module file contains hooks and doesn't contain any classes or namespaces. What I have studied so far suggests that PHPUnit testing is possible only when the code is contained in classes.
Can I write unit tests for module files?
See: https://drupal.stackexchange.com/a/267355/28265
Unit-testing procedural code
In principle, you can unit test procedural code just as easily as code that is in a class, but you can't really mock it. Therefore, whether your tested code is in a class doesn't matter so much as whether its dependencies are.
Even though procedural code cannot use dependency injection and has to access services via \Drupal::service(), you can put your mocks into a custom container and call \Drupal::setContainer():
$container = new ContainerBuilder();
// ... insert mocked services
\Drupal::setContainer($container);
That means if you call only class code like \Drupal::logger() or \Drupal\user\Entity\User::load() or even \Drupal::database(), you should be able to mock everything, load your .module file, then call all of the hooks and check that they return the correct values.
But if your code references procedural core constants or functions (e.g. REQUEST_TIME, db_query(), drupal_set_message(), watchdog_exception(), file_*(), user_load()), then those .inc/.module files must be included as well, along with any files used by that code. You'll quickly run into problems that way, as that code will assume it's running in a full Drupal instance. To test such code, a kernel test or a functional (i.e. browser) test will likely be required.
Unit-testing Hooks
With hooks, there is an extra caveat for unit-testing: You are declaring a function that you expect to be called from elsewhere, and which often doesn't do much other than altering and returning arrays.
You can test that as a unit, but because the contract of that code is so vaguely defined, it generally won't tell you much. Most of the errors in such code will only be revealed by testing the results in a functional test.
If there exist functions $f, g$ such that $f(g(x))=x$, can we say that $g(f(x))=x$ as well?
If there exist functions $f, g$ such that $$f(g(x))=x$$ can we say that $g(f(x))=x$ as well? I don't know, but it apparently appears to hold.
Just to clarify, my doubt lies in the fact whether it ALWAYS HOLDS or not.
Assume that $x\in \mathbb{R}$.
I am not specifying the domain and codomain of the Functions anymore as that may lead to some case wise discussion of the particular matter.
See also this question
Tangent and its inverse serve as counterexamples.
That is not true in general. Choose $g(x)$ to be some function that is injective but not surjective, and choose some $f(x)$ such that $f(g(x))=x$; this is possible since $g$ is injective.
Then the image of $g$ is not the entire codomain, so it is impossible that $g(f(x))=x$ for all $x$: any such $x$ would have to lie in the image of $g$.
However, your claim does hold if the domain is finite; This is because $f(g(x))=x$ implies $f$ surjective, $g$ injective. But over a finite domain this implies that $f$,$g$ are bijections, and from $f\circ{}g=id$ we have
$g\circ{}f\circ{}g=g\circ{}id=g$
taking inverse from the right, we get $g\circ{}f=id$.
A concrete example:
choose $g:\mathbb{R}\to\mathbb{R}$ as $g(x)=e^x$.
choose $f:\mathbb{R}\to\mathbb{R}$ as $f(x)=\ln(x)$ for all $x>0$, and $f(x)=42$ for all $x\le{}0$.
Then $g(x)>0$ for all $x\in\mathbb{R}$. So $f(g(x))=\ln(e^x)=x$ for all $x\in\mathbb{R}$.
On the other hand, $g(f(-1))=g(42)=e^{42}\ne{}-1$
@Mathbg $y=e^x\implies x=\ln y\implies f(x)=\ln x$
@Holo I think you've answered in someone else's answer. And please check, your answer isn't valid I believe.
@Mathbg I was answering your comment on this answer ("how to find $f(x)$"). I don't know why my answer got downvoted, but read about inverse functions; this is exactly what you are looking for.
@Holo I didn't give the downvote, though I understand there's a misconception between us. I know it holds for inverse functions but my point was: "DOES IT ALWAYS HOLD FOR ANY FUNCTION?"
@idok, can you please choose some possible $f,g$ as an example?
@Mathbg Yes, I added an example.
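The concrete example above is easy to check numerically. Here is a small Python sketch of it (the value 42 is the arbitrary filler from the answer):

```python
import math

def g(x):
    return math.exp(x)

def f(x):
    # A left inverse of g: defined arbitrarily (as 42) where g never lands
    return math.log(x) if x > 0 else 42.0

# f(g(x)) = x for every real x ...
for x in [-3.0, -1.0, 0.0, 2.5]:
    assert math.isclose(f(g(x)), x, abs_tol=1e-9)

# ... but g(f(x)) = x fails outside the image of g:
assert g(f(-1.0)) == math.exp(42)   # e^42, not -1
```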
Let $g:\Bbb R \to \Bbb R^2$ be the inclusion map and $f : \Bbb R^2 \to \Bbb R$ be the projection map. Then, $f(g(x)) = f(x,0) = x$, while $g(f(x,y)) = g(x) = (x,0) \ne (x,y)$.
Not all functions are like this, only invertible functions:
$f(x)=\ln x,g(x)=e^x$
$f(x)=x^3, g(x)=\sqrt[3]x$
$f(x)=\tan(x), g(x)=\arctan(x)$(this one is only over the domain $(-\pi/2,\pi/2)$)
And many more. An easy counterexample is $f(x)=x^2$: for this function, almost every value of $f(x)$ has two values of $x$ that give it, for example $x^2=4\implies x=\pm2$. Because a function can, by definition, have only one value, there is no $g(x)$ with $f(g(x))=g(f(x))=x$, although $\sqrt{x}^2=x$ (and $\sqrt{x^2}=|x|\ne x$). Another counterexample is $f(x)=\tan(x), g(x)=\arctan(x)$ over a larger domain: $\tan(\arctan(x))=x$, while $\arctan(\tan(x))$ is not equal to $x$ if $x$ is outside of $(-\pi/2,\pi/2)$.
Can you please explain something more? Take the first example, $g(f(x))=e^{\ln x}$
@Mathbg $e^x$ and $\ln x$ are inverse functions, means that $e^x=y\implies \ln y=x$, now if so we have $e^{\ln(y)}=e^x=y$ now we take the opposite direction: $\ln(e^x)=\ln y=x$, because $x$ and $y$ are arbitrary this implies that $\ln(e^x)=e^{\ln(x)}=x$
I think there's some miscommunication between us. I've edited my doubt for clarity
@Mathbg I edit my answer to fit the question, sorry I misunderstood the question
Great, I can upvote and neutralize the points. (I can't communicate with whoever has downvoted anyway)
@Mathbg I added another example that doesn't involve taking $\sqrt{\cdot}$ of negative numbers
Log4j2 always writing to the same file after rollover
I am trying to set up the log so that it rotates every minute. The date and timestamp work, but once the rollover fires, new entries are written to the previous minute's log file; i.e., it did not create a new log file for the next minute.
For example. In the first minute, entries are written to A2018-11-27 11:50.csv
On the next minute, it still writes to A2018-11-27 11:50.csv even though it has already created a rollover archive called 2018-11-27 11:50.csv.gz. It should create a new log file A2018-11-27 11:51.csv.
Any suggestion?
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="debug" monitorInterval="30">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d %-5p [%t] %C{2} (%F:%L) - %m%n"/>
        </Console>
        <RollingFile name="HR0" fileName="../logs/m/A${date:yyyy-MM-dd hh:mm}.csv" filePattern="../logs/m/AAA ${date:yyyy-MM-dd hh:mm}.csv">
            <CronTriggeringPolicy schedule="0 * * * * ?" />
        </RollingFile>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
        <Logger name="HR0" additivity="false" level="info">
            <AppenderRef ref="HR0" />
        </Logger>
    </Loggers>
</Configuration>
someJavafile.java
public class someJavafile {
private final Logger itsLoggerHR0 = LogManager.getLogger("HR0");
itsLoggerHR0.info("AAA");
}
I managed to figure it out based on this Jira ticket for Log4j2:
https://issues.apache.org/jira/browse/LOG4J2-1185
I will post my working solution here. I am using log4j2 2.11.1
The fix is to remove "fileName" and use %d instead of ${date} in your filePattern:
<RollingFile name="HR0" filePattern="../logs/measure/%d{yyyy-MM-dd hh:mm}.csv">
    <CronTriggeringPolicy schedule="0 * * * * ?" />
</RollingFile>
How to add vertical scrollbar to Jpanel which is having SpringLayout in Java?
I have the below snippet of code, in which TestClass extends JPanel and uses SpringLayout, and I'm not able to add a vertical scrollbar.
Could you please help me guys?
public class TestClass extends JPanel {

    private SpringLayout layout;
    private Spring s, sprLblEast;

    private JComboBox comboDevice = new JComboBox();
    private JComboBox comboCommand = new JComboBox();
    private JLabel lblDevice = new JLabel("Select the Device:");
    private JLabel lblCommand = new JLabel("Select Command:");
    private JButton btnCommand = new JButton("Save");

    public TestClass() {
        layout = new SpringLayout();
        s = Spring.constant(0, 60, 60);
        setLayout(layout);
    }

    public void populateFields() {
        add(lblCommand);
        add(comboCommand);

        sprLblEast = Spring.sum(s, Spring.max(layout.getConstraints(lblCommand).getWidth(), layout.getConstraints(lblDevice).getWidth()));
        Spring strut = Spring.constant(10);

        layout.putConstraint(SpringLayout.NORTH, lblCommand, strut, SpringLayout.SOUTH, comboDevice);
        layout.putConstraint(SpringLayout.NORTH, comboCommand, strut, SpringLayout.SOUTH, comboDevice);
        layout.putConstraint(SpringLayout.EAST, lblCommand, sprLblEast, SpringLayout.WEST, this);
        layout.putConstraint(SpringLayout.WEST, comboCommand, Spring.sum(s, layout.getConstraints(lblCommand).getWidth()), SpringLayout.WEST, lblCommand);
        layout.putConstraint(SpringLayout.EAST, this, sprLblEast, SpringLayout.WEST, comboCommand);

        List cmdList = getCommandList();
        for (int index = 0; index < cmdList.size(); index++) {
            comboCommand.addItem(cmdList.get(index));
        }
        validate();
        repaint();
    }
}
I would suggest using a ScrollPane:
ScrollPane mainWindow = new ScrollPane(ScrollPane.SCROLLBARS_AS_NEEDED);
mainWindow.add(this);
but you would have to use another object than your TestClass as the view; here it would be mainWindow.
You could also use JScrollPane from Swing: tutorial for JScrollPane
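A minimal sketch of that suggestion using Swing's JScrollPane (the class and method names here are illustrative, not from the original code):

```java
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.SwingUtilities;

public class ScrollDemo {

    // Wrap any panel (e.g. an instance of TestClass) in a scroll pane
    static JScrollPane wrap(JPanel view) {
        return new JScrollPane(
                view,
                JScrollPane.VERTICAL_SCROLLBAR_ALWAYS,
                JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            // Add the scroll pane (not the panel itself) to the frame
            frame.setContentPane(wrap(new JPanel()));
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```

The key point is that the scroll pane, not the panel, is what you add to the container; the panel becomes the scroll pane's viewport view.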
Trying to position text specifically with CSS, and allow it to move with the page
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
p.titletext
{
    font-family: "arial";
    font-size: 50px;
    position: relative;
    left: 425px;
    top: 10px;
}
</style>
</head>
<body>
<p class="titletext">Hello World.</p>
</body>
</html>
I'm asking whether there's anything that can handle text positioning and will actually keep the text in its position while the page is being expanded or contracted. In other words, I want the text position to adjust according to the page size. For example, if I make a new paragraph and align it to the center, the text will always stay in the center no matter how large or small the window is. Is there a way to accomplish this while setting the text position to your liking?
This can be done with text-align: center (and top: 50% if vertical centering is needed) on p.titletext; remove position: relative so it always stays centered.
this method will not work for elements that have display: inline
@Somebodyisintrouble sir, he didn't say that he is using display: inline
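A minimal sketch of the suggested fix, assuming the goal is horizontal centering that follows the window size:

```css
p.titletext {
  font-family: Arial, sans-serif;
  font-size: 50px;
  text-align: center; /* stays centered at any window width */
  margin-top: 10px;   /* replaces the fixed pixel offset */
}
```

Percentage-based values (e.g. left: 35%) behave similarly for positions other than the center, since they are resolved against the containing block's current size.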
Find largest number divided by which each element of vector is integer
This is my first question on math.stackexchange, and as you will notice, I am not a mathematician at all, and this may be a very simple question. Apologies.
I also don't know if I used the right terms in the title, so here is an example of what I am looking for (c() is from R; it concatenates into a vector, in this case of numerics or integers).
I am looking for a generalized way to find the largest number divided by which each element of a vector would result in an integer.
c(1, 3, 5) # expected result = 1
c(0.2, 0.4, 0.6) # expected result = 0.2
c(0.3, 0.5, 0.7) # expected result = ??? this is where I am stuck
Divide by $0.1$.
If you convert the numbers into fractions, it is easier to describe what to do in general. Are you familiar with $\gcd$ and $\operatorname{lcm}$?
@DietrichBurde ok yes, this works in this example, thanks, but is there a general way?
@Peter I see the reasoning behind it, but this is not ideal because in the end I want to use it in R, and R doesn't really use fractions. Maybe I should indeed ask this on Stack Overflow instead, but I thought it was more a mathematical problem.
@Peter no, I am not, unfortunately :(
OK, I will try to formulate a solution avoiding that. For all numbers, count the number of digits after the decimal point, and let $m$ be the maximum that occurs. Dividing by $10^{-m}$ gives integers. To find the best number, you then have to divide by the greatest common divisor of the resulting integers. For example: $3,6,9$ have $3$ as the greatest common divisor. Multiply $10^{-m}$ by this number to get the final result. To get the number in the second step, you can, for example, determine the divisors of the first integer and check them. If zeros occur, you can ignore them.
I hope this was helpful.
It absolutely is. I will check and try to implement it, but I am sure you should already post this as an answer!!!
@Peter Merajul has posted the very same solution, but about 20 minutes later. I would happily accept your answer if you would post it. Let me know.
@Tjebo I have enough points. It is OK, if you accept the answer below.
Thanks! And thanks again to both of you, I appreciate your help
I don't know about linear algebra that much too, but I think I can answer this question.
You have a vector $V$. Let's denote $V[i]$ as i'th element of $V$ and $L$ as the length of $V$. Now, you want to find a number let's say $n$. Now if you divide each $V[i]$ with n every element of $V$ would become an integer. Now, the simplest way that I can think of would be taking $n = 10^{-d}$ where you can set $d$ by examining $V$. Now let's denote a function $f(x)$ which returns the number of digits $n$ has after the decimal point. Some examples would be, $f(1.512) = 3$ because there are 3 digits after the decimal point in $1.512$ . Now we have to find the maximum value of $f(V[i])$ for $i \in [1, L] $. Now $d$ has to be equal to that maximum value.
Let's take the last example that you mentioned in your question: $V = (0.3, 0.5, 0.7)$.
$f(0.3) = 1$
$f(0.5) = 1$
$f(0.7) = 1$
So, the maximum of those three values is $1$, so $d = 1$ and $n = 10^{-1}$. Now we have to divide each element of $V$ by $n$. After that we get $V' = (3, 5, 7)$.
Now you wanted the largest $n$, so we have to take another step: multiply $n$ by $\gcd(V')$, where $\gcd$ means greatest common divisor and $\gcd(V')$ means the $\gcd$ of all the elements of $V'$. Now you have the greatest $n$ by which, if the elements of $V$ are divided, they all result in integers.
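For reference, the same procedure can be sketched in code. This Python version uses exact fractions, which is equivalent to the $10^{-d}$-then-$\gcd$ steps above but avoids floating-point round-off (the function name is illustrative):

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm

def largest_common_unit(values):
    """Largest n such that every v / n is an integer.

    For rationals p_i/q_i this is gcd(p_i) / lcm(q_i)."""
    fracs = [Fraction(str(v)) for v in values]  # str() keeps 0.3 exact as 3/10
    num = reduce(gcd, (f.numerator for f in fracs))
    den = reduce(lcm, (f.denominator for f in fracs))
    return Fraction(num, den)

print(largest_common_unit([1, 3, 5]))        # 1
print(largest_common_unit([0.2, 0.4, 0.6]))  # 1/5
print(largest_common_unit([0.3, 0.5, 0.7]))  # 1/10
```

(math.lcm requires Python 3.9+.) In R, the analogous approach would be to scale by 10^d and take the gcd of the resulting integers.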
That's a very nice solution, thanks for your help. Peter had posted the exact same solution as a comment roughly 20 minutes before you (not much time, you must have had the same idea :). However, if Peter decides to post this as an answer, I would accept that one.
Yeah, I saw Peter's solution after I finished writing mine. He and I had the exact same idea, I think. But he was faster, so I also think you should accept his. And you got your answer, that's all that matters.
converting list to an array of fixed size
Could anyone please explain to me why this code snippet works?
Object[] op = new Object[0];
ArrayList r = new ArrayList();
r.add("1");
r.add("3");
r.add("5");
r.add("6");
r.add("8");
r.add("10");
op = r.toArray();
System.out.println(op[3]);
This prints out 6. I know that you can convert a list to an array, but I was thinking that if the array is of fixed size, then you can't add further elements. In this case the array op has fixed size 0, so why/how are the list elements being added to the array? Thanks
You need to distinguish between the reference to your array object (that is Object[] op) and the actual array object to which the reference points.
With
Object[] op = new Object[0];
you are creating an array of size 0 and assign it to the op reference.
But then, with
op = r.toArray();
you are assigning a new array object to the op reference. This new array object has been created by the toArray() method with the appropriate size.
The earlier array object, which was created with new Object[0], is now unreferenced and subject to garbage collection.
oh yes. I totally forgot that the array variables are also reference variables. Thanks
You misunderstood one important thing here.
Java identifiers are only references to objects, not the objects themselves.
Here when you do
Object[] op = new Object[0];
you create a new instance array with a fixed size of 0, and you point the identifier "op" to it.
But when you later do
op = r.toArray();
you just overwrite where your former identifier point to. You lose the reference to your first array that will be garbaged collected.
"op" now designates a new array; your former one just disappears.
For the same reason that this code prints out X instead of ABC:
String s = "ABC";
String t = "XYZ";
s = t.substring(0, 1);
System.out.println(s);
You're reassigning the value of op, and the new value has nothing to do with the old value.
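The reference-versus-object distinction can be made visible by keeping a second reference to the original array; a small sketch (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        Object[] op = new Object[0];
        Object[] original = op;          // second reference to the same length-0 array

        List<String> r = new ArrayList<>();
        r.add("1");
        r.add("3");
        r.add("5");

        op = r.toArray();                // op now points at a brand-new length-3 array

        System.out.println(original.length); // 0  -> the old array was never resized
        System.out.println(op.length);       // 3
        System.out.println(op == original);  // false -> two distinct array objects
    }
}
```

The length-0 array is never grown; toArray() allocates a fresh array of the right size and the assignment just repoints op at it.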
The transformer core is running much hotter than our calculations predicted, Michael. The hysteresis losses should be manageable at this frequency.
That's because you're using the static loop data, Steven. The actual loop area changes dramatically when you drive the material at higher frequencies.
But hysteresis is hysteresis, isn't it? The coercive field and saturation should remain the same regardless of how fast we cycle the field.
Not in conducting materials like this silicon steel. The changing magnetic field induces eddy currents that create their own opposing field. The faster you change the applied field, the stronger these eddy currents become.
So the effective field inside the material lags behind the applied field?
Exactly. And that lag creates additional area inside the hysteresis loop. The material appears to have higher coercivity and increased losses, but it's really the eddy currents fighting the field changes.
This explains why ferrite cores work better at high frequencies - they're electrical insulators, so no eddy currents.
Right, but there's something deeper here. Even in ferrites, we see rate dependence from domain wall dynamics. The walls can't keep up with rapid field changes, creating viscous-like behavior.
I've noticed that if I plot our measurement data with a modified field parameter that accounts for both the instantaneous field and its time derivative, the curves from different frequencies collapse onto a single master curve.
That suggests there's a fundamental scaling relationship. The rate dependence isn't just a parasitic effect - it reveals the underlying relaxation mechanisms in the material.
The differential susceptibility must include both irreversible domain switching and reversible domain wall bending, each with their own time constants.
And the total energy dissipated per cycle scales with the enclosed loop area. So by understanding the rate dependence, we can predict the power losses and thermal behavior across the entire frequency spectrum.
This means we need to redesign our core geometry to minimize eddy current paths while optimizing the domain structure for the operating frequency range. | sci-datasets/scilogues |
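The loop-area statement above can be made concrete with a small numerical sketch (pure Python; an idealized elliptical B-H loop with a phase lag standing in for eddy-current delay, not real material data):

```python
import math

def loop_energy(H, B):
    """Energy dissipated per unit volume per cycle, W = closed integral of H dB,
    computed as the shoelace area of the sampled (H, B) loop."""
    n = len(H)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n
        s += H[i] * B[j] - H[j] * B[i]
    return abs(s) / 2.0

# Idealized loop: B lags H by a phase phi (a stand-in for eddy-current delay).
# Larger phi -> fatter loop -> more energy lost per cycle.
H0, B0, phi, N = 1.0, 1.0, 0.3, 10000
ts = [2.0 * math.pi * k / N for k in range(N)]
H = [H0 * math.cos(t) for t in ts]
B = [B0 * math.cos(t - phi) for t in ts]

w = loop_energy(H, B)   # analytic value for this loop: pi * H0 * B0 * sin(phi)
```

Power loss then scales as the drive frequency times this per-cycle area, which is why rate-dependent loop widening matters for the thermal budget.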
C++ function returning 'inf' instead of double
I have the following simple code that computes the nth harmonic number. No matter what I try I keep getting an 'inf' value in the output. How is this possible, even if all my variables are doubles?
#include <cstdlib>
#include <iostream>

using namespace std;

double harmonic(double n){
    double h = 0.0;
    while(n >= 0){
        h = h + (1.0/n);
        n = n-1.0;
    }
    return(h);
}

int main(int argc, char** argv) {
    double n;
    cout << "enter an integer: ";
    cin >> n;
    cout << "The " << n << "th harmonic number is: ";
    cout << harmonic(n) << endl;
    return 0;
}
inf is a value of type double, just like 1.0 is.
If you stepped through this in a debugger, it might become obvious what is happening (debugging is a critical programming skill).
@MarkRansom Silly, inf is a floating point value, but in double it is pronounced "inf inf".
Curious, why not use int for n? You're treating it as an integer.
Think about this:
while(n >= 0){
    h = h + (1.0/n);
    n = n-1.0;
}
Say I passed in n = 0.0. The loop condition n >= 0 still holds, so the body executes and you are performing a division by zero.
inf is a special floating point value, arising, for example, from division by zero. The latter indeed happens in your program: when n reaches zero, your loop still continues and you try to divide 1.0 by zero.
Change your loop to while (n>0).
n could still be greater than zero and 1/n would still produce inf for sufficiently small n.
@sjdowling, good point, though I really do think OP intended n to be int, not double; otherwise it is not clear at all what output they are expecting to obtain.
| common-pile/stackexchange_filtered |
Complex animations in Android (transitions, backgrounds...)
I have in mind one simple application, but I would like to add some animations, transitions and so on. What technologies should I use besides Android SDK?
Concrete example: I have an activity with an animated background in constant loop (some waves, fancy shadows and graphics - maybe do it in Flash and import is or...?) and I have a big TextView in front. When user taps on screen - text explodes or burns or something like that and new text reappear. If I click on some button it also provides some fancy animation.
Should I use AndEngine or...?
I would recommend you use libgdx: http://code.google.com/p/libgdx/
It has a bit of a learning curve, but it's really flexible.
Laravel Echo with Soketi Not Broadcasting Events - Configuration Issue?
I want to trigger an event when a user's feed (of articles) is updated. When the feed is updated, a FeedGenerated event is broadcast, and I would like my front-end to catch the event to allow the user to manually refresh their feed.
The FeedGenerated event is listened for by the UpdateFeedArticles listener. This listener just logs something for now (and it logs successfully).
The event is broadcast as follows:
FeedGenerated::broadcast($user);
The FeedGenerated event:
<?php

namespace App\Providers;

use App\Models\User;
use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithBroadcasting;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class FeedGenerated implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets, SerializesModels, InteractsWithBroadcasting;

    /**
     * The name of the queue connection to use when broadcasting the event.
     *
     * @var string
     */
    public string $connection = 'redis';

    /**
     * The name of the queue on which to place the broadcasting job.
     *
     * @var string
     */
    public string $queue = 'default';

    /**
     * Create a new event instance.
     */
    public function __construct(public User $user)
    {
        //
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return array<int, \Illuminate\Broadcasting\Channel>
     */
    public function broadcastOn(): array
    {
        return [
            new Channel("user"),
        ];
    }
}
The UpdateFeedArticles listener:
<?php

namespace App\Providers;

use App\Providers\FeedGenerated;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class UpdateFeedArticles
{
    /**
     * Create the event listener.
     */
    public function __construct()
    {
        //
    }

    /**
     * Handle the event.
     */
    public function handle(FeedGenerated $event): void
    {
        \Log::debug('Event triggered', ['event' => $event, 'listener' => $this]);
    }
}
Laravel Echo instantiation:
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';

window.Pusher = Pusher;

window.Websocket = new Echo({
    broadcaster: 'pusher',
    key: import.meta.env.VITE_PUSHER_APP_KEY,
    cluster: import.meta.env.VITE_PUSHER_APP_CLUSTER,
    wsHost: import.meta.env.VITE_PUSHER_HOST,
    wsPort: import.meta.env.VITE_PUSHER_PORT,
    wssPort: import.meta.env.VITE_PUSHER_PORT,
    forceTLS: false,
    encrypted: false,
    disableStats: true,
    enabledTransports: ['ws', 'wss'],
});

Websocket.connector.pusher.connection.bind('connected', () => {
    console.log('✅ Soketi Connected!');
});

Websocket.connector.pusher.connection.bind('disconnected', () => {
    console.log(' Soketi Disconnected!');
});

Websocket.connector.pusher.connection.bind('failed', () => {
    console.log('❌ Soketi Connection Failed!');
});
Here is my docker-compose.yml:
version: '3'

networks:
  network_shapes:
    ipam:
      driver: default
    driver: bridge

services:
  phpfpm:
    container_name: '${APP_NAME}'
    build:
      args:
        user: '${WWWUSER}'
        uid: '${WWWGROUP}'
      context: ./docker
      dockerfile: Dockerfile
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
      - 'tmpfiles:/tmp'
    networks:
      - network_shapes
    depends_on:
      - mariadb
    restart: unless-stopped

  soketi:
    image: 'quay.io/soketi/soketi:latest-16-alpine'
    environment:
      SOKETI_DEBUG: '${SOKETI_DEBUG:-1}'
      SOKETI_METRICS_SERVER_PORT: '${SOKETI_METRICS_SERVER_PORT}'
    ports:
      - '${SOKETI_PORT:-6001}:6001'
      - '${SOKETI_METRICS_SERVER_PORT:-9601}:9601'
    networks:
      - network_shapes
    depends_on:
      - redis

  nginx:
    image: ${NGINX_IMAGE}
    networks:
      - network_shapes
    ports:
      - '<IP_ADDRESS>:${APP_PORT}:80'
    volumes:
      - ./docker/nginx/tkt.conf:/etc/nginx/conf.d/default.conf
      - ./:/var/www/html
      - 'tmpfiles:/tmp'
    links:
      - phpfpm
    depends_on:
      - mariadb
    restart: unless-stopped

  mongo:
    ...

  redis:
    image: 'redis:latest'
    ports:
      - '<IP_ADDRESS>:${REDIS_PORT:-6379}:6379'
    volumes:
      - 'redis_volume:/data'
    networks:
      - network_shapes
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
      retries: 3
      timeout: 5s
    restart: unless-stopped

  mariadb:
    ...

  mailhog:
    ...

volumes:
  tmpfiles:
    driver: local
  mariadb_volume:
    driver: local
  redis_volume:
    driver: local
Here is my Nginx tkt.conf
server {
    listen 80;
    server_name localhost www.localhost .localhost *.localhost;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/public;
    client_max_body_size 100M;

    location /mailhog/ {
        proxy_pass http://mailhog:8025/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
        chunked_transfer_encoding on;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering off;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass phpfpm:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location /websocket/ {
        proxy_pass http://<IP_ADDRESS>:6001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400; # Adjust to your needs
    }
}
The situation
When running the command (that broadcasts the event):
docker exec -t shapppes php artisan feed:generate 01h739jr4vk8pq3yefsp22c964
Scoring articles for Sonny Stroman...
Feed created.
Horizon logs the following:
docker exec -t shapppes php artisan horizon
Horizon started successfully.
2023-08-07 13:04:56 App\Actions\Article\Feed\GenerateFeed .......... RUNNING
2023-08-07 13:04:57 App\Actions\Article\Feed\GenerateFeed .... 899.66ms DONE
2023-08-07 13:04:57 App\Providers\FeedGenerated .................... RUNNING
2023-08-07 13:04:57 App\Providers\FeedGenerated ............... 61.82ms DONE
Redis is successfully querying (seen via Laravel Telescope)
eval -- Push the job onto the queue...
redis.call('rpush', KEYS[1], ARGV[1])
-- Push a notification onto the "notify" queue...
redis.call('rpush', KEYS[2], 1) 2 queues:default queues:default:notify {"uuid":"bf2e429e-7702-4758-bedd-d072914a3215","displayName":"App\\Providers\\FeedGenerated","job":"Illuminate\\Queue\\CallQueuedHandler@call","maxTries":null,"maxExceptions":null,"failOnTimeout":false,"backoff":null,"timeout":null,"retryUntil":null,"data":{"commandName":"Illuminate\\Broadcasting\\BroadcastEvent","command":"O:38:\"Illuminate\\Broadcasting\\BroadcastEvent\":14:{s:5:\"event\";O:27:\"App\\Providers\\FeedGenerated\":1:{s:4:\"user\";O:45:\"Illuminate\\Contracts\\Database\\ModelIdentifier\":5:{s:5:\"class\";s:15:\"App\\Models\\User\";s:2:\"id\";s:26:\"01h739jr4vk8pq3yefsp22c964\";s:9:\"relations\";a:4:{i:0;s:4:\"page\";i:1;s:14:\"page.bookmarks\";i:2;s:10:\"page.image\";i:3;s:4:\"tags\";}s:10:\"connection\";s:5:\"mysql\";s:15:\"collectionClass\";N;}}s:5:\"tries\";N;s:7:\"timeout\";N;s:7:\"backoff\";N;s:13:\"maxExceptions\";N;s:10:\"connection\";N;s:5:\"queue\";N;s:15:\"chainConnection\";N;s:10:\"chainQueue\";N;s:19:\"chainCatchCallbacks\";N;s:5:\"delay\";N;s:11:\"afterCommit\";N;s:10:\"middleware\";a:0:{}s:7:\"chained\";a:0:{}}"},"telescope_uuid":"99d54a11-c8fc-489f-8676-a575ab0466bf","id":"bf2e429e-7702-4758-bedd-d072914a3215","attempts":0,"type":"broadcast","tags":["App\\Models\\User:01h739jr4vk8pq3yefsp22c964"],"silenced":false,"pushedAt":"1691413497.6378"}
But nothing appears to be caught by Soketi (no Docker log).
How to fix this issue:
Use your Docker service name with Laravel (soketi in my case), but use <IP_ADDRESS> with Laravel Echo (JS).
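In env-file terms the fix looks roughly like this. This is a hedged sketch: PUSHER_HOST/PUSHER_PORT follow the stock Laravel broadcasting config, and the VITE_* names match the Echo snippet above; adjust to whatever your config/broadcasting.php actually reads:

```
# Laravel (server side, inside the compose network): the service name resolves
PUSHER_HOST=soketi
PUSHER_PORT=6001

# Laravel Echo (browser side): Docker service names do not resolve here,
# so point at the host's address
VITE_PUSHER_HOST=<IP_ADDRESS>
VITE_PUSHER_PORT=6001
```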
Why doesn't the Unix command work in the Perl script?
I want to extract part of a text file starting at a certain pattern and store it in another file. My Perl script takes a single argument as the input text file. So far, I have:
my $INPUT = $ARGV[0];
my $LINES_TO_DUMP = 4000;
my $startline = `egrep -n "^PATTERN" $INPUT | head -1 | cut -f1 -d:`;
# THIS LINE DOESN'T WORK
system("tail +$startline $INPUT | head -$LINES_TO_DUMP > extracted.txt");
When I run my program, it gets 'stuck' running the second command (the egrep command works, and stores the correct number). I've run the command in my terminal to make sure it works.
What is going wrong?
But when I run it in the terminal it takes only half a second?
You might want to sanitize your $INPUT. The way you are using it now, it could be used to cause a lot of damage. Imagine a user passes something like | rm -rf /. Make sure it's a file name, maybe with die 'wrong argument' if $INPUT =~ m/[|]/;. Also do not try this with my sample input, it might wipe your system!!!
You're right, I really need to add proper error checking/handling...I'm always lazy and don't add it until the end when I absolutely must
You could also use Getopt::Long or similar. Or taint mode. That would help a lot here.
It's likely that your $startline has a newline in it, as it's consumed from command output. You should confirm this and then use chomp() on $startline prior to your system() call.
Here's the perldoc for chomp as suggested.
This seems like it should be a comment.
@HunterMcMillen Comments require reputation 50.
@razor What you posted is not an answer in its current form. You could do several things to make it answer-worthy. e.g: look at the page @simbabque linked. You could also link to the perldoc page for chomp.
In addition to what @Hunter said, you can also rephrase it to sound more like you know what you are talking about. That usually gives people who read your answer more confidence and they are more likely to trust you (and upvote).
I can't accept it for another 3 minutes, but I will when I can
Also don't "respond" to comments in your answer. Just edit it so that new readers see it as a good source of information. Just leave a comment saying something like "thanks for the suggestion @hunter, I added that to the answer", upvote the comment and move on. That way you will create a high-quality answer that half a year later someone new instantly understands.
Thanks for the suggestions to a newbie. Advice taken.
I would suggest having the chomp link to the perldoc, dropping the line with "Here's the...", and instead having an example. That would be along the lines of my $startline = `egrep ...`; chomp $startline;. But in a block. :)
Embedding depending on URL
I am trying to do something that seems as if it should be simple but it's baffling me!
I have a template page that needs to embed something different depending on the URL.
There are three possible cases.
Case one is that segment_2 is blank. In this case I need to embed option A. Simple enough I think.
In Case two, segment_2 is not blank and contains the url_title of one of the products in this channel. I know the channel. I need to embed option B.
In case three, segment_2 is not blank and contains something else that is NOT the url_title of one of my products in this channel. Again I know the channel. I need to embed option A - the same as in the first case.
I have tried putting an if inside the exp:channel:entries tag pair, comparing {url_title} and {segment_2} - that doesn't seem to work.
Rather than having anyone try to debug my code, how would anyone advise doing this? I am aware that I could be arranging my site in a different way, but I am where I am!
Here's a no add-on way of doing this. I'm assuming you're not trying to get pagination going. That would change things.
{if segment_2 == ""}option A embed or direct code{/if}
{if segment_2 != ""}
    {exp:channel:entries channel="products" url_title="{segment_2}" require_entry="yes" ... }
        option B embed passing all variables necessary - or just put code here directly
        {if no_results}option C embed{/if}
    {/exp:channel:entries}
{/if}
Thank you very much Stephen! I knew there had to be a simple way. I'll delete all the php variables I had there to try and get it going!
How do I restrict network access to LAN for visitors
How do I restrict network access to LAN for visitors?
In my organization's premises any outside visitor can come and connect to the LAN ports in our meeting rooms. We have seen that they are able to ping internal systems. How can I prevent this issue?
The solution really depends on what your environment is, and what you're willing to put into place. There are a few solutions that come into mind, including the ones mentioned in other answers here:
Separate VLANs for public meeting spaces, and internal workstations. The downside of this is that a smart attacker can perform a VLAN hop (depending on the implementation), and end up in your workstation VLAN.
802.1X, which requires you to implement a RADIUS server, and a fair bit of management on user workstations to ensure that they have the right authentication profiles and certificates to talk to your network.
Network Access Control (or in Cisco language, Network Admission Control) which lets you authenticate users using LDAP, based on MAC address, or other mechanisms depending on your NAC implementation.
Of these three, VLAN is probably the cheapest in terms of effort on your part in configuring the environment - you don't have to do anything on user workstations, and you just have to ensure a proper segmentation of the network. 802.1x is more expensive in terms of effort and adoption, but still cheaper than a NAC solution. The NAC solution is probably going to be the most effective, and most expensive of the three, especially if you get a solution that doesn't need to deploy an agent on the workstations. It will still require some user training depending on how you implement it, but it is much harder to bypass a NAC control than it is to bypass VLAN restrictions.
What if isolated visitors try to connect to ports located at user workstation instead of meeting rooms. Will VLAN be able to restrict access.
No, VLANs won't restrict access. A user workstation port will be connected to the work station VLAN - VLANs have nothing to do with the computer itself, but the port on which the computer is connecting. However, NAC/802.1X can help you isolate visitors in this scenario.
The simplest solution is a VLAN, a virtual or logical LAN. With VLANs you can segregate your physical LAN into different logical segments, e.g. department-based, user-based, or application-based. You can build as many virtual LANs as your switch permits, and these VLANs will be virtually isolated from each other. So what I suggest is to place your meeting-room ports on a separate VLAN; this will isolate visitors from accessing any of the internal workstations.
Edit:
VLANs are generally subject to VLAN hopping attacks, i.e. switch spoofing and double tagging. The reason behind these attacks is mainly a misconfigured switch, e.g. an access port being configured as a trunk port. You can review a good assessment of the security of VLANs in this question:
Why do people tell me not to use VLANs for security?
A necessary condition: router supports VLANs :)
I agree, but with an unmanaged switch you can do nothing, though it is cost-effective.
Sometimes commercial routers have some out-of-the-box security features. But in fact, without the router's name we can say nothing.
If a router supports this, this could definitely work...except that most advanced users can just do a VLAN hop and get to the workstation VLAN.
@KarthikRangarajan how practical is a VLAN hopping attack if the switch is properly configured, i.e. per the guidelines provided by the vendor?
@AliAhmad Not super practical, but I've been to too many places where the switch is improperly configured and/or double tagging bypasses the restrictions that are configured.
Implement port security (802.1X); this is a set of techniques for placing access control on network ports.
Sorry, bad link. Should have been more careful.
please provide proper link, I will edit the post.
golang: cannot recover from Out Of Memory crash
Under certain circumstances, calling append() triggers an out of memory panic and it seems append() itself doesn't return nil.
How could I avoid that panic scenario and show to my user "Resource temporary unavailable" ?
Best regards,
You can't.
If the runtime can't allocate memory for append, it may not be able to recover, or communicate "Resource temporary unavailable" to the user. For example, GC might need to allocate to clean up, or the scheduler might be trying to allocate a new thread. Because there's no way to strictly control allocations in a Go program, there's no way to gracefully handle running out of memory.
All OOM conditions terminate a Go program.
This question makes me curious, there must be some way to handle this more gracefully. What about preemptively checking the systems memory conditions and giving the warning before you get to the point of panic?
@evanmcdonnal: How would you preemptively check? What you consider "used" may not be what kernel considers used. You can't rely on malloc's return, because it always gives you a pointer when overcommit is on. The kernel may also free up memory for you when needed, causing the "check" to return false positives. (and that's just on Linux, you'd have to make this cross-platform)
I'm not sure, it was just an idea for how to improve the application. If I knew of a package that could reliably tell me where my memory usage was relative to the maximum I'm allowed by the runtime then I would have used it to write an answer attempting what I described. There may be no reliable way to do it. However, I personally would take false positives/not perfect error prevention over ugly unrecoverable errors. The bar for 'just good enough' isn't set very high in these circumstances.
The correct thing to do in an OOM is to let the process die and be restarted by the watchdog - because you do have something (e.g. systemd) monitoring your production process for crashes right? It would just bite you eventually if you don't, probably on a Friday night. I go a step further and have my server process kill itself once a day, this ensures the watchdog is working correctly, and bad states are cleaned up like fragmented memory.
How do I receive SNMP traps on OS X?
I need to receive and parse some SNMP traps (messages) and I would appreciate any advice on getting the code I have working on my OS X machine. I have been given some Java code that runs on Windows with net-snmp. I'd like to either get the Java code running on my development machine or whip up some Python code to do the same.
I was able to get the Java code to compile on my OS X machine and it runs without any complaints, including none of the exceptions I would expect to be thrown if it was unable to bind to socket 8255. However, it never reports receiving any SNMP traps, which makes me wonder whether it's really able to read on the socket. Here's what I gather to be the code from the Java program that binds to the socket:
DatagramChannel dgChannel1 = DatagramChannel.open();
Selector mux = Selector.open();
dgChannel1.socket().bind(new InetSocketAddress(8255));
dgChannel1.configureBlocking(false);
dgChannel1.register(mux, SelectionKey.OP_READ);

while (mux.select() > 0) {
    Iterator keyIt = mux.selectedKeys().iterator();
    while (keyIt.hasNext()) {
        SelectionKey key = (SelectionKey) keyIt.next();
        if (key.isReadable()) {
            /* processing */
        }
    }
}
Since I don't know Java and like to mess around with Python, I installed libsnmp via easy_install and tried to get that working. The sample programs traplistener.py and trapsender.py have no problem talking to each other but if I run traplistener.py waiting for my own SNMP signals I again fail to receive anything. I should note that I had to run the python programs via sudo in order to have permission to access the sockets. Running the java program via sudo had no effect.
All this makes me suspect that both programs are having problem with OS X and its sockets, perhaps their permissions. For instance, I had to change the permissions on the /dev/bpf devices for Wireshark to work. Another thought is that it has something to do with my machine having multiple network adapters enabled, including eth0 (ethernet, where I see the trap messages thanks to Wireshark) and eth1 (wifi). Could this be the problem?
As you can see, I know very little about sockets or SNMP, so any help is much appreciated!
Update: Using lsof (sudo lsof -i -n -P to be exact) it appears that my problem is that the java program is only listen on IPv6 when the trap sender is using IPv4. I've tried disabling IPv6 (sudo ip6 -x) and telling java to use IPv4 (java -jar bridge.jar -Djava.net.preferIPv4Stack=true) but I keep finding my program using IPv6. Any thoughts?
java 16444 peter 34u IPv6 0x12f3ad98 0t0 UDP *:8255
Update 2: Ok, I guess I had the java parameter order wrong: java -Djava.net.preferIPv4Stack=true -jar bridge.jar puts the program on IPv4. However, my program still shows no signs of receiving the packets that I know are there.
Ok, the solution to get my code working was to run the program as java -Djava.net.preferIPv4Stack=true -jar bridge.jar and to power cycle the SNMP trap sender. Thanks for your help, Brian.
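The IPv4-vs-IPv6 binding issue is easy to reproduce with a plain stdlib socket. Binding with AF_INET forces an IPv4 UDP listener, analogous to -Djava.net.preferIPv4Stack=true on the JVM (a sketch using loopback and a fake payload, not real SNMP trap encoding):

```python
import socket

# An explicitly-IPv4 UDP listener: AF_INET means it can never end up on *:port IPv6.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
host, port = listener.getsockname()

# Simulate a trap sender on the same machine.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fake-trap", (host, port))

data, addr = listener.recvfrom(4096)
print(data)        # b'fake-trap'
listener.close()
sender.close()
```

If the listener were IPv6-only (as `lsof` showed for the Java process above), the IPv4 sender's packets would simply never arrive.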
The standard port number for SNMP traps is 162.
Is there a reason you're specifying a different port number ? You can normally change the port number that traps are sent on/received on, but obviously both ends have to agree. So I'm wondering if this is your problem.
I don't know, though I've been told the SNMP packets are somehow non-standard, so this could be part of that...
Regardless of whether the packets are 'standard', you should still see packets of some type incoming in the above code. If you're not then I suspect a networking-related issue e.g. ports
Yes, that's what I suspect too. My hope is that someone here has experience with access ports on OS X via java or python and can point me in the right direction.
Looking for Zigbee and Microcontroller
Looking for Zigbee and Microcontroller - Selection
What series of Zigbee wireless should I get?
SAM R21 Xplained Pro for the MCU, to connect some sensors and Zigbee.
Should the wireless adapter be something special?
This is my first project so I have a lot to learn. Thank you all.
First project? In electronics? Or with a Zigbee? As it stands this is a "shopping" question, with no research done, and will be closed. Googling "zigbee wireless" returned 424,000 hits. If you edit the question to be more specific, it may be salvageable.
You do have a lot to learn - we all always do the same, no worries on that :)
This is a project which can raise the eyebrows of senior engineers. 1-2 km with Zigbee is very, very hard to reach, even in full line of sight - at least I have very bad experiences even with dedicated long-range Zigbee modules.
In order to reach that, you should do a little research on antenna theory and use directional antennas - which rules out the PC plug-in wireless adapter altogether.
Moreover, it seems that you made a few premature decisions (for example, picking Zigbee without considering other, better solutions, picking the microcontroller first without considering the easiest and best match for this goal). You may have perfect reasonings for picking these - but please clearly specify, why.
For example, I could do this project as:
buy a 4$ ESP8266 module with external antenna
get or make a cantenna - directional access, 1-2 km is doable
apply a cantenna to the esp and to a simple off-the-shelf wifi router, so you have a wifi link
pick a 10$ Arduino and refer to the endless esp8266 tutorials on the web.
If you are not a beginner, I would also suggest using a LoRa ISM radio module, which allows you to easily pass this distance without too much antenna magic, and you could pick a standalone microcontroller and use its power savings mode. With 2 AA batteries, the whole sensor node works for many months using ESP8266 and standard Wifi.
Java - How to write my own exception which looks for a particular error
I've seen other questions on here that ask how to implement your own exception but don't specify how to check for an exception. What I mean by that is, for example, I want an exception to occur when a number entered is the number 10. How would I write my own exception to check if the number is 10 and throw an exception if it is.
Thanks in advance!
if (input == 10) throw new MyWhateverException()
"How would I write my own exception to check..." -- You don't, and your main problem could be that you've got things conceptually backwards. The exception doesn't "check" for anything, but rather is thrown by non-exception code that discovers the problem.
Throwing a custom Exception in your code:
if(input == 10){ throw new WrongNumberException("You have entered 10");}
Creating Your custom Exception class:
class WrongNumberException extends Exception {

    public WrongNumberException()
    {
        super();
    }

    public WrongNumberException(String message)
    {
        super(message);
    }
}
Key points
1.) Extend the RuntimeException class or the Exception class itself.
2.) Check for input, if 10 throw an exception with proper message.
public class ValidationException extends RuntimeException {

    public ValidationException() {
        super();
    }

    public ValidationException(String message) {
        super(message);
    }

    public ValidationException(String message, Throwable cause) {
        super(message, cause);
    }

    public ValidationException(Throwable cause) {
        super(cause);
    }
}
Main class
if(input == 10){ throw new ValidationException("Number Entered is 10");}
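Putting the pieces together, a minimal self-checking sketch (class and message names follow the answer above; note the check for the "bad" value lives in ordinary code, while the exception only carries the message):

```java
public class ValidationDemo {

    // Unchecked custom exception, as defined in the answer above
    static class ValidationException extends RuntimeException {
        ValidationException(String message) {
            super(message);
        }
    }

    // The "check" is normal code that throws when it finds the bad value
    static void validate(int input) {
        if (input == 10) {
            throw new ValidationException("Number Entered is 10");
        }
    }

    public static void main(String[] args) {
        validate(7);   // fine, nothing thrown

        boolean caught = false;
        try {
            validate(10);
        } catch (ValidationException e) {
            caught = e.getMessage().equals("Number Entered is 10");
        }
        if (!caught) {
            throw new AssertionError("expected ValidationException for input 10");
        }
        System.out.println("Caught the exception as expected");
    }
}
```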
How to query a user by username on controller level?
In twig I can do:
{% set user = craft.users.username('username') %}
How to do that in controller?
Thank you for your help.
Query for it like
use craft\elements\User;
$user = User::find()->username('username')->one();
You can also get the currently logged-in user like:
use Craft;
$currentUser = Craft::$app->getUser()->getIdentity();
How should variable data related to a point be stored in a database table?
I've been having a bit of trouble understanding how a dataset is built/managed for GIS applications.
Over the past few weeks (part time) I've been trying to understand the basics of GIS, and how it works with the likes of GeoServer, QGIS, etc... in terms of loading in layers and displaying them, but not the style editor yet, although that looks straightforward enough.
However I'm at a loss as to how the variable data should be stored.
For example, while experimenting I added a base map layer to QGIS and added a new layer with a few points on it. I was able to add some parameters to those points, such as an Id and a state boolean.
In QGIS I could add rules then, like if state is true set the point red, otherwise set it green and they would work as expected.
However, in my situation, where I have a database full of node Id's and another database full of GPS co-ordinates, I'm not sure how to format that information in order for GeoServer to be able to parse it properly from my PostgreSQL database.
At first I thought that the points would be stored in geometries and the related dynamic data would be stored in another table linked with an ID or in the same table in an new column.
In order to try deduce this further I exported my points from QGIS in GeoJSON format and got this:
{
"type": "FeatureCollection",
"name": "point_test",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": [
{ "type": "Feature", "properties": { "id": 1, "upgrading": 1, "nodeId": 123456 }, "geometry": { "type": "Point", "coordinates": [ 13.09283649916585, 55.808940217262837 ] } },
{ "type": "Feature", "properties": { "id": 2, "upgrading": 0, "nodeId": 654321 }, "geometry": { "type": "Point", "coordinates": [ 13.030371871652321, 55.959245727217265 ] } },
{ "type": "Feature", "properties": { "id": 3, "upgrading": 0, "nodeId": 987654 }, "geometry": { "type": "Point", "coordinates": [ 13.166037234533263, 55.924109374240864 ] } },
{ "type": "Feature", "properties": { "id": 4, "upgrading": 1, "nodeId": 456789 }, "geometry": { "type": "Point", "coordinates": [ 13.167989254143061, 55.914349276191871 ] } }
]
}
Which has the dynamic properties embedded in the JSON itself. However, if I export the layer as an Esri shapefile then I get multiple files, and the properties are separated out into a separate file, in what looks a little more like a traditional database dump, along with some other files, a few of which are not human readable.
I can import these individual layers into GeoServer no problem, but with the large amounts of data I'll be using it's not practical to add each point in QGIS and export them for use in GeoServer. I'm imagining I'll need to build a service that queries the information I need from multiple databases then builds point objects or geometries from those various sources then dumps them into a postgresql database for GeoServer to use.
So how does something like that normally look in a database? Will those dynamic properties be embedded in a json like structure and queried from there? Or will the dynamic data be stored elsewhere with some sort of ID tying it to the geometry/point?
With the shapefiles, looking at the file ersi_point_test.dbf I can see the dynamic properties stored in there, (and when I click on a point on the map served by GeoServer it displays the correct information) but I can't see a relationship between the geometries/points and the dynamic data; it's obviously located in one of the non-human readable files.
You are mixing separate questions into one. What exactly is a database to you? Are we talking about how to ingest your geodata into PostGIS via QGIS? Or is it strictly about QGIS -> GeoServer in any way?
Internally, PostGIS stores a geometry as a binary object (WKB). GeoJSON is simply a human-readable and very convenient interchange format. You can store data directly in JSON format, but you will lose much of the functionality of storing it directly as a geometry type. You can always convert back using the ST_AsGeoJSON function. As already stated, shapefiles are a different beast, and store the geometry, attributes, spatial index, etc. in separate files.
@bugmenot123 Yes, apologies for that. There's a lot to take in with GIS, and I'm a bit lost. I've been experimenting with QGIS to understand how maps work. What I want to understand is: when properties are associated with a geometry or point, how should they be stored in the database, and how are they then referenced by GeoServer? Is it using filters/styles? Something else? Does GeoServer automatically pick up the properties associated with a geometry if they are stored in the same row in the database?
I am not sure what the actual question is, so here I answer what the title says:
What you got when you exported a Shapefile is actually nothing like a database at all. It is a mess of loosely connected, easily deleted files. The information about the encoding of text in the attribute file is stored in another file. The projection of the geometries is stored in yet another file. The geometries of the features are separate from their attributes. It is an ancient and quite horrible mess of a format.
In your GeoJSON snippet above, you have a simple list of features. Each of them has a list of attributes and its geometry. That's so nice!
In a database like PostGIS you usually also store the geometries and attributes of the features together in the same table. Strictly speaking, if you did not do so, you would need to "join" the attributes in a separate processing step, which is not ideal.
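Concretely, the GeoJSON features above could be sketched as a single PostGIS table like this (table and column names are illustrative assumptions, not anything GeoServer requires):

```sql
-- One row per feature: the geometry and its "dynamic" attributes live side by side,
-- mirroring the GeoJSON export above. GeoServer can publish such a table directly.
CREATE TABLE point_test (
    id        serial PRIMARY KEY,
    node_id   integer,
    upgrading integer,
    geom      geometry(Point, 4326)
);

INSERT INTO point_test (node_id, upgrading, geom)
VALUES (123456, 1, ST_SetSRID(ST_MakePoint(13.09283649916585, 55.808940217262837), 4326));
```

When GeoServer publishes such a layer, the non-geometry columns of each row become that feature's attributes, which is why clicking a point can show the associated data without any extra join step.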
Thank you! So if I have a table row with the geometry and an attribute, then I can access that in GeoServer using filters/styles?
Yes! That's the way it's meant to be done :)
OK this is great! I think it's finally clicked! many thanks! :-D
| common-pile/stackexchange_filtered |
Do 30% of seniors get a heart attack each year?
I saw an ad somewhere that claimed 30% of seniors die each year from heart attack. This sounds way, way too much. Is it really true?
Not quite, but 1 in 4 deaths every year (in the USA) are due to heart disease.
Sounds like you misunderstood the advert. Can you link to the original ad?
@Oddthinking it was an ad on youtube, and it explicitly said "30% of seniors die of a heart attack each year. Prepare yourself yada yada yada." EDIT: Not the yada yada part :)
Is the question "get a heart attack" or "die of a heart attack"?
Was it referring to the USA? What was their product?
This doesn't even pass the smell test.
See any spot on that graph where 30% of people die in a year? Even at 85 which is the top of the graph you see less than half that rate.
Not quite
In 2000-2001, 2010-2011, and 2012-2013, roughly 30% of deaths over 65 in the USA were due to heart disease (30.9%, 30.5%, and 29.8%, respectively).
Source: CDC
Perhaps the advertisement simply misstated the claim. Not 30% of those alive, just 30% of the deaths. Also, this is all heart disease, not just heart attacks.
As the other answer notes, overall death rates are far below 30%, so this claim certainly isn't accurately describing that.
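A quick back-of-the-envelope calculation shows how far apart the two readings are. The numbers below are round illustrative assumptions, not CDC figures:

```python
# Even granting that ~30% of deaths among seniors are heart-related,
# that translates to only about 1% of seniors per year.
seniors = 50_000_000         # rough US population aged 65+ (assumed)
deaths_per_year = 2_000_000  # rough annual deaths in that age group (assumed)
heart_share = 0.30           # the ~30%-of-deaths figure quoted above

heart_deaths = deaths_per_year * heart_share
share_of_seniors = heart_deaths / seniors
print(f"{share_of_seniors:.1%} of seniors, not 30%")
```

So even the most charitable reading of the ad is off by a factor of about 25 from "30% of seniors die each year".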
I was simply showing the original claim was impossible. I think you nailed it as to what really is going on.
Implementing Completion Block swift
I have implemented a completion block which has a logic error. When checkOutBtn is clicked, I want checkFields to be triggered first to check that none of the text fields are empty, before it triggers the addingDeliveryAddress() method to insert into the database and perform the segue. But it's not working like that: when checkOutBtn is clicked, it goes ahead and performs the segue anyway. Thanks all for your help.
@IBAction func checkOutBtn(_ sender: Any) {
checkFields { (results) in
if results {
self.addingDeliveryAddress()
}
}
}
func checkFields(_ completion: @escaping (Bool) -> ()){
if (recipientName.text?.isEmpty)! {
errorMessageLbl.textColor = UIColor.red
errorMessageLbl.text = "Enter Recipient Name"
completion(false)
}else if (recipientMobile.text?.isEmpty)! {
errorMessageLbl.textColor = UIColor.red
errorMessageLbl.text = "Enter Recipient Mobile Number"
completion(false)
}else if (recipientArea.text?.isEmpty)! {
errorMessageLbl.textColor = UIColor.red
errorMessageLbl.text = "Enter Recipient Area"
completion(false)
}else if (recipientAddress.text?.isEmpty)! {
errorMessageLbl.textColor = UIColor.red
errorMessageLbl.text = "Enter Recipient Address"
completion(false)
}
completion(true)
}
//Adding Delivery Address
func addingDeliveryAddress(){
//getting user data from defaults
let defaultValues = UserDefaults.standard
let userId = defaultValues.string(forKey: "userid")
//creating parameters for the post request
let parameters: Parameters=[
"recipientName":recipientName.text!,
"recipientPhoneNumber":recipientMobile.text!,
"recipientArea":recipientArea.text!,
"recipientAddress":recipientAddress.text!,
"nearestLandmark":recipientLandmark.text!,
"userId":Int(userId!)!
]
//Constant that holds the URL for web service
let URL_ADD_DELIVERY_ADDRESS = "http://localhost:8888/restaurant/addDeliveryAddress.php?"
Alamofire.request(URL_ADD_DELIVERY_ADDRESS, method: .post, parameters: parameters).responseJSON {
response in
//printing response
print(response)
let result = response.result.value
//converting it as NSDictionary
let jsonData = result as! NSDictionary
//if there is no error
if(!(jsonData.value(forKey: "error") as! Bool)){
self.performSegue(withIdentifier: "toCheckOut", sender: self)
}else{
let alert = UIAlertController(title: "No Delivery Address", message: "Enter Delivery Address to continue", preferredStyle: .alert)
alert.addAction(UIAlertAction(title: "Ok", style: .destructive, handler: nil))
//alert.addAction(UIAlertAction(title: "No", style: .cancel, handler: nil))
self.present(alert, animated: true)
}
}
}
Why a completion block? There is no asynchronous process.
I suggest this way, which directly returns the error string, or an empty string on success.
@IBAction func checkOutBtn(_ sender: Any) {
let result = checkFields()
if result.isEmpty {
self.addingDeliveryAddress()
} else {
errorMessageLbl.textColor = UIColor.red
errorMessageLbl.text = "Enter Recipient " + result
}
}
func checkFields() -> String {
if recipientName.text!.isEmpty {
return "Name"
} else if recipientMobile.text!.isEmpty {
return "Mobile Number"
} else if recipientArea.text!.isEmpty {
return "Area"
} else if recipientAddress.text!.isEmpty {
return "Address"
}
return ""
}
In your code you're using @escaping in the closure. That's wrong, as you're not doing anything asynchronous in the closure body. When using @escaping, the closure is preserved to be executed later while the function's body gets executed. That's why addingDeliveryAddress() gets triggered before anything is checked. Your closure parameter should be non-escaping, like this:
func checkFields(_ completion: (Bool) -> ()) {
    if (recipientName.text?.isEmpty)! {
        errorMessageLbl.textColor = UIColor.red
        errorMessageLbl.text = "Enter Recipient Name"
        completion(false)
        return // return so completion(true) below is not also called on failure
    } else if (recipientMobile.text?.isEmpty)! {
        errorMessageLbl.textColor = UIColor.red
        errorMessageLbl.text = "Enter Recipient Mobile Number"
        completion(false)
        return
    } else if (recipientArea.text?.isEmpty)! {
        errorMessageLbl.textColor = UIColor.red
        errorMessageLbl.text = "Enter Recipient Area"
        completion(false)
        return
    } else if (recipientAddress.text?.isEmpty)! {
        errorMessageLbl.textColor = UIColor.red
        errorMessageLbl.text = "Enter Recipient Address"
        completion(false)
        return
    }
    completion(true)
}
Thanks, but I have realized my mistake. Your solution is nice, but I think @vadian's suggestion is the best since I only set the textColor once and only when validation failed, so his code is nice and neat.
I2C (TWI): Hold SDA-line low by Slave when line has resistance between Master and Slave
Greetings to everyone!
The schema in brief:
Located at device 1 Located at device 2
Master SDA --------------- <physical connector> --------------- SDA Slave
According to the I2C specifications: the Slave device must provide an ACK signal by holding the SDA line at logical zero on success, or do nothing on error, leaving the I2C line high (NACK, logical one). This works fine when the resistance between Master and Slave is very low, for example 5 milli-Ohm. But when the resistance rises to 100 or even 1000 milli-Ohm, the Slave can't hold the line at an appropriate logical zero. Such resistance can appear when a physical connector exists between Master and Slave.
Schematically, the SDA-signal at line with 100-1000 milli-Ohm resistance behaves like this:
Master sends some control sequence (7 bits + 1 R/W bit) to the I2C bus and the Slave responses with 1 bit ACK (logical zero).
Response with logical zero means that Slave holds line low, when Master holds line high.
Dev address W ACK
1 0 1 0 1 1 0 0 ?
5V ----- ----- ----- -----
| | | | | | ----- ~2.5V
0V | |_____| |_____| |_____ _____
Here the ACK signal is at level between 2-3 volts, so it is obviously not a logical zero.
How to "help" Slave keep SDA-line on zero (below 0.3V)? Are there are any common practices for solving this?
Thanks in advance.
==========================================================
UPDATED POST WITH DETAILS:
Double checked the I2C manual: https://www.nxp.com/docs/en/user-guide/UM10204.pdf
Page 10, section 3.1.6
The Acknowledge signal is defined as follows: the transmitter releases the SDA line
during the acknowledge clock pulse so the receiver can pull the SDA line LOW and it
remains stable LOW during the HIGH period of this clock pulse
Thus, Master must release the line by switching to high impedance mode. But according to the details posted below Master stays at high level, preventing the Slave to hold SDA line low.
Tested at Proteus 8.10
Master: MPU ATmega328P
Slave: EEPROM 24C01C
======================================
With resistor on the BUS:
======================================
======================================
Without resistor on the BUS:
======================================
======================================
I2C debugger logs compare:
======================================
======================================
C-code:
======================================
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
int main(void)
{
// Set bit rate to 400 KHz
TWBR = (8000000LU / 400000LU - 16) / 2;
// Send START
TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);
// Wait till operation is complete: Interrupt Flag is set
while ( !(TWCR & (1 << TWINT)) );
// Exit if status not: START has been transmitted
if (TWSR != 0x08) {
return 1;
}
// Load data register with: 0b_1010_000_0 (EEPROM address + write)
TWDR = 0xA0;
// Transmit data
TWCR = (1 << TWINT) | (1 << TWEN);
// Wait till operation is complete: Interrupt Flag is set
while ( !(TWCR & (1 << TWINT)) );
// Exit if status not: SLA+W has been transmitted and ACK received
if (TWSR != 0x18) {
return 1;
}
// ... rest of code skipped
// Send STOP
TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);
return 0;
}
Compile log:
avr-gcc.exe -Wall -gdwarf-2 -fsigned-char -MD -MP -DF_CPU=1000000 -O1 -mmcu=atmega328p -o "main.o" -c "../main.c"
avr-gcc.exe -mmcu=atmega328p -o "./Debug.elf" "main.o"
avr-objcopy -O ihex -R .eeprom "./Debug.elf" "./Debug.hex"
avr-objcopy -j .eeprom --set-section-flags=.eeprom="alloc,load" --change-section-lma .eeprom=0 --no-change-warnings -O ihex "./Debug.elf" "./Debug.eep" || exit 0
Compiled successfully.
Have you factored-in that the I2C pull-up resistor will be several hundred ohms?
Not several hundred, at 5V the total pull-up should be more than 1.67k under standard conditions which all chips can use.
Your schematic should show roughly where the pull-up resistor is located and what its value is. Please edit your post to include this information. If you don't have a pull-up resistor, you should edit to clarify that point.
Updated post with details
If you are not doing this in real life but in Proteus, then it's the Proteus that's at fault. You should have mentioned that as it was very important part of what is wrong.
Sorry for that, I thought that the application with 30 years history and far from being cheap can't be mistaken in an almost simple simulation.
That will not be a problem with connector of 1 ohms.
You will be in specs even with several tens of ohms of resistance in series, but obviously the actual value depends on supply voltage and pull-up resistance value.
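To see why tens of ohms are still fine, the pulled-low line can be modeled as a simple resistive divider. This is a rough sketch with illustrative values (the 1.67 kΩ pull-up from the comment above), not a full electrical model:

```python
# Voltage at the master's SDA pin while the slave drives the line low,
# modeled as a divider: VDD -> pull-up -> (series R + driver R_on) -> GND.
def sda_low_voltage(vdd, r_pullup, r_series, r_on=0.0):
    return vdd * (r_series + r_on) / (r_pullup + r_series + r_on)

VDD = 5.0
R_PULLUP = 1670.0  # minimum standard-mode pull-up at 5 V, per the comment above
for r_series in (0.005, 0.1, 1.0, 50.0):
    v = sda_low_voltage(VDD, R_PULLUP, r_series)
    print(f"{r_series:7.3f} ohm series -> {v * 1000:8.3f} mV low level")
```

Even 50 Ω of connector resistance leaves the low level around 0.15 V, far below the 0.3·VDD input-low threshold, so a ~1 Ω connector cannot explain a 2–3 V "ACK" level.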
If you are seeing that a chip can't pull low during ACK, the problem is not the series resistance, it is something else that is supposed to not happen, most likely MCU IO pins are not used in open drain mode or something similar.
Edit:
The problem is that Proteus is wrong and fails to simulate I2C peripheral usage. In real life the AVR would make the pins go into open drain mode as soon as TWEN is set.
This is how it should be, but something goes wrong ...
What evolutionary reason is there for having the urinary duct and reproductive organs so close together?
As the old joke goes, "God must have been a civil engineer. Who else would put a waste facility straight through a recreational area?"
But maybe it wasn't God. Is there any evolutionary reason (or background for) having the urinary duct and reproductive organs right next to each other (in both humans and many other vertebrates)?
To be clear, I'm not asking why isn't it elsewhere. I'm asking where the original "design" came from that spread everywhere.
Looks to me evolution aims to minimize the number of holes in a body since these are - in general - the parts with the largest risk of getting an infection.
Yes, there is.
In order to reproduce, material has to leave one organism and enter another and, in species with internal fertilisation, the eggs need to leave later. Think back to a worm-like organism: it's basically a tube with an opening at either end, and those two openings are the easy options for where you can locate your reproductive transfer (and possibly egg-laying). Positioning it at the head end has obvious drawbacks in terms of accidental consumption, so that leaves positioning it at the excretion end.
Once evolved, there is no compelling reason to fundamentally alter this bodyplan, particularly as other animals show little or no sign of sharing our revulsion at bodily excretions.
Edit: It's been a few years since I wrote this answer and I'm not really sure it is correct anymore. I've moved to working with C. elegans, which is the kind of organism that would meet that "worm-like organism" idea I was talking about, but here's the thing: they don't have their reproductive opening near either gut opening, but with its own opening located half way down the organism. If extant organisms don't follow the bodyplan I suggested, does the argument really hold?
The reason lies within our wormy chordate ancestors - an orifice used to eliminate waste can also function as an orifice to eject eggs. My vertebrate zoology is a little bit rusty so I would suggest picking up any first-course book on the subject, most of them cover the evolution of urinary/reproductive organs extensively.
I actually like this answer best - the body plan is ancient and mostly hasn't changed...
Is there any evolutionary reason (or background for) having the urinary duct and reproductive organs right next to each other (in both humans and many other vertebrates)?
Because it works.
Evolution doesn't grow things or remove things because they might be funky or useful, Evolution is dictated by the survivability of organisms. If a change helps an organism survive to pass on its genes, it's kept. If it doesn't, it's eliminated from the gene pool over time. As long as a change doesn't significantly help or harm, it can be kept for hundreds, thousands, even millions of generations.
Evolutionarily speaking, they are where they are because we're mammals that were formerly quadrupeds that became bipeds. The stereotypical quadrupedal design has the reproductive organs located near the pelvis, probably because they're best protected there: powerful hind limbs for kicking, and nasty teeth and claws up front. It also gives easy access for mating, which is pretty important if you want to continue the species. On quadrupeds it could be pretty darn awkward if the genitalia were located near the ribcage.
Really, you could come up with a lot of reasons why the organs are located where they are. There are a lot of advantages, and a lot of disadvantages, but in the end the simplest explanation is going to be "Because we can survive with them there." The advantages have outweighed the disadvantages for our species' history thus far, and until that changes they'll probably just stay where they are.
your middle paragraph explains why the reproductive system is located where it is. But why is the urinary system located here as well, not say on the abdomen of the quadruped.
@Chris Could you clarify? The kidneys and bladder are located within the abdomen of quadrupeds (and us).
Yes I meant urinary duct, not system, which is located where the reproductive system is. That is, why are these located in the same spot as each other. Your second paragraph explained for the reproductive system, but not the urinary duct.
Nice job giving an answer that avoids the pitfalls of naive adaptationism. The presence of a trait doesn't mean it's a good idea, just that it's not bad enough to kill anyone (and/or kill their chances at reproducing).
I was asking where it came from in the first place, not why it didn't change.
@JoeZeng - That's fine. Next time make that your first question instead of the question that's asked nearly three weeks after the fact via an edit. We can only answer what's on the page, we're not psychic.
I thought it had been clear enough in the initial ask, but apparently I was wrong.
The human body has three channels for the excretion of substances, one for each of the three physical phases of matter: the lungs for gaseous substances such as CO2, the anus for solid substances such as feces, and the urinary channel for liquid substances.
It turned out that sperm is liquid, so it uses the same channel as liquid urine.
Excretion of substances is a great reason for them to exist, but not for their positioning. Just because sperm is a liquid isn't a great reason for the penis to be located between the legs.
Anixx, one word - beans.
@MCM, the question doesn't ask why the penis is located between the legs, but rather asks why the urinary duct is close to the reproductive organs. Anixx is trying to say that it is more efficient to utilize the one pipe for both liquids, just as we use our lungs to both breathe in oxygen, and expel carbon dioxide, rather than have two separate systems.
@Chris - I understand your point, but my critique was that there are pretty common examples of organisms that don't put the excretion of phases of matter together, so I didn't find Anixx's argument as full as it could be.
Then why when one vomits, do we use our mouth? Looks to me that vomit is liquid as well.
sound volume settings doesn't do anything
I installed Ubuntu the other day, and I am quite new to it. I've noticed that the sound volume setting at the top right hand side of the screen doesn't affect the volume of my media players (it may affect system sounds but i haven't really noticed any difference either).
I turn it all the way up it doesn't affect the sound, all the way down to mute and the same thing.
Is this normal? Should that volume setting affect only system sounds or should it also affect the sounds from media players? If so how do I remedy this problem?
I'm using a combination of VLC media player and the Google chrome Plex App.
Running Ubuntu 16.04.
This could help, once my friend had this issue and following these steps fixed it:
Removed ~/.config/pulse directory.
Started Pulse Audio.
rm -rf ~/.config/pulse
pulseaudio --start
In case it's still showing errors, you will have to reinstall PulseAudio and the equalizer.
sudo apt-get purge pulseaudio pulseaudio-equalizer
sudo apt-get install pulseaudio
sudo apt-get install pulseaudio-equalizer
pulseaudio --start
Try running sudo apt-get install pavucontrol
Volume control in Ubuntu works for most everyone, so recommending PulseAudio Volume Control as an alternative doesn't solve the problem. First you should find out what type of hardware is involved and check whether there are bug reports on it, or whether other users of the same hardware have the same problem. The other answer, reinstalling PulseAudio, is a good one that might work, but the OP never responded.
Ionic capacitor Facebook login problem creating the facebook app step
I'm trying to add Facebook login to my Ionic Capacitor Firebase app, but I'm having a lot of problems creating the app on the Facebook developers site and associating it with my app. For now I'm trying only with the Android platform.
The problem is that it seems mandatory to have the app published on a store before creating the Facebook app, because when I try to change public_profile access, for example, to advanced, a modal shows up saying that it can't find the app name on the Play Store. But I can't believe it's necessary to have the app on the Play Store before integrating Facebook login into my app.
What am I doing wrong? I'm following this tutorial https://devdactic.com/ionic-facebook-login-capacitor/ but in the video no store option is added on Android, and in my case I need to add Google Play or some other store to save the platform.
I'm completely lost.
Thanks.
Facebook should allow you to create an app in developer mode without necessarily entering the store code.
How can I calculate n-th permutation (or tell the lexicographic order of a given permutation)?
This question has two parts, though since I'm trying to come up with a Prolog implementation, solving one will probably immediately lead to a solution of the other.
Given a permutation of a list of integers {1,2,...,N}, how can I tell what is the index of that permutation in lexicographic ordering?
Given a number k, how can I calculate k-th permutation of numbers {1,2...,N}?
I'm looking for an algorithm that can do this reasonably better than just iterating a next-permutation function k times. As far as I know, it should be possible to compute both of these directly.
What I came up with so far is that by looking at numbers from the left, I can tell how many permutations were before each number at a particular index, and then somehow combine those, but I'm not really sure if this leads to a correct solution.
Here's one answer for deriving the index http://stackoverflow.com/questions/24215353/how-to-find-the-index-of-a-k-permutation-from-n-elements/24234429#24234429
I'll just give the outline of a solution for each:
Given a permutation of a list of integers {1,2,...,N}, how can I tell what is the index of that permutation in lexicographic ordering?
To do this, ask yourself how many permutations start with 1? There are (N - 1)!. Now, let's do an example:
3 1 2
How many permutations of 1 2 3 start with 1 or 2? 2*2!. This one has to come after those, so its index is at least 2*2! = 4. Now check the next element: among permutations of the remaining elements 1 2, how many start with something smaller than 1? None. You're done, the index is 4. You can add 1 if you want to use 1-based indexing.
Given a number k, how can I calculate k-th permutation of numbers {1,2...,N}?
Given 4, how can we get 3 1 2? We have to find each element.
What can we have on the first position? If we have 1, the maximum index can be 2! - 1 = 1 (I'm using zero-based indexing). If we have 2, the maximum can be 2*2! - 1 = 3. If we have 3, the maximum can be 5. So we must have 3:
3
Now, we have reduced the problem to finding the 4 - 2*2! = 0-th permutation of 1 2, which is 1 2 (you can reason about it recursively as above).
Think how many permutations start with the number 1, how many start with the number 2, and so on. Let's say n = 5, then 24 permutations start with 1, 24 start with 2, and so on. If you are looking for permutation say k = 53, there are 48 permutations starting with 1 or 2, so #53 is the fifth of the permutations starting with 3.
Of the permutations starting with 3, 6 each start with 31, 32, 34 or 35. So you are looking for the fifth permutation starting with (3, 1). There are two permutations each starting with 312, 314 and 315. So you are looking for the first of the two permutations starting with 315. Which is 31524.
Should be easy enough to turn this into code.
You can also have a look at the factorial number system, especially the part regarding permutations. For a given number k, you are first supposed to find its factorial representation, which then easily gives the required permutation (actually, (k+1)-st permutation).
An example for k=5 and numbers {1,2,3}:
5 = 2*2! + 1*1! + 0*0! = (210)_!
so the factorial representation of 5 is 210. Let's now map that representation into the permutation. We start with the ordered list (1,2,3). The leftmost digit in our factorial representation is 2, so we are looking for the element in the list at the index 2, which is 3 (list is zero-indexed). Now we are left with the list (1,2) and continue the procedure. The leftmost digit in our factorial representation, after removing 2, is 1, so we get the element at the index 1, which is 2. Finally, we are left with 1, so the (k+1)-st (6th) permutation of {1,2,3} is {3,2,1}.
Even though it takes some time to understand it, it is quite efficient algorithm and simple to program. The reverse mapping is similar.
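Not the asker's Prolog, but here is a hedged Python sketch of both directions using the factorial number system described above (0-indexed, matching the worked example):

```python
from math import factorial

def perm_rank(perm):
    """Lexicographic index (0-based) of a permutation of distinct items."""
    items = sorted(perm)
    rank = 0
    for i, p in enumerate(perm):
        j = items.index(p)            # how many unused items are smaller than p
        rank += j * factorial(len(perm) - i - 1)
        items.pop(j)                  # p is now used
    return rank

def perm_unrank(k, n):
    """k-th (0-based) lexicographic permutation of 1..n."""
    items = list(range(1, n + 1))
    perm = []
    for i in range(n, 0, -1):
        d, k = divmod(k, factorial(i - 1))  # leftmost factorial digit
        perm.append(items.pop(d))
    return perm

print(perm_unrank(5, 3))   # → [3, 2, 1], matching the k=5 example above
print(perm_rank([3, 1, 2]))  # → 4, matching the first answer's example
```

Both run in O(n²) because of the list operations; a balanced tree or Fenwick tree over the unused items brings this down to O(n log n).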
traveling salesman without return and with given start and end cities
I am looking for the name of the following problem: traveling salesman problem (visit each city exactly once) but without returning to the start city and with visiting a given city at the end. In other words, I would like to specify the start and end cities, and I don't want to go back to the start city.
Thanks!!!
I doubt this has its own name, as it's trivially isomorphic to the normal TSP.
From standard TSP to this: Given a directed weighted graph for TSP, with a start/end node, split the start/end node into a start node and an end node, with all the outgoing edges on the start node and all the incoming edges on the end node.
From this to standard TSP: Remove all outgoing edges from the end node; add a single edge from the end node to the start node (which is now the start/end node).
The problem you're describing, where you want to visit each city exactly once, with a specified start and end city, and without returning to the start city, is commonly known as the "Open Traveling Salesman Problem" (Open TSP). In the standard Traveling Salesman Problem (TSP), the objective is to find the shortest possible route that visits each city exactly once and returns to the starting city. The Open TSP relaxes the requirement of returning to the starting city, allowing for a different ending city.
javascript minus sign in variable
I've got an XML file. After converting it to JSON I want to access some content within it. This was possible. However, some variables within the JSON contain a - (minus sign). When I try to access them, JavaScript interprets this as a calculation. Is the only workaround to replace all the - signs?
Can you show some code?
JSON cannot contain variables. Valid JSON is always a single object or array.
Hyphen is not a minus sign.
You can use bracket notation:
yourJson['ab-cd']; // access to 'ab-cd' property that contains '-' sign
If you want to define or access properties with special characters in them, you need to use string property names:
var obj = {
'some-string-with-hyphens': true,
'another-one': true
};
var another = obj['another-one'];
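Tying this back to the JSON angle: after JSON.parse, a hyphenated key is perfectly valid data, it just can't be reached with dot notation (the key name below is an illustrative assumption, not from the asker's file):

```javascript
const data = JSON.parse('{"background-image": "url(foo.jpg)"}');

// data.background-image would be evaluated as (data.background) - image,
// i.e. a subtraction -- the same reason the hyphen reads as a minus sign.
console.log(data["background-image"]); // url(foo.jpg)
```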
Django contentType and read only database
A newbie question:
I have a Django project with two applications, core and nagios, each with its own model.
I have two database connections, default and nagios.
The nagios database is read-only.
When I use python manage.py syncdb, this error appears:
Table 'nagios.django_content_type' doesn't exist
Why does the contentType application need to create a content_type table in the nagios database?
And how can I force the contentType application to check against the default database connection only?
Sounds like you just need to set up an automatic database router.
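A minimal sketch of such a router, assuming the app label is nagios (the class name and routing choices are hypothetical, not from the asker's project):

```python
# Route the read-only "nagios" app to its own connection and keep everything
# else -- including contenttypes -- on "default".
class NagiosRouter:
    route_app_labels = {"nagios"}

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "nagios"
        return "default"

    def db_for_write(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return None  # read-only database: refuse writes
        return "default"

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Never create tables (django_content_type etc.) on the nagios connection.
        return db == "default"
```

Registered via DATABASE_ROUTERS = ["myproject.routers.NagiosRouter"] in settings (path assumed), syncdb/migrate should then stop trying to create django_content_type on the nagios connection, because allow_migrate only permits table creation on default.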
TestNG - Read custom annotation details
Requirement: Read custom annotation details and generate report for all test classes of all suites.
Tried Solution:
Implemented custom listener using ITestListener. But don't see direct way to get custom annotation details used as part of test methods apart from below way.
@Override
public void onStart(ITestContext context) {
ITestNGMethod[] testNGMethods = context.getAllTestMethods();
for (ITestNGMethod testNgmethod : testNGMethods) {
Method[] methods = testNgmethod.getRealClass().getDeclaredMethods();
for (Method method : methods) {
if (method.isAnnotationPresent(MyCustomAnnotation.class)) {
//Get required info
}
}
}
}
The inner loop triggers almost n*n times (n = number of methods) for each test class. I can control it by adding conditions.
As I'm new to the TestNG framework, I would like to know a better solution for my requirement, i.e. generating a report by reading custom annotation details from all test methods in all suites.
Here's how you do it.
I am using the latest released version of TestNG as of today, viz. 7.0.0-beta3, and Java 8 streams
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestNGMethod;
public class MyListener implements ITestListener {
@Override
public void onStart(ITestContext context) {
List<ITestNGMethod> methodsWithCustomAnnotation =
Arrays.stream(context.getAllTestMethods())
.filter(
iTestNGMethod ->
iTestNGMethod
.getConstructorOrMethod()
.getMethod()
.getAnnotation(MyCustomAnnotation.class)
!= null)
.collect(Collectors.toList());
}
@Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
@Target({METHOD, TYPE})
public static @interface MyCustomAnnotation {}
}
That's great, I will make it a for loop to read a few details from the custom annotation.
How to relate these two definition of sheaf of regular function
In Gortz and Wedhorn's Algebraic geometry book the sheaf of regular function on $U$ is defined as :
Definition 1.39. Let $X$ be an irreducible affine algebraic set and let $\emptyset \neq U \subseteq X$ be open. We denote by $\mathfrak{m}_x$ the maximal ideal of $\Gamma(X)$ corresponding to $x \in X$ and by $\Gamma(X)_{\mathfrak{m}_x}$ the localization of the affine coordinate ring with respect to $\mathfrak{m}_x$. We define
$$
\mathscr{O}_X(U)=\bigcap_{x \in U} \Gamma(X)_{\mathfrak{m}_x} \subset K(X) .
$$
We let $\mathscr{O}_X(\emptyset)$ be a singleton.
This definition is not the same as the one I originally learned, where a function is defined pointwise to be regular if there exists some $f/h$ representing the map with $h\ne 0$ around that point.
I want to see that these two definitions are the same, but I don't know how to write it down precisely.
A side question: an element of $\bigcap_{x \in U} \Gamma(X)_{\mathfrak{m}_x}$ may not have a single representative, correct? (I mean there may not be an $f/g$ with $g\notin \mathfrak{m}_x$ for all $x$.)
For the side question, Görtz and Wedhorn already give a remark in 1.41: the answer is yes, there may not be a single representative element.
Let $x\in U$. Any element of $\mathscr{O}_X(U)=\bigcap_{x \in U} \Gamma(X)_{\mathfrak{m}_x} \subset K(X)$ has a representation $f/g$ with $g(x)\neq 0$ ($\iff g\in \Gamma(X)\setminus \mathfrak{m}_x$). We can define a map $U \to K$ using these representations. This establishes Görtz-Wedhorn $\Longrightarrow$ your definition.
Conversely, suppose we have a map which can be represented as $f/g\in K(X)$ locally. If we have two such representations $f_1/g_1, f_2/g_2$, the intersection of their domains of definition $D(g_1)\cap D(g_2)=D(g_1g_2)\neq \varnothing$ since $X$ is irreducible. Then Lemma 1.38 ensures $f_1/g_1=f_2/g_2$ as elements of $K(X)$. This shows the map $$\{\text{locally representable map on }U\}\to
\mathscr{O}_X(U)=\bigcap_{x \in U} \Gamma(X)_{\mathfrak{m}_x} \subset K(X)$$
is well defined, so that we have your definition $\Longrightarrow $Görtz-Wedhorn.
Thank you @Acrobatic, I see now that the alternative definition in the post is just the image of the Görtz-Wedhorn definition via the map $\mathscr{O}_X(U)\to \text{Map}(U,k)$, correct?
Do you have an example of a map that is only locally representable? It feels a bit abstract; it would be much easier to understand with an example. @Acrobatic
@ yili Yes it is. I noticed now that the authors have constructed the first map after definition 1.39.
Got it, thank you. Do you have examples?
I am sorry, but I do not know any such examples other than Example 5.36. The authors use a dimension argument. Intuitively, what is happening there is that the intersection of the zero sets of two polynomials must be a surface in $\mathbb{A}^4$, just as in $\mathbb{R}^3$ the intersection of two planes is a line.
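For reference, the example alluded to above can be sketched as follows (from memory, so compare with Example 5.36 in the book before relying on the details):

```latex
% Hedged sketch of the quadric-cone example.
Let $X = V(xw - yz) \subset \mathbb{A}^4$ and $U = D(y) \cup D(w) \subseteq X$.
Define $s$ on $U$ by
\[
  s = \frac{x}{y} \ \text{on } D(y),
  \qquad
  s = \frac{z}{w} \ \text{on } D(w);
\]
these agree on $D(y) \cap D(w)$ because $xw = yz$ holds in $\Gamma(X)$,
so $s \in \mathscr{O}_X(U)$. However, a dimension argument shows that no
single fraction $f/g$ with $g$ nonvanishing on all of $U$ represents $s$.
```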
Thank you Acrobatic, I will look at this example when reading that chapter
| common-pile/stackexchange_filtered |
Use outer class in my main.cpp
I'm using someone else's class for generating a Delaunay triangulation. The class has two files: VoronoiDiagramGenerator.h and VoronoiDiagramGenerator.cpp. It is all encapsulated into a class.
I want to call the class method in my main.cpp file, so I should include the VoronoiDiagramGenerator.h file.
If I want to use gcc or g++, how do I set the command-line parameters? Before, I just used gcc -o main.cpp or something similar.
If I want to use makefile, how would I write it?
If I want to compile the two files (VoronoiDiagramGenerator.h && VoronoiDiagramGenerator.cpp) into a .so file, how should I do that?
I just tested the source code. On Windows with VC++, adding the .cpp and .h files to the project workspace works fine. But if I only include the .h file in my main file, it gives errors similar to the ones on Linux: some unresolved-reference errors. So I think just including the header in my main file and running gcc main.cpp is the mistake.
In your main.cpp include VoronoiDiagramGenerator.h and use it.
If .h file is not in your current or project directory, make sure to include -Idirectory
Yes, I did that. But under Ubuntu it gives me "undefined reference" errors for the methods. I also used it under Windows with VC++ without errors, so something must be wrong.
| common-pile/stackexchange_filtered |
ActiveRecord::HasManyThroughOrderError for the has many through association
We are upgrading the Rails version from 5.0.6 to 5.1.4
I have the following code :
class Profile < ApplicationRecord
simple_roles
has_many :profile_roles
has_many :roles, through: :profile_roles
end
class ProfileRole < ApplicationRecord
belongs_to :role
belongs_to :profile
end
class Role < ApplicationRecord
has_many :profile_roles
has_many :profile, through: :profile_roles
end
I got the error while doing Profile.first.roles:
ActiveRecord::HasManyThroughOrderError: Cannot have a has_many :through association 'Profile#roles' which goes through 'Profile#user_roles' before the through association is defined.
Can anyone suggest a solution for this?
It's not just a typo is it?
has_many :profile, through: :profile_roles
:profile should be :profiles I think?
Did this do the job, @rohit?
There's a typo in your association. In role.rb, try replacing
has_many :profile, through: :profile_roles
with
has_many :profiles, through: :profile_roles
Does this work now?
| common-pile/stackexchange_filtered |
php errors are not being shown despite display_errors being on
My problem is that I have a site which isn't being shown; when I remove my PHP code it is shown, but there are no PHP errors being shown at all.
Based on answers to other questions I've tried all of these solutions:
ini_set('display_errors', 1);
ini_set('error_log', 'phplog.log');
error_reporting(E_ALL);
I also set display_errors to On in php.ini, which is confirmed when I use phpinfo();, as another answer suggested checking. So why isn't anything being shown?
This is what I have:
index.php:
<?php
ini_set('display_errors', 1);
ini_set('error_log', 'phplog.log');
error_reporting(E_ALL);
phpinfo();
include("cubenex-api.php");
include("php.ini")
$connections = new Connections();
$connections->connect();
$user = new User($connections, "test");
?>
And loads of html and css which isn't important
And the cubenex-api.php file:
<?php
ini_set('display_errors', 1);
ini_set('error_log', 'phplog.log');
error_reporting(E_ALL);
class User{
public $NAME, $STATUS, $SINCE, $SEEN, $FAVOURITE, $LOCATION, $BIRTHDATE, $ONLINE, $LANGUAGES;
private $PASSWORD;
public $RANK;
public $PERMISSIONS;
public $EMAIL;
public $GUEST;
public function __construct($connections, $uuid){
$pl = $connections->$players->query("SELECT * FROM profiles WHERE UUID = '".$uuid."'");
if ($p->num_rows > 0){
$p = $pl->fetch_assoc();
$g = $connections->$players->query("SELECT * FROM general WHERE UUID = '".$uuid."'");
$general = $g->fetch_assoc();
$NAME = $general["NAME"];
$RANK = $general["RANK"];
$EMAIL = $p["EMAIL"];
$STATUS = $p["STATUS"];
$SINCE = $p["SINCE"];
$SEEN = $p["SEEN"];
$PASSWORD = $p["PASSWORD"];
$FAVOURITE = $p["FAVOURITE"];
$LOCATION = $p["LOCATION"];
$BIRTHDATE = $p["BIRTHDATE"];
$ONLINE = $p["ONLINE"];
$LANGUAGES = $p["LANGUAGES"];
$PERMISSIONS = new Permissions($RANK, $connections);
$GUEST = false;
}else{
$NAME = "Guest";
$PERMISSIONS = new Permissions("Guest", $connections);
$GUEST = true;
}
}
}
class Permissions{
private $permissions;
private $cons;
public function __construct($rank, $connections){
$cons = $connections;
$result = $connections->$website->query("SELECT * FROM permissions WHERE RANK = '".$rank."'");
$permissions = explode(",", $result["PERMISSIONS"]);
}
public function __construct(){
$permissions = array();
}
public function hasPermission($permission){
return in_array($permission, $permissions);
}
public function givePermission($permission){
array_push($permissions, $permission);
}
public function takePermission($permission){
if (in_array($permission, $permissions)){
unset($permissions[array_search($permission, $permissions)]);
}
}
public function upload($rank){
$cons->$website->query("UPDATE permissions SET PERMISSIONS = '".implode(",", $permissions)."' WHERE RANK = '".$rank."'");
}
}
class Connections{
public $players;
public $website;
public function connect(){
$players = new mysqli("*******", "*****", "*****", "*******");
$website = new mysqli("*******", "*****", "******", "******");
}
}
?>
I hid the mysql login details, but I know that the ones I'm using are correct and working.
If there's a syntax error in your PHP code then activating error reporting in PHP will not work because the PHP will never be executed due to the syntax error.
In your Connections class - your variables in connect() should probably reference $this->players etc as they are currently only using function local variables.
@NigelRen That worked before though, so it can't be causing the error.
I'm not saying it's caused the error you have, but its still something that should be fixed.
include("php.ini")
that has to hurt !!
I'm not certain that this is your only error, but that line is definitely incorrect and will make your script crash.
remove that no matter what
The path to the php.ini file is defined in your PHP/server settings; this will vary depending on your system, but under no circumstances is including that file from within your PHP script valid. Also, the semicolon after it is missing.
Oh dear, that was it. I have no actual idea how I oversaw that. I'm sorry for wasting your time and thanks for helping.
Also, as mentioned by Nigel Ren, your script has issues in how you are calling your attributes/methods, like $connections->$players or $cons->$website; these are incorrect and should look something like $connections->players and $this->cons->website. This was not your question, but you definitely want to fix those.
The main reason you're not getting any error is that display_startup_errors is off.
include("php.ini") wasn't your issue. (Although Nathanael brings up a valid point that it's likely you're not including it correctly.) You can very well include a php.ini file if you so desire (keep in mind it will be processed as a PHP file, not as an ini or configuration file). The error lies with the lack of a semicolon (;) at the end of that line. With display_startup_errors set on, it would have displayed any compile-time warnings and errors.
| common-pile/stackexchange_filtered |
Condition if element exist still continues even if the element is not existing in Selenium Python
I have this code that if the element exists, it will print the innerHTML value:
def display_hotel(self):
for hotel in self.hotel_data:
if hotel.find_element(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]'):
hotel_original_price = hotel.find_element(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]')
hotel_original_price = hotel_original_price.get_attribute('innerHTML').strip().replace(' ', '')
print(f"Original:\t\t\t{hotel_original_price}")
When I proceed and run the program, I get an error of
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"span[class="_a11e76d75 _6b0bd403c"]"}
I was hoping that if the element span[class="_a11e76d75 _6b0bd403c"] does not exist, it would just be skipped altogether. Why is it still trying to run the code even inside an if block? Am I missing anything here?
Just use a try and except to catch the error.
what would you code within the except block? Is it alright just to write pass?
It would be fine to do so or continue.
If the element is missing, the Selenium driver throws an exception.
To make your code work, you should use the find_elements method.
It returns a list of elements matching the passed locator.
So if there are matches, the list will contain web elements, while if there are no matches it will be an empty list; Python treats a non-empty list as Boolean True and an empty list as Boolean False.
So your code could be as following:
def display_hotel(self):
for hotel in self.hotel_data:
if hotel.find_elements(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]'):
hotel_original_price = hotel.find_element(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]')
hotel_original_price = hotel_original_price.get_attribute('innerHTML').strip().replace(' ', '')
print(f"Original:\t\t\t{hotel_original_price}")
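To see why this pattern works without spinning up a browser, here is a hedged sketch with a made-up stand-in object: find_elements hands back a plain list, and Python treats an empty list as False.

```python
# Browser-free illustration of the find_elements pattern. FakeHotel is a
# hypothetical stand-in for a Selenium WebElement.
class FakeHotel:
    def __init__(self, prices):
        self._prices = prices

    def find_elements(self, by, selector):
        # Selenium's find_elements returns [] when nothing matches.
        return list(self._prices)

with_price = FakeHotel(["$120"])
without_price = FakeHotel([])

results = []
for hotel in (with_price, without_price):
    # empty list -> falsy -> the hotel without the element is skipped
    if hotel.find_elements("css selector", 'span[class="..."]'):
        results.append(hotel.find_elements("css selector", 'span[class="..."]')[0])

print(results)  # only the hotel that actually has the element contributes
```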
This works. Just a follow-up question: when I use find_elements, it seems to search the entire element/document (not exactly sure) before it skips the non-existing element in the loop.
I'm not sure I understand your question, I'm sorry
Let me rephrase: it takes quite some time before it skips the loop iteration. For example, if the element span[class="_a11e76d75 _6b0bd403c"] does not exist, it takes a few more seconds before proceeding to the next loop iteration. But if span[class="_a11e76d75 _6b0bd403c"] exists, it proceeds to the next iteration almost instantly.
Have you defined the driver.implicitly_wait() in your code?
| common-pile/stackexchange_filtered |
Powershell apply verbosity at a global level
Imagine if you have a script containing a single line of code like
ni -type file foobar.txt
where the -verbose flag is not supplied to the ni command. Is there a way to set the Verbosity at a global PSSession level if I were to run this script to force verbosity? The reason I ask is that I have a group of about 60 scripts which are interdependent and none of these supply -verbose to any commands they issue and I'd like to see the entire output when I call the main entry point powershell script.
$VerbosePreference = 'Continue'; see help about_Preference_Variables.
Use $PSDefaultParameterValues:
$PSDefaultParameterValues['New-Item:Verbose'] = $true
Set that in the Global scope, and then the default value of -Verbose for the New-Item cmdlet will be $True.
You can use wildcards for the cmdlets you want to affect:
$PSDefaultParameterValues['New-*:Verbose'] = $true
Will set it for all New-* cmdlets.
$PSDefaultParameterValues['*:Verbose'] = $true
will set it for all cmdlets.
I hadn't heard of $PSDefaultParameterValues before and this definitely works for New-Item. The problem is that I would have to add such a rule for each command that is available in powershell in order to cover those called in these 60 scripts and any future edits
Not true. You can wildcard the cmdlets you want to set. Updated the answer with an example.
I find myself hindered by the powershell versions I'm working with. Any idea how I could achieve same in powershell version 2? http://stackoverflow.com/questions/28808908/alter-behaviour-of-every-cmdlet-in-powershell-session-to-pass-verbose-flag
You could also do:
$global:VerbosePreference = 'continue'
This works better than $PSDefaultParameterValues, as it tolerates functions that don't have a -Verbose parameter.
| common-pile/stackexchange_filtered |
How to add a class to body element if it's a frontpage in WordPress?
For the last few days I have been writing my own WordPress theme, but I have run into another problem. This time I have no clue how to make it work.
I would like to add a class to every frontpage on my website. So if a single page becomes a frontpage, it will get another class to body tag like "home".
Almost every premium theme has this function, but I just can't find the solution.
Does anybody have any idea?
Thank you! Stepan
You can add the class in body tag using body_class filter as shown below:
function home_body_class($classes) {
if ( is_front_page() ) {
$classes[] = 'home';
}
return $classes;
}
add_filter( 'body_class', 'home_body_class' );
You can manipulate the condition for the static homepage, blog page and so on.
That's it what I was looking for. Thanks!
| common-pile/stackexchange_filtered |
Mesh is going through random and erratic deformations that only show up when rendering
There appear to be some vertices that are suddenly jumping around all over the place. The weird part is that this only happens in the render and not in the viewport. It also only occurs when I'm using the Cycles render engine and not Eevee. The only modifiers are the armature and Corrective Smooth.
video of what I'm talking about
my blend file
It would be helpful if you at least told us one frame where it happens...
And just a hint: I would try setting Corrective Smooth to inactive to check whether it still happens. If not, you know the reason. If yes, try deactivating other things...
| common-pile/stackexchange_filtered |
Generate random symmetric matrix with largest eigenvalue approximately 1
My goal is to generate a positive (entry-wise) matrix $P\in \mathbb{R}_{>0}^{N\times m}$ and then to set $S=PP^T$ such that the largest eigenvalue of $S$ is $\approx 1$ (or equal). Note that if $y$ is a row vector then $ySy^T=(yP)(yP)^T=\|yP\|^2 \geq 0$ so $S$ is positive semidefinite, and since $S$ is also symmetric, all the eigenvalues are thus real and nonnegative.
I am not quick with linear algebra, so I first tried looking into factorizations of a symmetric PSD matrix. For example, since $S$ is symmetric and real we can write $S=Q\Lambda Q^T$, where $Q$ (with $QQ^T=I$) is an orthogonal matrix whose columns are the eigenvectors of $S$ and $\Lambda$ is a diagonal matrix with the eigenvalues on the diagonal. Since these are real and nonnegative we can write $S=(Q \Lambda^{1/2})(Q \Lambda^{1/2})^T$, so I tried to start by randomly generating $P=Q\Lambda^{1/2}$, specifying the eigenvalues $1,\lambda_2,\dotsc$ in decreasing order at random and taking $Q$ to be a random orthogonal matrix. But then $P$ is not necessarily positive, and so neither is $S$, which I need. Any pointers would be greatly appreciated. I repeat the question below for succinctness.
My Question
Is it possible to generate a random symmetric matrix of the form $S=PP^T$ where $P$ is positive entrywise and the largest eigenvalue of $S$ is $1$?
What are the required dimensions of $P$?
@JimmyK4542 $P$ does not necessarily need to be square, but $P\in \mathbb{R}_{>0}^{N\times m}$ where $m\leq N$, and if it can be done with $m>N$ I'd be interested in that too. I'll add this to the main body of the question too.
One way to generate a random symmetric matrix of the form $S=PP^T$ where $P$ is positive entrywise and the largest eigenvalue of $S$ is $1$ is to let $R$ be an $n \times m$ matrix whose entries are i.i.d. from any distribution whose support is contained in $(0,\infty)$, and then let $P = \tfrac{1}{\sigma_1(R)}R$, where $\sigma_1(R)$ is the largest singular value of $R$. The entries of $R$ are all positive, and it's largest singular value is positive, so the entries of $P$ are all positive. Furthermore, the largest eigenvalue of $RR^T$ is $\sigma_1(R)^2$, so the largest eigenvalue of $S = PP^T = \tfrac{1}{\sigma_1(R)^2}RR^T$ is $1$.
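A minimal NumPy sketch of this construction (the uniform distribution, seed, and dimensions are arbitrary choices):

```python
# Hedged sketch: R has i.i.d. strictly positive entries, and dividing by
# its largest singular value forces the top eigenvalue of PP^T to be 1.
import numpy as np

rng = np.random.default_rng(0)
N, m = 6, 4
R = rng.uniform(0.1, 1.0, size=(N, m))          # strictly positive entries
sigma1 = np.linalg.svd(R, compute_uv=False)[0]  # largest singular value of R
P = R / sigma1                                  # still entrywise positive
S = P @ P.T                                     # symmetric PSD

largest = np.linalg.eigvalsh(S).max()
print(P.min() > 0, np.isclose(largest, 1.0))
```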
+1 for the prompt response and example, but I guess I did not emphasize “random” enough in the original question.
Thank you! I guess I need to learn more about SVD! This is awesome.
| common-pile/stackexchange_filtered |
Why Exception class isn't abstract?
Since all exceptions that can occur in our program are instances of specific concrete subclasses of the Exception class or the Error class, why isn't the Exception class defined as abstract?
because you can throw new exception :)
Your statement is not quite correct. All throwable exceptions are subclasses of Throwable. This class is not abstract either and the question is: why should it be? There are no abstract methods in this class.
@Turing85 There is a definite case in favor of OP's suggestion, given the way exceptions were intended to be used. All of Throwable, Excepton, and Error may have been abstract classes. Abstract methods are not relevant here, but instantiation.
@VishalZanzrukia You've got it inside-out: because Exception is not abstract, you can instantiate it.
@MarkoTopolnik I think this is kind of a design philosophy thing. What good is an abstract class if it does not have any abstract methods? You simply deny the user instantiation of these objects without any (good) reason, which is inconvenient (it is different if you specifically deny instantiation when using e.g. a builder pattern).
Already asked here http://programmers.stackexchange.com/questions/119668/abstract-exception-super-type
@Turing85 How do you know there is no good reason? Preventing the programmer from ever throwing a generic checked exception definitely qualifies as a good reason.
@user35443 The question you link to is about C#, with the key distinction that there are no checked exceptions there. I agree that throwing a generic RuntimeException makes sense many times.
@Turing85 Again, you're looking at it the wrong way around. Is there any reason why you should be able to instantiate a raw Exception? I can't see any, can you? So make it abstract then. It's a perfectly valid argument. On a very basic level OOP is about building a model of the world and if something doesn't make sense in the modelled world, it shouldn't be allowed in your model.
all Exceptions that can occur in our program are from specific concrete sub-classes of Exception class
This is not correct. If in your code you don't need to create a new specific Exception class but you need to through a generic exception you can always do the following:
throw new Exception("Generic Exception");
The same can be said about the class Object. Why isn't Object declared as abstract? Because you can use it directly if needed. For example as a lock for synchronized blocks of code.
i know that we have this permission to do this, but who finally does it? its very vague in my opinion
@TheodoraBaxevani The thing is that this design choice is very, very old. In retrospect, exceptions should probably be designed in another way, but at the time, they probably thought it was a good idea to be able to throw new Exception(). Since it's possible, lots of programs do it. And since Java cares very much about backwards compatibility, they can't change that design anymore: it would break existing code.
There is no reason to make it abstract. The purpose of abstract is to define a skeleton for subclasses that need to override the abstract methods. Also, if an abstract class with only concrete methods is possible, why make this one abstract?
I must agree with OP that a checked generic exception is one of the greatest stupidities Java allows you.
Davide, abstract classes are those which are not allowed to be instantiated. Having an abstract method is a sufficient, but by no means a necessary condition to prevent instantiation.
A class should be made abstract because it's conceptually the right thing to do, not because there are some abstract methods in it. And conceptually there are some very good arguments for making Exception abstract and none really for making it an instantiable class.
@Marko why must an Exception be subclassed? There is no reason other than adding some useful information to it. The most generic exception is Exception; otherwise the right choice would have been to define Exception as abstract and add a new concrete class GenericException to use in all the situations where there is nothing special to add to the exception being thrown. Making Exception non-abstract is simpler than adding a new class.
You don't seem to fully comprehend the nature of checked exceptions. Their intention is to warn the caller that a specific kind of trouble can result from the call. Everybody already knows that some kind of trouble can always arise. Only exceptions which have a chance to be meaningfully handled should be checked. What is the difference between an OutOfMemoryError and a generic Exception, in terms of the ability to meaningfully handle it?
@Marko OutOfMemoryError is not an Exception (and also not a generic Error, but a specific error). There are also Exceptions that need not be caught (RuntimeException). Who said that the intention is to warn of a specific kind of trouble? If the idea was to block the possibility of throwing generic Exceptions, then, as you said, Exception was badly designed, because it should have been abstract. The idea of the creator of Exception probably was to permit throwing a plain Exception.
You mostly misunderstood my comment. Exception is a checked exception. If you throw it, you must declare throws Exception. That makes no sense with respect to the intention behind the language feature of checked exceptions. No method should ever throws Exception except callback methods which are forced into it by other bad aspects of the design of checked exceptions.
The idea of the creator of Exception probably was to permit to throw an Exception. Yes, but that idea was wrong. It was a bad design choice as it semantically doesn't make sense. As Marko explained, only exceptions you can do something about should be checked exceptions, everything else should be a RuntimeException or an Error. And there's nothing you can do about something as broad as a plain Exception.
| common-pile/stackexchange_filtered |
Custom Vision: Out of upload quota
I have two projects on the F0 tier. This morning, neither of them will let me upload additional images:
150 training images uploaded; 0 remain
and
1162 training images uploaded; 0 remain
The documentation says the limit should be 5,000:
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/limits-and-quotas
Can you please add more details about the region?
There is a known issue with the F0 limit. We recently made a pretty big backend change and just deployed it, and this deployment caused the regression: the setting for the F0 project limit is wrong.
We will deploy the fix as soon as possible.
| common-pile/stackexchange_filtered |
causes of Python IOError: [Errno 13] Permission denied
When attempting to write a file, I can get this same error when any of following conditions applies:
The file exists and is marked read-only.
I don't have write permission for the folder and therefore cannot create a file.
The file is already open in another process.
Is there any way to separate out the above three causes? If not, then I would say that this is an extraordinarily poor design.
Perhaps check for file permissions and narrow down the possible cases.
You get the same Exception as your base problem is "You can't do this" but the details are contained in the exception instance, (or in a string on older pythons), so if you handle the problem as:
try:
outfile = open('somename.txt', 'w')
except IOError, e:
print "Not allowed", e
You will get a lot more information. (N.B. if you are running Python 3 you will need to add parentheses to the print call above.)
This is actually incredibly good design - your basic problem is that there is a problem - and you can find out more if you care to - and you can handle a given class of problem in a generalised manner.
Here's my code:
try:
OUT= open('test.txt', 'w')
OUT.write('junk')
OUT.close()
except IOError as ex:
print("Not allowed because %s." % ex)
If the file test.txt is marked as read-only, I get the following error message:
Not allowed because [Errno 13] Permission denied: 'test.txt'.
If test.txt is locked because I've opened it in an application such as Microsoft Word, I get exactly the same error message. So, I reiterate my position that this is a poor design.
I'm hoping that someone else might be able to add to this discussion.
@PhillipM.Feldman the problem there is that the operating system only tells applications "Permission Denied", not why permission is denied. You will find the same thing in just about all programming languages, not just Python, other than a few which simply fail silently, either crashing, or, worse, letting you "write" to a non-existent file and lose your work, or, in the case of MS Office, letting you open the file and make changes and then complaining when you try to save it.
Long article on this sort of thing on Windows at http://www.online-tech-tips.com/software-reviews/how-to-fix-access-is-denied-file-may-be-in-use-or-sharing-violation-errors-in-windows/
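For what it's worth, the exception instance does let you separate some classes of failure after the fact, because it carries errno, strerror, and filename. A hedged sketch (it won't distinguish a read-only file from a file locked by another process, since the OS may report both as EACCES, but it does split off other causes):

```python
# Sketch: the OSError/IOError object carries errno, which distinguishes
# "that's a directory" (EISDIR) from "path doesn't exist" (ENOENT) from
# "permission denied" (EACCES), even though the except clause is the same.
import errno
import os
import tempfile

d = tempfile.mkdtemp()
codes = []

try:
    open(d, 'w')  # opening a directory for writing
except OSError as e:
    codes.append(e.errno)

try:
    open(os.path.join(d, 'no', 'such', 'subdir', 'f.txt'), 'w')
except OSError as e:
    codes.append(e.errno)

print(codes == [errno.EISDIR, errno.ENOENT])  # True on POSIX systems
```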
| common-pile/stackexchange_filtered |
Can I use xpath (in lxml) to find the names of tags not known at the start?
I have some XML files I am trying to process. Unfortunately, I do not have full access to all of the different elements that constitute all of the possible trees,
so for example I might have a document that is structured
<typeOfBook>
<isMystery>True</isMystery>
</typeOfBook>
Easy enough, but when I look at the checklist that was used in the initial creation of these files I see categories under the section Type of Book such as Reference Spirituality. Given my experience with the Mystery I try to write an xpath expression
I build my xpath based on this
'//typeofbook/isreferencespirituality/text()'
then I discover that the actual tag they used was isrefspirit thus the correct xpath is
'//typeofbook/isrefspirit/text()'
Given the number of files and the number of possible categories, I am trying to learn whether there is an XPath "fishing" tool. I would like to run through all of my files once to find all tags after the type-of-book element, so I can correctly classify the text that is returned.
basically I would like to do something like
Run some query on all of my documents to find the * in the following line
'//typeofbook/*'
'//typeofbook/*' would return all the tags inside the typeofbook tags.
wow on the right track but I dropped it - thanks post as answer and I will credit you.
The * is used as a wildcard so just //typeofbook/*' will get all the child elements inside of the typeofbook tags.
There are a couple of other things for unknown nodes that you might find useful:
@* # any attribute
node() # any node at all
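A self-contained sketch of the wildcard idea, using the standard library's ElementTree instead of lxml (lxml's xpath() accepts the same expression; the tag names below are invented to mirror the question):

```python
# Discover the actual child tag names under typeofbook with a wildcard.
# ElementTree's findall needs the relative ".//" form rather than "//".
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<book>"
    "<typeofbook>"
    "<ismystery>True</ismystery>"
    "<isrefspirit>False</isrefspirit>"
    "</typeofbook>"
    "</book>"
)

tags = [child.tag for child in doc.findall(".//typeofbook/*")]
print(tags)  # whatever tag names were actually used in the file
```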
| common-pile/stackexchange_filtered |
Vorto Dashboard not displaying the device model
While running the Vorto dashboard I'm getting the following error:
JWT expired, getting new Token Wed Aug 26 2020 07:38:56 GMT+0100 (BST)... StatusCodeError: 401 -
{"status":401,"error":"gateway:authentication.failed","message":"Multiple authentication
mechanisms were applicable but none succeeded.","description":"For a successful authentication
see the following suggestions: { The JSON Web Token is not valid. },
{ Please provide a valid JWT in the authorization header prefixed with 'Bearer ' }."
The contents of config.json is as follows
{
"client_id": "xxxxxxxxxxx",
"client_secret": "xxxxxxxxxxxx",
"scope": "xxxxxxxxxx",
"intervalMS": 10000
}
I tried setting the contents of config.json as environment variables, but then I also get the same error. A screenshot of the web front end on accessing localhost:8080 is attached.
I tried the following links: Error running Vorto Dashboard for Bosch iot suite. But it's still not working. Please help me in solving this issue.
I think this issue has been formalized here.
@Mena Yeah, waiting for its solution... Is there any workaround for this bug?
not that I know of so far unfortunately. It looks more like a change on Things' side since the Vorto dashboard is not often maintained and nothing's changed in there for a while.
Note: by "things" I meant more like Suite Auth since what seems to be broken is the authentication process in use. I'm having a look at what the app does vs the most recent documentation, as soon as I can dig it out...
I think I have a clue why this is breaking. The token returned by the app's call to https://access.bosch-iot-suite.com/token differs from the one you'd get by, e.g. using your OAuth client on https://accounts.bosch-iot-suite.com/oauth2-clients/. Chiefly because it does not contain your scopes. That definitely seems to not work with things APIs. I'm going to throw the question around and fish for answers soon.
@Mena Thank you for the effort. Hoping it gets resolved soon.
I have discussed the matter internally to Bosch (disclaimer: I am an employee).
After discussing with the Bosch Suite Auth team, here is a summary of what happened.
The Suite Auth team recently transitioned from Keycloack to Hydra for their authentication technology
The relevant bit here is that previously, the scopes passed to the token request were ignored
The Vorto Dashboard app had been passing the wrong key for the scope parameter all along, when requesting a token, but it was ignored
Now that this parameter is relevant, the (incorrect) notation was not failing to produce a token, but obtained one that was not suitable to authorize with Bosch IoT Things, because it did not contain the appropriate scope
In turn, fixing this key produces a token that successfully authorizes with Bosch IoT Things
If you're in a hurry, you can check out this branch with the fix (it's literally an 8-character change set).
Otherwise, you can monitor this GitHub ticket for closure - I will close it when the fix is merged to the master branch of the Vorto Examples project.
Now merged to the master branch.
| common-pile/stackexchange_filtered |
Pandas - How do I add all my dataframes to a dictionary
I have multiple dataframes df1, df2, df3 etc.,
The dataframes have no relationships with each other.
Therefore, I am not looking to Append/Join/Merge them and put them as a single dataframe inside a dictionary.
I want to create a dictionary that encloses these dataframes.
dfs = {}
I want to add my df1, df2, df3 inside my dfs.
So that when I need that dataframe I can use something like dfs[df1] to call it back. How can I do this in pandas?
{'df'+str(e+1):i for e,i in enumerate(df_list)} ?
@anky_91 sneaking in with the better answer once again ;)
@rahlf23 I need yours. I don't have df1, df2 in real scenario. the key value pair sort of thing might suit me more. Can you please give yours as an answer. I can mark it. Thanks.
@anky_91 maybe dict(enumerate(l)) is good enough :-)
@Wen-Ben Nice one, how do we zip a custom name eg df here? just asking for future ref. :)
If your question is suited towards a key-value pair scenario then it may be conducive to store them as such:
dfs = {'df1': df_one, 'df2': df_two, 'df3': df_three}
Otherwise, I would recommend the answer given by @anky_91:
dfs = {'df'+str(idx+1): i for idx, i in enumerate(df_list)}
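Putting both suggestions together, here is a runnable sketch (the dataframe contents are made up for illustration):

```python
import pandas as pd

# Two unrelated dataframes, as in the question
df_one = pd.DataFrame({"a": [1, 2]})
df_two = pd.DataFrame({"b": [3, 4]})
df_list = [df_one, df_two]

# Auto-generated names 'df1', 'df2', ... keyed to each dataframe
dfs = {"df" + str(idx + 1): df for idx, df in enumerate(df_list)}

print(sorted(dfs))  # ['df1', 'df2']
print(dfs["df1"])
```

Retrieval is then just a dict lookup: `dfs["df1"]`.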
| common-pile/stackexchange_filtered |
How to upload an image in comments?
I have a small form in my blog detail view with name, last name, email and an image field. The first three work fine, but when I add the ImageField to the form, the form won't save from the page (it works from the admin page).
this is my views.py:
def campaign_detail_view(request, id):
    template_name = 'gngo/campaign-detail.html'
    campaign = get_object_or_404(Campaign, id=id)
    comments = CampaignForm.objects.filter(campaign=campaign).order_by('-id')
    form = FormCamp(request.POST)
    if request.method == 'POST':
        if form.is_valid():
            name = request.POST.get('name')
            last = request.POST.get('last')
            email = request.POST.get('email')
            comment = CampaignForm.objects.create(campaign=campaign, name=name, last=last, email=email)
            comment.save()
            return redirect('campaign-detail', id=id)
    else:
        form = FormCamp()
    context = {
        'campaign': campaign,
        'comments': comments,
        'form': form,
    }
    context["object"] = Campaign.objects.get(id=id)
    return render(request, template_name, context)
and this is my comment model:
class CampaignForm(models.Model):
    campaign = models.ForeignKey(Campaign, on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
    last = models.CharField(max_length=100)
    email = models.EmailField()
    image = models.ImageField(upload_to='images')
this is a non user form, so everyone can fill it. please help me understand how to add the ability to upload an image in this form
oh and this the form:
class FormCamp(forms.ModelForm):
    class Meta:
        model = CampaignForm
        fields = ('name', 'last', 'email', 'image',)
Thanks a lot for the answers and support!
Where is the image field?
imagine there is one
Without sharing what you have tried it's hard to give an answer...
There you go! I fixed it. But I don't know what to add in the views for the image.
Instead of using the form to validate and then manually extracting the fields again, you should use the save method of your ModelForm and pass request.FILES to your form when creating it.
And as the campaign is not an editable field, it shall be added after creating the object.
def campaign_detail_view(request, id):
    template_name = 'gngo/campaign-detail.html'
    campaign = get_object_or_404(Campaign, id=id)
    comments = CampaignForm.objects.filter(campaign=campaign).order_by('-id')
    if request.method == 'POST':
        form = FormCamp(request.POST, request.FILES)
        if form.is_valid():
            campaign_form = form.save(commit=False)
            campaign_form.campaign = campaign
            campaign_form.save()
            return redirect('campaign-detail', id=id)
    else:
        form = FormCamp()
    context = {
        'campaign': campaign,
        'comments': comments,
        'form': form,
    }
    context["object"] = Campaign.objects.get(id=id)
    return render(request, template_name, context)
https://docs.djangoproject.com/en/2.2/topics/forms/modelforms/#the-save-method
https://docs.djangoproject.com/en/2.2/topics/forms/#the-view
It gives me this error when I do this: NOT NULL constraint failed: gngo_campaignform.campaign_id
@sam I didn't notice that the campaign is not included in the form, please try this updated version.
the error is gone but it wont save the form... im so sorry about this please help me
@sam Sorry, I also forgot to add request.FILES at form submission... Edited to show it.
Try this:
def campaign_detail_view(request, id):
    template_name = 'gngo/campaign-detail.html'
    campaign = get_object_or_404(Campaign, id=id)
    comments = CampaignForm.objects.filter(campaign=campaign).order_by('-id')
    form = FormCamp(request.POST, request.FILES)
    if request.method == 'POST':
        if form.is_valid():
            comment = form.save(commit=False)
            comment = CampaignForm.objects.create(campaign=campaign, name=name, last=last, email=email)
            comment = request.FILES['image']
            comment.save()
            return redirect('campaign-detail', id=id)
    else:
        form = FormCamp()
    context = {
        'campaign': campaign,
        'comments': comments,
        'form': form,
    }
    context["object"] = Campaign.objects.get(id=id)
    return render(request, template_name, context)
Change this: class FormCamp(forms.ModelForm): to this: class FormCamp(forms.Form):
Don't forget to add enctype=multipart/form-data in your form in template.
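For reference, the template form mentioned here might look like the following minimal sketch (how the fields are rendered, e.g. with form.as_p, is up to you; without the enctype attribute request.FILES stays empty):

```html
<form method="post" enctype="multipart/form-data">
  {% csrf_token %}
  {{ form.as_p }}
  <button type="submit">Submit</button>
</form>
```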
it tells me that the name 'name' is not defined
@Sam.. Where is your error pointing at on template? Check it and let me know
ok it saves but it gives me this error about the image:
'TemporaryUploadedFile' object has no attribute 'save'
the name,last and email save correctly
@Sam.. I just updated my answer, check if it still show same error.
yeah it still shows the same 'TemporaryUploadedFile' object has no attribute 'save' error
I added these 3 lines because of the last error which was that the name is not defined:
name = request.POST.get('name')
last = request.POST.get('last')
email = request.POST.get('email')
then it gave me the TemporaryUploadedFile' object has no attribute 'save' error
@Sam.. Change this class FormCamp(forms.ModelForm): to class FormCamp(forms.Form): tell me what you got.
i dont think that this is the problem.
Let us continue this discussion in chat.
| common-pile/stackexchange_filtered |
Cast a shadow with HDR world lighting?
I have quickly created a bed in Blender and I'm using HDR lighting on the World node in Cycles, is it possible to create a shadow underneath the bed without adding more 'light' to the object?
So basically, the bed looks exactly like the below but it has shadows cast underneath / around it.
Place a plane under the bed, so there is some geometry to cast the shadow on. You can also check "shadow catcher" in the object info tab to get only the shadow and not the actual plane.
Thanks, is it also possible to make the world lighting not cast shadows and then use a light source to cast shadows instead?
| common-pile/stackexchange_filtered |
getElementsByTagName doesn't work
I have next simple part of code:
String test = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><TT_NET_Result><GUID>9145b1d3-4aa3-4797-b65f-9f5e00be1a30</GUID></TT_NET_Result>";
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(test)));
NodeList nl = doc.getDocumentElement().getElementsByTagName("TT_NET_Result");
The problem is that I don't get any result - nodelist variable "nl" is empty.
What could be wrong?
You're asking for elements under the document element, but TT_NET_Result is the document element. If you just call
NodeList nl = doc.getElementsByTagName("TT_NET_Result");
then I suspect you'll get the result you want.
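The distinction this answer describes can be checked quickly. Here is the same DOM behaviour reproduced with Python's xml.dom.minidom (a self-contained sketch in Python rather than Java; the semantics follow the DOM spec in both): getElementsByTagName on an element searches only its descendants, while on the document it can match the document element itself.

```python
from xml.dom.minidom import parseString

xml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<TT_NET_Result><GUID>9145b1d3-4aa3-4797-b65f-9f5e00be1a30</GUID></TT_NET_Result>')
doc = parseString(xml)

# Searching under the document element finds only descendants, never the root itself
under_root = doc.documentElement.getElementsByTagName("TT_NET_Result")
# Searching from the document also matches the document (root) element
from_doc = doc.getElementsByTagName("TT_NET_Result")

print(len(under_root), len(from_doc))  # 0 1
```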
Thank you. I totally overlooked the document element.
Here's another response to this old question. I hit a similar issue in my code today and I actually read/write XML all the time. For some reason I overlooked one major fact. If you want to use
NodeList elements = doc.getElementsByTagNameNS(namespace,elementName);
You need to parse your document with a factory that is namespace-aware.
private static DocumentBuilderFactory getFactory() {
    if (factory == null) {
        factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
    }
    return factory;
}
| common-pile/stackexchange_filtered |
What is "fundamental" in physics?
Sorry about the broad question. I'm still learning to frame the questions on Physics StackExchange. Currently researching the nature of interactions in philosophy.
My question is: When physicists use the term "fundamental", what do they mean?
In philosophy, most seem to claim that to be fundamental means to be the source of causal power. That is, to say that quarks are fundamental means that if we can find exactly how quarks interact, we can explain all phenomena in the world because everything is made up of quarks after all (the behavior of quarks is the primal cause for all phenomena). And philosophers also tend to handpick findings of physical sciences to support this claim.
I sense that this might be an incorrect picture and want to understand what fundamental means in physics to be able to clearly write why we might be using a mistaken notion of fundamental.
At present, for particle physics, this graph of links shows how "fundamental" is used:
Go to the link to open each ellipse.
We start with what are called fundamental forces, which are exchange forces with their accompanying coupling constants. These are the strong, electromagnetic, weak, and gravitational forces.
So fundamental is used as the simplifying (conceptually and mathematically) and organizing concept for the great plethora of data from the large number of elementary particles and interactions that have been observed.
A (slight) problem here is that the fundamental forces are believed to be expression of a grand unified force which is as-yet not fully described but would be the actual causal explanation of the forces. Much of the use of "fundamental" depends on the problem context rather than any strict thinking about causal powers and explanations.
@AndersSandberg Note that I do not claim causal powers; I say "as the simplifying (conceptually and mathematically) and organizing concept". It is the current understanding of fundamental, which may change, as may people's beliefs.
Also, there is nothing here about the question of the nature of spacetime, its possible emergence from, well, more "fundamental" physics. Gravity is only addressed by its supposedly coming quantification here depicted by the graviton ellipse, but this does not reflect the depth of the questions involved, nor the width in scope of the many different approaches to the problem of unifying QM and relativity.
@StéphaneRollandin I took the simple use of fundamental in courses of particle physics. Sure one can look for "more fundamental" this is just the taught status at this moment.
@AndersSandberg That comes closer to the problem I was trying to state! I am trying to find readings- people who might be talking about fundamentality, causality, of their relation to each other. With your comment, I will think a bit more about the presupposition of a grand unified force. Thank you!
@annav The mention of "simplifying (conceptually and mathematically) and organizing concept" helped me quite a bit. I feel it indicates an epistemological side to the situation which is often missed by the philosophy-community. Thank you again for pointing it out!
@StéphaneRollandin I have almost-zero background in physics- but I am learning more and more about the theories, about their scope and limitations- and most importantly, the philosophical implications. Do you feel that there could be a claim "This is it. This is the most fundamental particle/force. We are probably not going to find anything more fundamental."? It would be very helpful if you could tell me what you feel "fundamental" would mean here. Thank you again!
@SahanaRajan. This deserves a long and comprehensive discussion that I am afraid I cannot afford, and which would definitely be off-topic here (especially in the comment section). But there are plenty of fragments of that discussion already available here on PSE, if you search for "fundamental", "nature of/what is" (spacetime, energy, light, etc). You need to look for the material you are interested in. As for my own view, in short, I like very much the take of Stephen Talbott (not a physicist himself though) on reductionism: http://natureinstitute.org/txt/st/mqual/ch04.htm
@StéphaneRollandin Thank you for sharing the tags that I can use! Will look them up. Will also check up on Talbott. :)
| common-pile/stackexchange_filtered |
Why should we use DataTemplate.DataType
When creating a resource, we specify the DataType inside it:
<Window.Resources>
    <DataTemplate x:Key="StudentView"
                  DataType="this:StudentData">
        <TextBox Text="{Binding Path=StudentFirstName, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"
                 Grid.Row="1"
                 Grid.Column="2"
                 VerticalAlignment="Center" />
        <TextBox Text="{Binding Path=StudentGradePointAverage}"
                 Grid.Row="2"
                 Grid.Column="2"
                 VerticalAlignment="Center" />
    </DataTemplate>
</Window.Resources>
And while binding :
<ItemsControl ItemsSource="{Binding TheStudents}"
ItemTemplate="{StaticResource StudentView}">
So why are we using the DataType? Even if I remove the DataType, my sample runs fine. Is it restricting certain types that can be inside the DataTemplate?
But I tried binding one of the TextBox with a garbage value (Not present in the View-Model) and it works fine!
One advantage is that knowing the expected data context type allows for some static verification of whether binding paths are valid. It's also a documentation hint to future developers of the intention of the DataTemplate.
The DataType is for implicit application, if you drop the x:Key you do not need to reference it in the ItemsControl.ItemTemplate for example. Read the documentation.
This property is very similar to the TargetType property of the Style class. When you set this property to the data type without specifying an x:Key, the DataTemplate gets applied automatically to data objects of that type. Note that when you do that the x:Key is set implicitly. Therefore, if you assign this DataTemplate an x:Key value, you are overriding the implicit x:Key and the DataTemplate would not be applied automatically.
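As a sketch of the implicit form (note: for a CLR type the DataType is usually given with the x:Type markup extension; names here follow the question):

```xaml
<Window.Resources>
    <!-- No x:Key: applied automatically to every StudentData item -->
    <DataTemplate DataType="{x:Type this:StudentData}">
        <TextBox Text="{Binding StudentFirstName}" VerticalAlignment="Center" />
    </DataTemplate>
</Window.Resources>

<!-- No ItemTemplate needed; the template is picked by item type -->
<ItemsControl ItemsSource="{Binding TheStudents}" />
```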
| common-pile/stackexchange_filtered |
How do I run my XBOX XNA game without a network connection?
I need to demo my XBOX XNA game in college. The college doesn't allow this type of device to connect to the network. I deployed my game to the Xbox and it is sitting in the games list along with my other games. It runs fine with a network connection, but when it's offline it comes up with an error message saying it needs a connection to run the game.
This makes no sense, the game is deployed on the Xbox memory, it must be some security policy or something!
Is there any way around this? The demo is on monday!
Basically you're out of luck. XBLIG games, whether published or unpublished, require an Internet connection to run. I see two alternatives:
1) Use a mobile phone to set up your own mini-network and somehow give your Xbox an Internet connection.
2) Use a laptop to demo your game instead. XNA games run on PC. Xbox controllers can be plugged into a PC via USB (if your controllers are wireless you need to buy a receiver). Laptops can be plugged into external displays, if that's a needed feature (and most colleges should be well-equipped for plugging laptops into projectors already).
The laptop suggestion seems the be the best solution.
That's how I've seen them presented at college expos before. You can't really even tell the difference once it's on a TV screen and in game with the controllers.
Sorry for not getting back. Yeah, I got lucky: I was allowed to connect it directly for about 5 minutes and that did the job. I was aware of the laptop option; I don't have one, and I also wanted to show it running on the intended platform! The mobile phone idea was great though, I would have tried that! Thanks a lot guys!
| common-pile/stackexchange_filtered |
How to handle RESULT_CANCELED state of registerForActivityResult in ViewModel
I want to perform an operation on canceled state of registerForActivityResult in ViewModel from the MainActivity.
override val resultLauncher =
    registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
        val tag = "Result Launcher"
        if (result.resultCode == Activity.RESULT_OK) {
            val data: Intent? = result.data
            val resp = AuthorizationResponse.fromIntent(data!!)
            val ex = AuthorizationException.fromIntent(data)
            resp?.let { response ->
                response.authorizationCode?.let { code ->
                    appLoggerRepresentable.log.debug(tag, "Auth code: $code")
                }
                loginUtility.renewToken(response)
            }
            ex?.let {
                appLoggerRepresentable.log.debug(tag, "Auth Exception: $it")
            }
        } else {
            appLoggerRepresentable.log.debug(
                tag,
                "Result != Activity.RESULT_OK, need to handle failure."
            )
        }
    }
And this is my interface
interface LoginUtilityRepresentable {
    fun signOff(context: Context)
    fun presentLogin(activityToPresentFrom: ActivityWithLoginLaunchersRepresentable, callback: (Boolean, Throwable?) -> Unit)
    fun renewToken(resp: AuthorizationResponse)
}
And here is the code that handles the RESULT_OK state. Now I want to handle the RESULT_CANCELED status here and perform my own action.
weakReferenceActivity?.get()?.let { activity ->
    loginUtility.presentLogin(activity as MainActivity) { didLogin, _, exception ->
        if (didLogin) {
            loggerRepresentable.log.debug(
                mTAG,
                "Needs Login, user logged in with SSO, checking location data."
            )
            navHostController.navigate(Screens.MAINMENU.screenName)
            hideProgressBar()
        } else {
            showErrorMessage(exception)
            hideProgressBar()
        }
    }
}
| common-pile/stackexchange_filtered |
Class within structure
I want to define the object of a class within a structure and access function members of the class. Is it possible to achieve this?
With the following code I am getting a segmentation fault at ps_test->AttachToInput(2145);. I can't figure out the reason, everything looks correct to me:
class test
{
public:
    test();
    virtual ~test();
    int init_app(int argc, char* argv[]);
    virtual void AttachToInput(int TypeNumber, int DeviceNo = 0);
};

struct capture
{
    test h_app;
    gint port;
};

int main()
{
    struct capture h_cap;
    test *ps_test = &h_cap.h_app;
    ps_test->AttachToInput(2145);
}
What's the problem you've encountered?
You can't call AttachToInput because it's protected, so only derived classes can use it. (It doesn't work the same way as in Java.) Is that your question?
First of all, the only difference between a class and a struct in C++ is that a class' members are private by default and a struct's members are public by default. Compiler-generated ctors and dtors are visible in both cases - unless otherwise stated by the programmer (e.g. they move the default ctor into a private section). Otherwise construction and destruction of instances of user-defined types marked class wouldn't be possible without explicit, public declaration - thus defying the very purpose of compiler-generated functions.
So basically, what you do in your example is merely composition of two user defined types which is perfectly legal. When you create an instance of capture, an instance of test is created as well.
What you can't do is publicly access AttachToInput() from outside of test and derived types of test. You need to declare the function public in order for this line to compile:
h_cap.h_app.AttachToInput(); // error: member function of `test` is protected
On another, unrelated note (but I came across it so I mention it), your class test holds a raw pointer to char. Holding raw pointers is ok, if the lifetime of the entity that's being pointed is guaranteed to exceed the lifetime of the object that holds the pointer. Otherwise, it's very likely the object itself is responsible for the destruction of said entity. You need to be sure about who owns what and who's responsible for allocation and deallocation of stuff.
EDIT: It should be noted, that Alan Stokes proposed the same in the comment section while I wrote this answer. :)
EDIT2: Slight oversight, implicit default access is also assumed for base classes depending on how the derived class is declared. See What are the differences between struct and class in C++?.
Hello Thokra,
Thanks for your help. It clarified most of my doubts. I have updated my question now. Can you take a look at it and help me resolve this issue. Good day.
@RajuBabannavar: The segfault you get is not due to the call to AttachToInput(). I suggest you debug into the implementation of the function and make sure it's not accessing invalid addresses. With a stub implementation of AttachToInput(), your example works fine - as it should.
| common-pile/stackexchange_filtered |
Editing files locally with Vagrant
Is it possible to edit files using non command-line editors, such as Notepad++, with Vagrant?
If so, how would it be done?
If your editor can do that, it is possible to edit code on your box via SSH:
:e scp://user@host//some/directory/ in [G]Vim,
use the built-in SSH plugin in Notepad++.
But your box uses the directory where you issued $ vagrant up as a shared directory: put your files there and you'll be able to access them with CLI editors from your box and with your host's GUI editors.
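For completeness, that shared directory is configurable in the Vagrantfile. A minimal sketch (the box name is just an example):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # example box
  # Host project dir (where you ran `vagrant up`) <-> /vagrant on the guest
  config.vm.synced_folder ".", "/vagrant"
end
```

So editing a file in the project directory with Notepad++ on the host changes the same file the guest sees under /vagrant.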
So I would have to move every file back and forth with every edit?
No, that directory is where you put your project and where you work.
| common-pile/stackexchange_filtered |
Jasper - No secret found for "XXXXX" key in "net.sf.jasperreports.data.adapter" category
We use Jasper with Java 1.7 for report generation. It works fine with Java 1.7. We get the exception below after updating to Java 1.8.
Issue
No secret found for "XXXXXX" key in "net.sf.jasperreports.data.adapter" category.
Here "XXXXXX" is my database password.
My Database configuration is:
<?xml version="1.0" encoding="UTF-8" ?>
<jdbcDataAdapter class="net.sf.jasperreports.data.jdbc.JdbcDataAdapterImpl">
  <name>DataAdapter</name>
  <driver>org.postgresql.Driver</driver>
  <username>XXXX_user</username>
  <password>XXXXX</password>
  <savePassword>true</savePassword>
  <url>jdbc:postgresql://XXXXXXX:5432/XXXXXdb</url>
  <database></database>
  <serverAddress></serverAddress>
</jdbcDataAdapter>
I'm unable to figure out a solution to the issue. Can anybody help fix this issue?
Your Java version should have nothing to do with your issue. Did you also upgrade the JasperReports library?
Yes, I did: I updated JasperReports from 6.4.0 to 6.5.1.
So you're saying that in JR 6.4.0 your report with that adapter was working and that in JR 6.5.1 it does not work anymore?
Yes. Report does not work in JR 6.5.1 and Java 1.8.
But did it work in JR 6.4.0?
No, it did not work in JR 6.4.0 with Java 1.8, but it works in JR 6.4.0 with Java 1.7.
Your issue should not be Java version related. It has to do with JasperReports configuration for handling passwords in data adapter files.
Passwords stored in plain text in data adapter files may raise security concerns.
That's why JasperReports relies on a net.sf.jasperreports.util.SecretsProvider implementation for resolving passwords. This SecretsProvider needs to be plugged in through the extension mechanism. No provider is plugged in by default.
For production use, a proper SecretsProvider that decrypts passwords needs to be implemented and registered on your side.
For basic testing purposes you can register the built-in extension net.sf.jasperreports.util.IdentitySecretsProviderExtensionsRegistryFactory by adding the following configuration to a jasperreports_extension.properties file in the root of your classpath:
net.sf.jasperreports.extension.registry.factory.identity.secrets.provider=net.sf.jasperreports.util.IdentitySecretsProviderExtensionsRegistryFactory
net.sf.jasperreports.extension.identity.secrets.category.da=net.sf.jasperreports.data.adapter
| common-pile/stackexchange_filtered |
Rails searches the wrong database for model data when using multiple databases
I am building an application in Rails which requires me to access multiple SQL databases. The resources I found online suggested I use ActiveRecord::Base.establish_connection; however, this seems to interfere with my site's models. After connecting to a different database, the next time I run <model>.<command> it gives me Mysql2::Error: Table '<Database I had to access through establish_connection>.<Model's table>' doesn't exist, e.g. Mysql2::Error: Table 'test.words' doesn't exist. This means Rails looks for the table associated with its models in the database I accessed through establish_connection instead of the site's development database.
Steps to reproduce error:
Here are some steps I found which seem to reproduce the problem:
First, I create a new rails app:
rails new sqldbtest -d mysql
cd sqldbtest
Then I set the config file:
default: &default
  adapter: mysql2
  encoding: utf8mb4
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  username: root
  password: <omitted>
  host: localhost
Then create a controller, model, and some data in the mysql database:
rails generate controller test test
rails generate model Word name:string txt:text
rails db:create
rails db:migrate
rails c
cat=Word.new
cat.name="Cat"
cat.txt="Cat."
cat.save
exit
mysql -u root -p # I already have a database called "test".
use test;
create table extst (id int primary key, name varchar(8), txt text);
insert into extst (id,name,txt) values (0,"Bob","Bob.");
quit
I then made the controller and the view:
class TestController < ApplicationController
  def test
    itemOne = Word.find_by(name: "Cat")
    @textOne = itemOne.txt
    con = ActiveRecord::Base.establish_connection(adapter: 'mysql2', encoding: 'utf8mb4', username: 'root', password: <omitted>, host: 'localhost', database: 'test').connection
    @textTwo = con.execute('select txt from extst where name="Bob"').to_a[0][0]
  end
end
I wrote this in the view:
<%= @textOne %><br>
<%= @textTwo %>
added 'root "test#test"' to config/routes.rb
rails s
Result:
When I load the page, it shows "Cat." and "Bob." on separate lines as expected, but when I refresh, It shows the error as described above.
I have tried adding con.close to the controller, but this does not work.
Since this error comes from making a connection to another database, adding ActiveRecord::Base.establish_connection(adapter: 'mysql2', encoding: 'utf8mb4', username: 'root', password: <omitted>, host: 'localhost', database: 'sqldbtest_development').connection to the controller after @textTwo=con.execute('select txt from extst where name="Bob"').to_a[0][0] stops the error from happening. However, I don't know if this is best practice or whether it has side effects. For example, the database name would need to change when moving to test or production.
This solution is most likely just a temporary workaround.
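One common alternative, sketched here rather than taken from the thread: give the external database its own named entry in config/database.yml (the name "external" below is an assumption) and point a dedicated abstract base class at it, so ActiveRecord::Base and the app's regular models keep their own connection untouched:

```ruby
# Models that live in the external database inherit from an abstract base
# class that owns its own connection pool.
class ExternalBase < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :external   # named configuration from database.yml
end

class Extst < ExternalBase
  self.table_name = "extst"
end
```

The controller can then query `Extst.find_by(name: "Bob")&.txt` without re-pointing ActiveRecord::Base, and the per-environment database name stays in database.yml.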
| common-pile/stackexchange_filtered |
PHP - preg_replace_callback function
I have a preg_replace_callback function, and when I open my webpage I get the following warning:
Warning: preg_replace_callback(): Requires argument 2,
'stripslashes(strstr("\2\5","rel=\class=") ? "\1" :
This is my function:
function ace_colorbox_replace($string) {
    $pattern = '/(<a(.*?)href="([^"]*.)'.IMAGE_FILETYPE.'"(.*?)><img)/ie';
    $result = 'stripslashes(strstr("\2\5","rel=\class=") ? "\1" : "<a\2href=\"\3\4\"\5 rel=\"colorbox\" class=\"colorbox\"><img")';
    return preg_replace_callback($pattern, $callback, $string);
}
Can someone please help me?
Thanks
Br Robert
The second argument of preg_replace_callback must be a function, not a variable.
Thanks and how would you write the whole function then?
Replace $callback by function($m) { body of the function here }. See the doc for examples
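Putting that advice together, a corrected version might look like the sketch below. Hedged assumptions: the original replacement string references \5, so the file extension is assumed to have been captured as its own group, and IMAGE_FILETYPE is assumed to be an extension pattern such as 'jpg|png|gif'.

```php
function ace_colorbox_replace($string) {
    // Drop the deprecated /e modifier; the extension is captured as group 4
    $pattern = '/(<a(.*?)href="([^"]*\.)(' . IMAGE_FILETYPE . ')"(.*?)><img)/i';
    return preg_replace_callback($pattern, function ($m) {
        // Leave the anchor untouched if rel= or class= is already present
        if (strpos($m[2] . $m[5], 'rel=') !== false
                || strpos($m[2] . $m[5], 'class=') !== false) {
            return $m[1];
        }
        return '<a' . $m[2] . 'href="' . $m[3] . $m[4] . '"' . $m[5]
             . ' rel="colorbox" class="colorbox"><img';
    }, $string);
}
```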
The function you wrote is not complete
in preg_replace_callback($pattern, $callback, $string); you have $pattern defined all right but $callback variable is not defined
You have to define it or use a constant
Also you have the $result variable which is not used so it's unnecessary
This should be a comment.
Oh yes sry, this was a copying mistake
function ace_colorbox_replace($string) {
    $pattern = '/(<a(.*?)href="([^"]*.)'.IMAGE_FILETYPE.'"(.*?)><img)/ie';
    $callback = 'stripslashes(strstr("\2\5","rel=\class=") ? "\1" : "<a\2href=\"\3\4\"\5 rel=\"colorbox\" class=\"colorbox\"><img")';
    return preg_replace_callback($pattern, $callback, $string);
}
Please, edit your question if you want to add some info.
| common-pile/stackexchange_filtered |
A Strange Sequence
This puzzle is taken from a French bestseller I read years ago. The solution is much simpler than it looks...
Find the following line in this sequence :
1
1 1
2 1
1 2 1 1
1 1 1 2 2 1
3 1 2 2 1 1
An old question, but here's a variant: what starting sequence can go the most steps before reaching a length of ten?
Say it out loud, and count the numbers!
So the steps would be:
One 1; two 1s; one 2, one 1; and so on. The next line would be 1 3 1 1 2 2 2 1.
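The read-it-aloud rule is easy to script; here is a small Python sketch where groupby collects each run of equal digits into a (count, digit) pair:

```python
from itertools import groupby

def look_and_say(line):
    """Read the digits aloud: each run of equal digits becomes (count, digit)."""
    return [x for digit, run in groupby(line) for x in (len(list(run)), digit)]

# Reproduce the six lines from the puzzle
seq = [1]
for _ in range(6):
    print(" ".join(map(str, seq)))
    seq = look_and_say(seq)
```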
| common-pile/stackexchange_filtered |
Google Colab recent error: OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory
I was successfully running the popular PyTorch Geometric graph classification example Google Colab notebook even last week: https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb?usp=sharing
But today (Mon 10/11/2021) running the same notebook throws the error "OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory". This is probably coming from executing "from torch_geometric.datasets import TUDataset". (I already made sure the runtime type is set to GPU.)
| common-pile/stackexchange_filtered |
How to read the file line by line having matching string into list
I want to read a given file line by line, print the lines with a matching string, and append them to a list.
Hello, welcome to Stack Overflow. Please read https://stackoverflow.com/help/how-to-ask and https://stackoverflow.com/help/mcve
with open(file_name) as f: \n your_list=[line for line in f if your_word in line]
Reading a file line by line in an efficient way can be done using the "with" statement:
with open(filename, 'r') as f:
    for line in f:
        if line in wordlist:
            pass  # do something with the matching line
The with statement ensures that the opened resources (the file in this case) will be closed.
Iterating over lines with a for loop is readable and simple.
if line in wordlist: likely is wrong since that sounds like comparing a string of words to a list of words. More likely to be something like if any(word in line for word in wordlist): if there is a list words to compare against a string of words. Or you could use set arithmetic to compare a set of the words in line to a set of the words to look for...
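A self-contained sketch of that any(...) variant (the word list and file contents below are made up for the demonstration):

```python
def matching_lines(filename, wordlist):
    """Return the lines of `filename` that contain any word from `wordlist`."""
    with open(filename) as f:
        return [line.rstrip("\n") for line in f
                if any(word in line for word in wordlist)]

# Example usage with a throwaway file:
with open("demo.txt", "w") as f:
    f.write("all good\nerror: disk full\nwarning: low memory\n")

print(matching_lines("demo.txt", ["error", "warning"]))
# → ['error: disk full', 'warning: low memory']
```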
import os
import re

content = []
for path, dirs, files in os.walk(dir_name):
    for file in files:
        name = os.path.join(path, file)
        print(name)
        with open(name, "r") as data:
            for line in data:
                if re.match("(.*)TOTAL(.*)", line):
                    content.append(line)
print(content)
| common-pile/stackexchange_filtered |
SAP HANA : Input parameter filter value '*' issue
I want to introduce a problem that we are facing in our project regarding Input parameter filtering issue.
Problem:
We have 5 input parameters in our SAP HANA view with default value ‘*’ to have the possibilities to select all values.
Now when we want to select data from this HANA view into our table function using SQLScript, we pass input parameter values using the "PLACEHOLDER" syntax, but with this syntax '*' is not working (it returns no results).
More importantly, if I hard-code the value as '*', it shows the data correctly, but if I use a variable (that holds the '*' value), it shows me no data.
For example:
For plant (WERKS) filter, if I put constant ‘*’, it is giving me all data
For plant (WERKS) filter, if I use a variable (ZIN_WERKS) that has the ‘*’ value passed from the input screen of the final view, it is giving me no data.
I checked that the variable is correctly filled with the ‘*’ value, but there is still no data, which we are not able to understand.
An additional question: do we always give the default value ‘*’ for input parameters? Because if it is blank or empty, it always filters on blank values, and the value help can also not be generated.
Have you ever encountered these issues? They seem like very basic points in SAP HANA…
We would really appreciate any help/hint regarding these issues…
This is indeed a question that has been asked already. The point here is that you seem to want to mimic the selection behaviour from SAP Netweaver based applications in your HANA models.
One difference to consider here is that the placeholder character on SQL databases like HANA is not * but %.
Also, the placeholder search only works when your model uses the LIKE comparison, but not with = (equal) or >, <, or any other combination of range queries.
In short: if you want to have this specific behaviour just like in SAP Netweaver, you will have to build your own scripted view and explicitly test for which parameters had been provided and which are "INITIAL".
One useful feature for this scenario is the APPLY_FILTER() function in SQLScript, which allows you to apply dynamic filters in information models.
More on that can be found in the modelling guide.
How do I associate a payment transaction with a user while consuming the MPESA Express (STK Push) API v1
I am using the M-PESA Express, also called the STK Push API v1, to receive payments from my clients.
To identify the paying customer, I look for the PhoneNumber value in the results body of the response when the payment is successful. This way I can associate a payment with a customer.
However, now that we'll be having data minimisation on the M-Pesa API, the PhoneNumber will not be displayed fully, and I am facing the challenge of how to associate a payment transaction with a client. I have tried setting the AccountReference in the request as shown below, but I can't get this AccountReference back in the response results body. I was thinking of setting a unique AccountReference for each customer.
The data I am sending to the endpoint https://sandbox.safaricom.co.ke/mpesa/stkpush/v1/processrequest
$postData = json_encode([
"BusinessShortCode" => Yii::$app->params['businessShortCode'],
"Password" => $this->createMpesaRequestsPassword($timestamp),
"Timestamp" => $timestamp,
"TransactionType" => $transactionType,
"Amount" => $amount,
"PartyA" => $phoneNumber,
"PartyB" => Yii::$app->params['businessShortCode'],
"PhoneNumber" => $phoneNumber,
"CallBackURL" => $callBackUrl,
"AccountReference" => $phoneNumber,
"TransactionDesc" => $transactionDesc
]);
On my callback url I get this response:
{
"Body": {
"stkCallback": {
"MerchantRequestID": "9183-42212949-1",
"CheckoutRequestID": "ws_CO_23072022133552132714385056",
"ResultCode": 0,
"ResultDesc": "The service request is processed successfully.",
"CallbackMetadata": {
"Item": [
{
"Name": "Amount",
"Value": 1
},
{
"Name": "MpesaReceiptNumber",
"Value": "QGN2XSH6MQ"
},
{
"Name": "Balance"
},
{
"Name": "TransactionDate",
"Value":<PHONE_NUMBER>3617
},
{
"Name": "PhoneNumber",
"Value":<PHONE_NUMBER>11
}
]
}
}
}
}
How do I know which transaction belongs to which user?
This may be too late to respond, but maybe it helps whoever is looking for a similar answer.
First, you initiate the STK Push. This can be done using a submit button.
<?php
if(isset($_POST['mpesastk'])){
$app_id = mysqli_real_escape_string($conx, $_POST['app_id']);// Value to be updated in a different table during the mpesa callback url process.
$amount = '1'; //Amount to be paid
$phone = mysqli_real_escape_string($conx, $_POST['pay_phone']); //Phone Number
$config = array(
"env" => "sandbox",
"BusinessShortCode"=> "174379",
"key" => "", //Enter your consumer key here
"secret" => "", //Enter your consumer secret here
"username" => "apitest",
"TransactionType" => "CustomerPayBillOnline",
"passkey" => "bfb279f9aa9bdbcf158e97dd71a467cd2e0c893059b10f78e6b72ada1ed2c919", //Enter your passkey here
"CallBackURL" => "", //Must have SSL When using localhost, Use Ngrok to forward the response to your Localhost
"AccountReference" => "Name to appear.",
"TransactionDesc" => "Payment of X Fee for ",
);
$phone = (substr($phone, 0, 1) == "+") ? str_replace("+", "", $phone) : $phone;
$phone = (substr($phone, 0, 1) == "0") ? preg_replace("/^0/", "254", $phone) : $phone;
$phone = (substr($phone, 0, 1) == "7") ? "254{$phone}" : $phone;
$access_token = ($config['env'] == "live") ? "https://api.safaricom.co.ke/oauth/v1/generate?grant_type=client_credentials" : "https://sandbox.safaricom.co.ke/oauth/v1/generate?grant_type=client_credentials";
//$access_token = "https://sandbox.safaricom.co.ke/oauth/v1/generate?grant_type=client_credentials";
$credentials = base64_encode($config['key'] . ':' . $config['secret']);
$ch = curl_init($access_token);
curl_setopt($ch, CURLOPT_HTTPHEADER, ["Authorization: Basic " . $credentials]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
curl_close($ch);
$result = json_decode($response);
$token = isset($result->{'access_token'}) ? $result->{'access_token'} : "N/A";
$timestamp = date("YmdHis");
$password = base64_encode($config['BusinessShortCode'] . "" . $config['passkey'] ."". $timestamp);
$curl_post_data = array(
"BusinessShortCode" => $config['BusinessShortCode'],
"Password" => $password,
"Timestamp" => $timestamp,
"TransactionType" => $config['TransactionType'],
"Amount" => $amount,
"PartyA" => $phone,
"PartyB" => $config['BusinessShortCode'],
"PhoneNumber" => $phone,
"CallBackURL" => $config['CallBackURL'],
"AccountReference" => $config['AccountReference'],
"TransactionDesc" => $config['TransactionDesc'],
);
$data_string = json_encode($curl_post_data);
//$endpoint = "https://sandbox.safaricom.co.ke/mpesa/stkpush/v1/processrequest";
$endpoint = ($config['env'] == "live") ? "https://api.safaricom.co.ke/mpesa/stkpush/v1/processrequest" : "https://sandbox.safaricom.co.ke/mpesa/stkpush/v1/processrequest";
$ch = curl_init($endpoint );
curl_setopt($ch, CURLOPT_HTTPHEADER, [
'Authorization: Bearer '.$token,
'Content-Type: application/json'
]);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
curl_close($ch);
$result = json_decode(json_encode(json_decode($response)), true);
if(!preg_match('/^[0-9]{10}+$/', $phone) && array_key_exists('errorMessage', $result)){
$errors['phone'] = $result["errorMessage"];
}
if($result['ResponseCode'] === "0"){
$MerchantRequestID = $result['MerchantRequestID'];
$CheckoutRequestID = $result['CheckoutRequestID'];
$sql = "INSERT INTO mpesastk(mpesastk_appid,mpesastk_phone,mpesastk_amount,CheckoutRequestID,MerchantRequestID)
VALUES('$app_id','$phone','$amount','$CheckoutRequestID','$MerchantRequestID')";
if ($conx->query($sql) === TRUE){
//Response to user
$err_color = "success";
$err_title = "SUCCESS!";
$err_message = '<h4><font color="#fff">Payment of X fee was sent to your phone.</font></h4>';
header("refresh:15;");
}else{
$errors['database'] = "Unable to initiate your order: ".$conx->error;
foreach($errors as $error) {
$err_message .= $error . '<br />';
}
}
}else{
$err_color = "error";
$err_title = "ERROR!";
$err_message = '<h4><font color="#fff">Failed to send Payment Request of X fee to your phone.</font></h4>';
header("refresh:3;");
}
}
?>
Note that
$CheckoutRequestID = $result['CheckoutRequestID'];
$app_id = mysqli_real_escape_string($conx, $_POST['app_id']);
were inserted. These values will be used by the callback URL process to update the database accordingly.
Now the CallBack URL
<?php
echo '<a href="../../">Home<br /></a>';
$content = file_get_contents('php://input'); //Receives the JSON Result from safaricom
$res = json_decode($content, true); //Convert the json to an array
$dataToLog = array(
date("Y-m-d H:i:s"), //Date and time
" MerchantRequestID: ".$res['Body']['stkCallback']['MerchantRequestID'],
" CheckoutRequestID: ".$res['Body']['stkCallback']['CheckoutRequestID'],
" ResultCode: ".$res['Body']['stkCallback']['ResultCode'],
" ResultDesc: ".$res['Body']['stkCallback']['ResultDesc'],
" MpesaReceiptNumber: ".$res['Body']['stkCallback']['CallbackMetadata']['Item'][1]['Value'],
);
$data = implode(" - ", $dataToLog);
$data .= PHP_EOL;
file_put_contents('mpesastk_log', $data, FILE_APPEND); //Create a txt file and log the results to our log file
//Saves the result to the database
//Change the values accordingly to your system setup
$conn=new PDO("mysql:host=localhost;dbname=dbname","root","password");
$conn->setAttribute(PDO::ATTR_ERRMODE,PDO::ERRMODE_EXCEPTION);
$stmt = $conn->query("SELECT * FROM mpesastk ORDER BY mpesastk_id DESC LIMIT 1");
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
foreach($rows as $row){
$mpesastk_id = $row['mpesastk_id'];
$app_id = $row['mpesastk_appid']; //remember this, it will be used below
$ResultCode = $res['Body']['stkCallback']['ResultCode'];
$ResultDesc = $res['Body']['stkCallback']['ResultDesc'];
$MpesaReceiptNumber = $res['Body']['stkCallback']['CallbackMetadata']['Item'][1]['Value'];
if($res['Body']['stkCallback']['ResultCode'] == '1032'){//if transaction canceled
$sql = $conn->query("UPDATE mpesastk SET mpesastk_status = '0',ResultCode = '$ResultCode',
ResultDesc='$ResultDesc',MpesaReceiptNumber='$MpesaReceiptNumber' WHERE mpesastk_id = $mpesastk_id");
$rs = $sql->execute();
}else{//if transaction was paid
$sql = $conn->query("UPDATE mpesastk SET mpesastk_status = '1',ResultCode = '$ResultCode',
ResultDesc='$ResultDesc',MpesaReceiptNumber='$MpesaReceiptNumber' WHERE mpesastk_id = $mpesastk_id");
$rs = $sql->execute();
//Now update a different table in the database
// Note the $app_id as set in the submit :)
$asql = $conn->query("UPDATE tblX SET tblX_status = '3' WHERE tblX_id = $app_id");
$ars = $asql->execute();
}
if($rs){
file_put_contents('error_log', "Records Inserted", FILE_APPEND);
}else{
file_put_contents('error_log', "Failed to insert Records", FILE_APPEND);
}
}
?>
Happy Coding
M-Pesa sends you two responses with the same CheckoutRequestID. The immediate one can be saved in a cache of your choice, attaching your product/user to it, and when you receive the payment completion, use it to match the cached data and update the database. Add security layers and whitelist the given IPs from Safaricom to avoid getting fake callback responses from elsewhere. In development you may not have control of IPs since you are using services like Ngrok or smee.io.
Other methods may be hashing your own values and setting them as part of the callback URL, so you have a unique callback URL for each transaction for matching users or products.
Is there a word for this semi-continuity property?
Let $f \colon \mathbb{R} \to \mathbb{R}$ be a function. Recall that $f$ is upper-semicontinuous if
$$ \text{for any } x \in \mathbb{R}, \quad f(x) \geq \inf_{\varepsilon > 0} \sup_{|y-x| < \varepsilon} f(y). \tag{$\dagger$}$$
Note that if $g \colon \mathbb{R} \to \mathbb{R}$ is continuous and $f$ is obtained from $g$ by changing the value of $g$ at a single point, that is
$$
f(x) =
\begin{cases}
g(x) &\text{ if } x \neq x_0, \\
y_0 &\text{ if } x = x_0,
\end{cases}
$$
where $y_0 > g(x_0)$, then $f$ is upper-semicontinuous. This strikes me as a slightly undesirable behaviour, since an upper-semicontinuous function cannot be reconstructed from its values at $\mathbb{R} \setminus \{x_0\}$. Hence, I'm curious if there is an established name for the following stronger property:
$$ \text{for any } x \in \mathbb{R}, \quad f(x) = \inf_{\varepsilon > 0} \sup_{0<|y-x| < \varepsilon} f(y). \tag{$\ddagger$}$$
Any references would be highly appreciated. Note that I'm primarily interested in terminology that is already established, the task of coming up with a new name would be inherently opinion-based.
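To make the motivation concrete: for the modified function $f$ above, the punctured supremum in $(\ddagger)$ never sees the value at $x_0$, so by continuity of $g$,
$$ \inf_{\varepsilon > 0} \sup_{0<|y-x_0| < \varepsilon} f(y) = \inf_{\varepsilon > 0} \sup_{0<|y-x_0| < \varepsilon} g(y) = g(x_0) < y_0 = f(x_0), $$
and hence $f$ satisfies $(\dagger)$ but fails $(\ddagger)$ at $x_0$.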
Do you really want equality in $(\ddagger)$?
@KaviRamaMurthy - Yes, I think so.
Just an observation: continuous functions satisfy this property, but the reverse implication is not true, since e.g. the function $f(x) = \sin(\frac1x)$ for $x\neq0$ and $f(0) = 1$ also has this property.
@DejanGovc Thanks for the interesting example! The canonical examples I had in mind are functions with jump discontinuities. Another non-trivial example is the characteristic function of anything that's closed and has no isolated points, such as the Cantor set. One can also construct a function that is strictly positive on all rationals and 0 elsewhere which satisfies the condition.
Nexus 7 Android Java VideoView, Can't play this video
I tried to stream a video on my Nexus 7 (Android 4.2) with a VideoView, but my Nexus 7 displays "Can't play this video" when I start the app. I hope you can help.
My source code:
package com.test.prog;
import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.widget.Button;
import android.widget.LinearLayout;
import android.widget.VideoView;
public class MainActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Uri viduri=Uri.parse("http://www.law.duke.edu/cspd/contest/finalists/viewentry.php?file=docandyou");
VideoView video=(VideoView)findViewById(R.id.videoview);
video.setVideoURI(viduri);
}
}
and the layout
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="horizontal"
tools:context=".MainActivity" >
<VideoView
android:id="@+id/videoview"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_alignParentLeft="true"
/>
</RelativeLayout>
LogCat says:
11-19 17:46:08.504: D/VideoView(16804): Error: 1,0
11-19 17:46:19.504: D/MediaPlayer(16804): Couldn't open file on client side, trying server side
11-19 17:46:19.504: E/MediaPlayer(16804): Unable to to create media player
regards
christian
Android won't play everything. Check http://developer.android.com/guide/appendix/media-formats.html
I tried many videos of different types, but none works on my Nexus 7. Do you have a working video sample for VideoView?
I have the same issue on the same device. I've added an onPrepare listener to call VideoView.start and I get audio but no video. Did you ever manage to solve this?
Call `video.start()`
after you set `video.setVideoURI(viduri);`
I can run video on my Nexus 7 (version 4.1.2) with the same code as yours.
Battery won't hold a charge. 2001 Ford Mustang
2001 Ford Mustang V6. Bought brand new battery in Nov 2013. I tested the core of the alternator at the same time as buying the battery and it tested good. I also switched out a relay in the engine to see if that would help.
I can jump-start the car, but it still will not hold the charge after turning the car off. Any ideas?
How long does it take for it to drain (or lose charge)?
Maybe it's a defective battery. You may want to go have it checked. Also, sometimes batteries sit on the shelf awhile before they're sold. Was it properly charged before you put it in the car?
I just recently had a new battery go bad. It didn't seem to hold charge real well, car would barely start after sitting for 2 weeks (never had a problem leaving it for even 2 months before). Just a few days ago it failed completely, cooking off acid (smells horrible) and overheating. The battery was only 9 months old...
Emailing attachments with SMTP Relay
I currently have one server relaying emails to one main mail server, but when I try to mail something with an attachment, it comes up as:
Content-type:text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Email message
Content-Type: application/octet-stream; name="test.pdf"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="test.pdf"
Is there any way to fix this?
Try this:
<?php
//define the receiver of the email
$to =<EMAIL_ADDRESS>;
//define the subject of the email
$subject = 'Test email with attachment';
//create a boundary string. It must be unique
//so we use the MD5 algorithm to generate a random hash
$random_hash = md5(date('r', time()));
//define the headers we want passed. Note that they are separated with \r\n
$headers = "From:<EMAIL_ADDRESS>\r\nReply-To:<EMAIL_ADDRESS>";
//add boundary string and mime type specification
$headers .= "\r\nContent-Type: multipart/mixed; boundary=\"PHP-mixed-".$random_hash."\"";
//read the atachment file contents into a string,
//encode it with MIME base64,
//and split it into smaller chunks
$attachment = chunk_split(base64_encode(file_get_contents('attachment.zip')));
//define the body of the message.
ob_start(); //Turn on output buffering
?>
--PHP-mixed-<?php echo $random_hash; ?>
Content-Type: multipart/alternative; boundary="PHP-alt-<?php echo $random_hash; ?>"
--PHP-alt-<?php echo $random_hash; ?>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
Hello World!!!
This is simple text email message.
--PHP-alt-<?php echo $random_hash; ?>
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
<h2>Hello World!</h2>
<p>This is something with <b>HTML</b> formatting.</p>
--PHP-alt-<?php echo $random_hash; ?>--
--PHP-mixed-<?php echo $random_hash; ?>
Content-Type: application/zip; name="attachment.zip"
Content-Transfer-Encoding: base64
Content-Disposition: attachment
<?php echo $attachment; ?>
--PHP-mixed-<?php echo $random_hash; ?>--
<?php
//copy current buffer contents into $message variable and delete current output buffer
$message = ob_get_clean();
//send the email
$mail_sent = @mail( $to, $subject, $message, $headers );
//if the message is sent successfully print "Mail sent". Otherwise print "Mail failed"
echo $mail_sent ? "Mail sent" : "Mail failed";
?>
Rolling your own MIME encoding in PHP, as lemirage suggests, will work. But it may be simpler to use PHPMailer. PHPMailer is easy to use for sending messages with attachments, and simple to set up - just a few PHP files to copy to your server. See https://github.com/PHPMailer/PHPMailer
multiplier to the width and height in objective-c
How can I give a multiplier to the width and height of a UIImageView?
I need to add constraints to the image with a size of 200×400 at 1x and 400×1600 at 2x. I have set the constraints on the 12.9-inch iPad.
Please check this link . https://stackoverflow.com/questions/35455194/calculating-aspect-ratio-for-all-sizes-of-iphone/35455631#35455631
pls go through : https://developer.apple.com/library/content/documentation/UserExperience/Conceptual/AutolayoutPG/AnatomyofaConstraint.html
Where does x come from?
Do it programmatically.
Create an IBOutlet for your constraints.
Update your constraint accordingly.
yourConstraint.constant = 200*400-1x
This constraint gives the image view 0.7 of the width of the view:
[self.view addConstraint:[NSLayoutConstraint constraintWithItem:self.imageView
attribute:NSLayoutAttributeWidth
relatedBy:NSLayoutRelationEqual
toItem:self.view
attribute: NSLayoutAttributeWidth
multiplier:0.7
constant:0]];
How to add a string to both sides of a found substring
I have an input string:
10 birds have found 5 pears and 6 snakes.
How can I put html tags around substrings found with a regular expression?
For instance, I want all numbers to be bold, like this:
<b>10</b> birds have found <b>5</b> pears and <b>6</b> snakes.
That's not difficult, have you tried anything?
When you're asking for specific help with debugging something, you need to show what you've tried.
I have tried, but still don't know which PHP functions to use. Finding the substring is not hard; putting a string in front and behind is harder. I tried a combination of preg_match() and str_pad(), but it wasn't the best solution.
preg_replace_callback()
@LarsStegelitz: that function is not needed; preg_replace() should suffice.
You can try using the preg_replace() function:
$str = '10 birds have found 5 pears and 6 snakes.';
echo preg_replace('/(\d+)/', '<b>$1</b>', $str);
Output:
<b>10</b> birds have found <b>5</b> pears and <b>6</b> snakes.
For god's sake, I forgot I can use the found string as a variable. That was it. Thanks a lot.
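For comparison, the same backreference technique can be sketched with Python's re module (not PHP, just to show that the captured group is reused in the replacement):

```python
import re

text = "10 birds have found 5 pears and 6 snakes."
# \1 refers back to the first captured group, like $1 in preg_replace().
result = re.sub(r"(\d+)", r"<b>\1</b>", text)
print(result)
# <b>10</b> birds have found <b>5</b> pears and <b>6</b> snakes.
```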
Without regex you can do that:
$items = explode(' ', $text);
$result = array_reduce($items, function ($c, $i) {
return ($c ? $c . ' ' : $c)
. (is_numeric($i) ? '<b>' . $i . '</b>' : $i);
});
@PiotrOlaszewski: Because this approach, when it's possible to use it, is 2x faster for this kind of string length.
Is double Q-learning redundant when using target networks?
Generally speaking, the purpose behind target networks is to reduce the impact of current changes on the model. i.e. if I performed action a and got some reward r, I want to reduce the impact of this specific tuple on the model.
When using double-q learning, I keep a different model for action choosing and reward estimation, also to make the model more robust.
But, and here is my question: generally speaking, both of those methods delay the effect of some samples to gain robustness, in slightly different schemes, and as I see it they are solving the same problem. So are they redundant?
No, double Q-learning is not redundant, since that is not the main motivation for double Q-learning. The abstract of the paper says
In particular, we first show that the recent DQN algorithm, which
combines Q-learning with a deep neural network, suffers from
substantial overestimations in some games in the Atari 2600 domain.
And then
We propose a specific adaptation to the DQN algorithm and show that
the resulting algorithm not only reduces the observed overestimations,
as hypothesized, but that this also leads to much better performance
on several games.
So a side-effect of DDQN is to mitigate the "moving target" problem, which the target network also solves. However, that is not the main point. The main point is to reduce over-optimism.
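A minimal numeric sketch of the two targets, with Q-values stored in plain dicts (all numbers are illustrative assumptions, not from the paper):

```python
gamma = 0.99
r, s_next = 0.5, "s1"

q_online = {"s1": [1.0, 3.0, 2.0]}   # frequently updated network
q_target = {"s1": [1.5, 2.0, 2.5]}   # slowly updated target copy

# DQN target: the target network both selects and evaluates the next
# action, which tends to overestimate.
dqn_target = r + gamma * max(q_target[s_next])

# Double DQN target: the online network selects the action, the target
# network evaluates it, reducing the overestimation bias.
a_star = max(range(3), key=lambda a: q_online[s_next][a])  # action 1
ddqn_target = r + gamma * q_target[s_next][a_star]

print(dqn_target, ddqn_target)  # here dqn_target > ddqn_target
```

Note how DDQN still uses the target network for evaluation (so it keeps the moving-target mitigation), but decouples action selection from evaluation.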
Apostrophes at the beginning of stanzas in Byron's "The Giaour"
My question is about Byron's The Giaour and the opening apostrophe at the beginning of a stanza. For example:
'His floating robe around him folding,
Slow sweeps he through the columned aisle;
With dread beheld, with gloom beholding
The rites that sanctify the pile
But when the anthem shakes the choir,
And kneel the monks, his steps retire;
By yonder lone and wavering torch
His aspect glares within the porch;
There will he pause till all is done -
And hear the prayer, but utter none
...
Why do some stanzas begin with an apostrophe, whereas others don't?
It's not an apostrophe but an opening quotation mark, paired with a closing quotation mark at the end of the stanza. If I am understanding rightly, this stanza is spoken by a monk into whose monastery the Giaour has come, and he is describing the Giaour's behaviour. (Hence e.g. his invocation of St Francis later in the stanza.)
In at least one early edition the typographical convention is different and more explicit, with every single line of the stanza preceded by a quotation mark.
(This isn't something you asked about but may also be worth mentioning: Some lines in this poem do in fact begin with apostrophes -- there are several beginning "'Tis". This is just an abbreviated version of "It is".)
Thanks a lot for your response, Gareth! The topic of the narrator in The Giaour is very interesting.
pyparsing scanString with spaces not able to parse
I am using the below regex expression (with pyparsing), which doesn't give any output. Any idea what I am doing wrong here?
>>> pat = pp.Regex('\s+\w+')
>>> x = " *** abc xyz pqr"
>>> for result, start, end in pat.scanString(x):
print result, start, end
If \s is removed, we get the data:
>>> pat = pp.Regex('\w+')
>>> x = " *** abc xyz pqr"
>>> for result, start, end in pat.scanString(x):
print result, start, end
['abc'] 8 11
['xyz'] 14 17
['pqr'] 20 23
Do you actually want the leading spaces in your data? Or did you think you had to include the \s+ in your pattern because regex?
According to this, whitespaces are skipped by default in pyparsing.
During the matching process, whitespace between tokens is skipped by default (although this can be changed).
But the Regex class inherits from ParserElement, which has a leaveWhitespace() method.
leaveWhitespace(self) source code
Disables the skipping of whitespace before matching the characters in
the ParserElement's defined pattern. This is normally only used
internally by the pyparsing module, but may be needed in some
whitespace-sensitive grammars.
So this code works :
>>> pat = pp.Regex('\s+\w+')
>>> pat.leaveWhitespace()
>>> x = " *** abc xyz pqr"
>>> for result, start, end in pat.scanString(x):
print result, start, end
[' abc'] 4 11
[' xyz'] 11 17
[' pqr'] 17 23
Mail not sent using Gmail SMTP
Here I want to send mail using Gmail SMTP, but it shows this error:
The SMTP server requires a secure connection or the client was not
authenticated. The server response was: 5.5.1 Authentication Required
on button click instead of sending mail.
html
<asp:TextBox ID="txtfrom" runat="server"></asp:TextBox>
<asp:TextBox ID="txtfrompassword" runat="server"></asp:TextBox>
<asp:TextBox ID="txtto" runat="server"></asp:TextBox>
<asp:TextBox ID="txtbody" runat="server"></asp:TextBox>
<asp:Button ID="Button1" runat="server" Text="Button" OnClick="Button1_Click" />
code behind
protected void Button1_Click(object sender, EventArgs e)
{
MailMessage msg = new MailMessage(txtfrom.Text,txtto.Text);
msg.Body = txtbody.Text;
SmtpClient sc = new SmtpClient("smtp.gmail.com", 587);
sc.Credentials = new NetworkCredential(txtfrom.Text, txtfrompassword.Text);
sc.EnableSsl = true;
sc.Send(msg);
Response.Write("send");
}
Possible duplicate of mail sending with network credential as true in windows form not working
You can use port 25
SmtpClient sc = new SmtpClient("smtp.gmail.com", 25);
How do I export a JSON file with morph targets from Maya?
When I export my model from Maya using the Three.js exporter I can see no option to export morph targets. Here is what is shown in the 'Export Selection Options' window,
When I export my model and open up the JSON file, the contents are as follows,
I found the following example on Threejs.org - Morph Targets Human
But they have exported their JSON file from 3ds Max - JSON File
Is the only option currently to export from 3ds Max so that morph targets can be included in the JSON file?
Can't connect to postgres in docker with port mapping using sequelize
I am running a postgres image in a docker container and am trying to connect to it from another container using docker-compose. When I use the standard port of 5432 I am able to connect fine, but when I try to use a nonstandard port along with a port mapping I am getting a ECONNREFUSED error.
Here is my compose file:
networks:
production-net:
driver: bridge
services:
conn-test:
depends_on:
- db
environment:
DB_DIALECT: postgres
DB_HOST: db
DB_NAME: db-name
DB_PASSWORD: pass
DB_PORT: '54321'
DB_USER: user
image: my-image
networks:
production-net: null
db:
environment:
PGDATA: /pgdata
POSTGRES_DB: db-name
POSTGRES_PASSWORD: pass
POSTGRES_USER: user
image: postgres
networks:
production-net: null
ports:
- 54321:5432/tcp
version: '3.0'
And here is the output when I run docker-compose up
Creating network "composer_production-net" with driver "bridge"
Creating composer_db_1 ... done
Creating composer_conn-test_1 ... done
Attaching to composer_db_1, composer_conn-test_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /pgdata ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
conn-test_1 | Connecting to database
db_1 | syncing data to disk ...
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | ok
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /pgdata -l logfile start
db_1 |
db_1 | waiting for server to start....2019-02-28 20:52:02.208 UTC [41] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-02-28 20:52:02.534 UTC [42] LOG: database system was shut down at 2019-02-28 20:51:58 UTC
db_1 | 2019-02-28 20:52:02.618 UTC [41] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | waiting for server to shut down....2019-02-28 20:52:04.355 UTC [41] LOG: received fast shutdown request
db_1 | 2019-02-28 20:52:04.432 UTC [41] LOG: aborting any active transactions
db_1 | 2019-02-28 20:52:04.436 UTC [41] LOG: background worker "logical replication launcher" (PID 48) exited with exit code 1
db_1 | 2019-02-28 20:52:04.436 UTC [43] LOG: shutting down
db_1 | 2019-02-28 20:52:04.853 UTC [41] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2019-02-28 20:52:04.941 UTC [1] LOG: listening on IPv4 address "<IP_ADDRESS>", port 5432
db_1 | 2019-02-28 20:52:04.941 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-02-28 20:52:05.091 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-02-28 20:52:05.342 UTC [59] LOG: database system was shut down at 2019-02-28 20:52:04 UTC
db_1 | 2019-02-28 20:52:05.419 UTC [1] LOG: database system is ready to accept connections
conn-test_1 | Error during start up process
conn-test_1 | connect ECONNREFUSED <IP_ADDRESS>:54321
conn-test_1 | { SequelizeConnectionRefusedError: connect ECONNREFUSED <IP_ADDRESS>:54321
conn-test_1 | at connection.connect.err (/app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:116:24)
conn-test_1 | at Connection.connectingErrorHandler (/app/node_modules/pg/lib/client.js:140:14)
conn-test_1 | at emitOne (events.js:116:13)
conn-test_1 | at Connection.emit (events.js:211:7)
conn-test_1 | at Socket.reportStreamError (/app/node_modules/pg/lib/connection.js:71:10)
conn-test_1 | at emitOne (events.js:116:13)
conn-test_1 | at Socket.emit (events.js:211:7)
conn-test_1 | at emitErrorNT (internal/streams/destroy.js:66:8)
conn-test_1 | at _combinedTickCallback (internal/process/next_tick.js:139:11)
conn-test_1 | at process._tickCallback (internal/process/next_tick.js:181:9)
conn-test_1 | name: 'SequelizeConnectionRefusedError',
conn-test_1 | parent:
conn-test_1 | { Error: connect ECONNREFUSED <IP_ADDRESS>:54321
conn-test_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
conn-test_1 | errno: 'ECONNREFUSED',
conn-test_1 | code: 'ECONNREFUSED',
conn-test_1 | syscall: 'connect',
conn-test_1 | address: '<IP_ADDRESS>',
conn-test_1 | port: 54321 },
conn-test_1 | original:
conn-test_1 | { Error: connect ECONNREFUSED <IP_ADDRESS>:54321
conn-test_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
conn-test_1 | errno: 'ECONNREFUSED',
conn-test_1 | code: 'ECONNREFUSED',
conn-test_1 | syscall: 'connect',
conn-test_1 | address: '<IP_ADDRESS>',
conn-test_1 | port: 54321 } }
conn-test_1 | Shutting down the application ...
I've confirmed that the docker host is resolving to the correct IP address, but it won't connect.
Can anyone point out what I am missing here?
When you're communicating directly between containers, you use the port number the service inside the target container is listening on. If you have a ports: declaration or a docker run -p option, it's the second port number; but in this setup, unless you want to access the service from outside of Docker space, that setting is strictly optional.
So you should set DB_PORT: '5432' to point at the "normal" PostgreSQL port, even though you've published it to the host on a different port number.
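To illustrate (the service names and image tag here are hypothetical, modeled on the log output above), a compose file along these lines keeps the container-to-container connection on 5432, while the host mapping stays optional:

```yaml
version: "3"
services:
  db:
    image: postgres:11
    ports:
      - "54321:5432"   # only needed for access from outside Docker space
  conn-test:
    build: .
    environment:
      DB_HOST: db      # the service name resolves on the compose network
      DB_PORT: "5432"  # the container port, NOT the published 54321
```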
I see. So I should not be using ports: at all since I want this to live entirely in Docker space.
Why do universal properties require a unique isomorphism?
I am a novice in the field of category theory, and one of the things I struggle to wrap my head around is the notion of universal properties. Precisely, I struggle to understand why universal properties all seem to be stated in terms of the existence of a unique morphism between objects, instead of just at least one morphism.
Now, I understand that the very idea of a universal property is to define an object up to isomorphism via a certain property. In a sense, this property becomes the definition of the object. But more specifically, universal properties define objects up to unique isomorphism. What I don't understand is why we want unique isomorphisms between these objects, instead of at least one isomorphism. What would be lost by not having a unique isomorphism between objects that satisfy a property?
Note that I understand how requiring a unique isomorphism means that objects that satisfy the same universal property are isomorphic. But is it necessary?
I've read this question whose top rated answer explains the terminology, but doesn't really explain why uniqueness up to unique isomorphism is useful or interesting or desirable. There is also this question that is very similar to mine, and whose answer tries to justify why the unicity of the isomorphism matters, but I'm not really convinced by the explanation. Again, wouldn't simply an isomorphism uniquely characterise the object in question?
Edit 1:
It has been pointed out to me in the comments that universal properties state that there exists a unique morphism that makes a certain diagram commute, and that it is the uniqueness of this morphism that is important. Not so much the fact that it gives unique isomorphisms between satisfying objects.
Thinking about it more, I realised that if the morphism given by a universal property wasn't unique, then the objects that satisfy that property wouldn't necessarily be isomorphic to one another. Is this the reason why the uniqueness of that morphism is important?
The example in your second link is not a good one, in that, as pointed out in one of the comments there, the uniqueness in the universal property of fields of fractions is not actually necessary. But that is a special case. Your question is a bit mixed up about the role of uniqueness: what is asserted to be unique is the morphism to or from the universal object that makes a certain diagram commute and it is the uniqueness of that morphism that is important. The fact that this gives unique isomorphisms between different candidates for the universal objects is useful but not central.
@RobArthan Thank you for your comment. Yes, I could have worded my question more clearly. I will try to rephrase it better. I was indeed asking about the uniqueness of the morphism.
You say that it is the important part of universal properties, which I can easily believe, but I'm not sure I understand why. Then again, without it, it seems that objects that satisfy the same universal property are not necessarily isomorphic, so maybe this is why the uniqueness of the morphism is important?
Slightly disagreeing with @RobArthan, in my own experience, the uniqueness-up-to-unique-isomorphism of a thing specified by a universal property is very important: two different constructions (e.g., to prove existence) invariably yield the same thing. "Things are what they have to be." :) True, this disallows "automorphisms" of universal objects (in a certain sense), but, upon closer examination, that's usually fine. :) The thing that finally convinced me about this uniqueness feature was W. Rudin's "definition" of the (appropriate) topology on distributions... seemed... [cont'd]
... [cont'd] unmotivated and needlessly messy, until I realized that he was describing a construction of a colimit, etc. :) Then proved (as a sequence of unexplained lemmas) the properties of a colimit. :)
One thing that points to the importance of the uniqueness of the morphisms: In applications, we will very often want to use that unique morphism in further constructions. And generally, mathematicians are much more comfortable with the notion of taking "the unique thing which satisfies some property" than with the notion of "choose some arbitrary thing out of the collection of multiple things which satisfy some property, and fix that choice for all time".
@paulgarrett: I really meant central to the universal property as a concept: I agree that uniqueness of the isomorphisms between different candidates is a very important consequence of the universal property.
@RobArthan, I suspected so, but didn't want anyone to over-interpret your remark. :)
You already noted that the idea of universal properties is to characterize an object by how it relates to other objects. For definiteness, let's look at the universal property of the coproduct of topological spaces. Given two spaces $X$ and $Y$, the idea is to characterize the coproduct $X\sqcup Y$ by stating that, for any space $Z$, a map $X\sqcup Y\to Z$ should be the same data as two maps $X\to Z$ and $Y\to Z$. It is this idea of ''the same data'' that forces us to require uniqueness of the morphism in the universal property. To see this, suppose for a moment that we remove uniqueness from the universal property of the coproduct. Then (if $X$ and $Y$ are nonempty) any space of the form $T := X\sqcup Y\sqcup W$ also satisfies this modified universal property: two morphisms $X\to Z$ and $Y\to Z$ give us possibly many different ways to define a map $T\to Z$ extending our given maps. This means that specifying a map out of $T$ requires an unspecified amount of extra data beyond just two maps out of $X$ and $Y$. Indeed, we may vary our choice of $W$ and will then still satisfy the modified universal property, so we don't even know what extra data we need to specify a map out of $T$. So without uniqueness in the universal property, we don't actually know how an object satisfying it relates to other objects, which defeats the point of a universal property. Without uniqueness, we only have a very general idea that mapping in or out of it requires more data than something else, and we just can't work with this amount of vagueness.
As for the question why we would like unique isomorphisms between objects, let me answer a slightly different question: why do we want preferred/canonical isomorphisms between objects, rather than just any isomorphism? The reason is that even though isomorphisms allow us to see two objects as similar, it can be dangerous to actually identify them without having a canonical isomorphism. I will give three examples.
We do not actually identify in general a finite-dimensional vector space $V$ with its dual $V^*$ even though they are abstractly isomorphic, because there are many incompatible choices for such an isomorphism and no preferred one. If you choose for any vector space $V$ an isomorphism $\varphi_V\colon V\to V^*$, then in some sense you are picking a specific basis of $V$ to work in, instead of any other basis. So if you decide to identify $V$ and $V^*$ using $\varphi_V$, this means that you are in a sense not working in a theory of linear algebra which studies intrinsic properties of vector spaces, but in a theory that studies also properties coming from a possibly weird (and incompatible between different spaces) choice of bases.
Consider three spaces $X$, $Y$ and $Z$, and the two products $(X\times Y)\times Z$ and $X\times(Y\times Z)$. We often just write $X\times Y\times Z$ for either space and we will not run into problems doing this. The reason is that there is a preferred and natural isomorphism $\alpha_{X,Y,Z}\colon (X\times Y)\times Z\to X\times(Y\times Z)$ between both objects. If you are mean, you could give for a specific $X$, $Y$ and $Z$ a stupid isomorphism between these objects, but once we are going to look at products of four spaces, say $X\times Y\times Z\times W$, your stupid isomorphisms will not produce a single isomorphism between $(X\times Y)\times (Z\times W)$ and $((X\times Y)\times Z)\times W$, but will produce multiple different isomorphisms between these. This is because there are different ways to use associativity to move from one way of bracketing to another. In other words, using stupid isomorphisms to obtain associativity in the product of three spaces, you don't know anymore how to identify different bracketings in products of four spaces, and this is a problem. The problem is completely resolved if you just stick to your preferred isomorphisms $\alpha_{X,Y,Z}$ that the universal property of the product gave you.
Automorphisms of an object are its symmetries. If you could just use any isomorphism to identify two objects, you are sort of saying that we can pretend that any symmetry of an object is the identity symmetry. This is not true (we can often deduce information about an object by studying interesting symmetries, which would not be possible if all symmetries were essentially just the identity symmetry).
The unique isomorphisms between objects satisfying a universal property are needed because they give us canonical isomorphisms between these objects, and thus allow us to identify all objects satisfying that universal property. This allows us to pretend there is just one empty space, and just one one-point space, and just one product of two spaces, and allows us to deduce (coherent) symmetry and associativity of the product of spaces, etc.
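To make this concrete, here is the standard argument (written for two candidates $P$ and $P'$ for the product of $X$ and $Y$) showing how uniqueness of the mediating morphism forces a unique isomorphism between candidates:

```latex
% Two candidates P and P' for the product of X and Y, with projections
% (p_X, p_Y) and (p'_X, p'_Y) respectively.
% The UMP of P' applied to (p_X, p_Y) gives a unique f : P -> P'
%   with p'_X \circ f = p_X and p'_Y \circ f = p_Y.
% The UMP of P applied to (p'_X, p'_Y) gives a unique g : P' -> P
%   with p_X \circ g = p'_X and p_Y \circ g = p'_Y.
% Then p_X \circ (g \circ f) = p'_X \circ f = p_X, and similarly for p_Y,
% so both g \circ f and \mathrm{id}_P mediate (p_X, p_Y); uniqueness forces
\[
  g \circ f = \mathrm{id}_P, \qquad f \circ g = \mathrm{id}_{P'},
\]
% making f the unique isomorphism compatible with the projections.
```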
Great answer +1. I will point out that what you say about products becomes nuanced in categories like the homotopy category, where the objects are sets and the product objects are product sets, but the morphisms aren't just special kinds of set-theoretic functions. This leads to the fascinating world of Stasheff polytopes.
This is a very clear and enlightening answer, and it clears my conceptual doubts on the matter. Thank you for the effort you put into writing this thorough answer!
Unramified constituent of Weil representation of $U(2)$
Let $E/F$ be a quadratic extension of local field of characteristic zero.
Let $\omega$ be the quadratic character of $F^{\times}$ associated to $E/F$ by local class field theory and $\gamma:E^{\times} \to \mathbb{C}$ be a unitary character whose restriction to $F^{\times}$ is $\omega$.
Let $W_2$ be an $n$-dimensional hermitian space over $E$ and let $U(W_2)$ be its unitary group of isometries.
Let $\Omega_{2,\gamma}$ be the Weil representation of $U(W_2)$ associated to $\gamma$.
Then is it true that the unramified piece of $\Omega_{2,\gamma}$ is the unramified constituent of the representation induced from the character $\gamma$ of the Borel group of $U(W_2)$?
I don't know why this is true. Is there any reference regarding this?
Thank you in advance.
Tablet Pressure with Python on Linux
Is there a simple way to get the pen pressure data from a usb tablet using python on Linux?
You can do it by reading input events on the input device node. I wrote some modules to do this; you can find them in the Pycopia project.
The disadvantage of this is that your program must run as root.
The powerdroid project also uses this, but that's old code now. You can see another example of synthesizing touch input in the devices module. It probably won't work anymore, but you might start with that.
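The "read input events from the device node" approach can be sketched with nothing but the standard library: Linux delivers each event as a fixed-size `input_event` struct, and pen pressure arrives as an `EV_ABS`/`ABS_PRESSURE` event. This is only a sketch: the device path below is a placeholder (find yours under `/dev/input/`), and the `llHHi` struct layout assumes 64-bit Linux.

```python
import struct

# struct input_event { struct timeval time; __u16 type; __u16 code; __s32 value; }
# On 64-bit Linux the timeval is two longs, hence the 'llHHi' format (24 bytes).
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_ABS = 0x03        # absolute-axis event type
ABS_PRESSURE = 0x18  # pen pressure axis code

def parse_pressure(raw):
    """Return the pressure value if raw bytes encode a pressure event, else None."""
    _sec, _usec, ev_type, ev_code, value = struct.unpack(EVENT_FORMAT, raw)
    if ev_type == EV_ABS and ev_code == ABS_PRESSURE:
        return value
    return None

def read_pressures(device_path="/dev/input/event5"):  # placeholder path
    # Requires read permission on the node (typically root, or the input group).
    with open(device_path, "rb") as dev:
        while True:
            raw = dev.read(EVENT_SIZE)
            if len(raw) < EVENT_SIZE:
                break
            pressure = parse_pressure(raw)
            if pressure is not None:
                print("pressure:", pressure)
```

This is essentially what higher-level wrappers do under the hood, just without the dependency.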
Is there a simple example of how I could access the pressure of a tablet using Pycopia?
@KevinGurney I updated the answer with pointers to a couple of examples.
Try using PySide, it's a QT Wrapper here: QTabletEvent.
Or you can use Python and PyGame: Here.
Are there any other alternatives which don't use QT or PyGame?
QTabletEvent is problematic for some Wacom tablets.
Hook error in Route Container component using useParams
I am using a route like this:
import React from 'react';
import { useParams } from 'react-router-dom';
import { connect } from 'react-redux';
import { bindActionCreators } from 'redux';
import AdminBank from 'views/Bank/Bank';
import CustomerBank from 'views/CustomerBank/CustomerBank';
const mapDispatchToProps = dispatch => ({
actions: bindActionCreators({}, dispatch),
});
const BankContainer = (props) => {
const { userType } = useParams();
return (
userType === "admin" ? <AdminBank/> : <CustomerBank/>
)
}
export default connect(
BankContainer,
mapDispatchToProps
)(BankContainer);
The Bank component is a redux container component. Inside I have a conditional render based off the user type.
const BankContainer = (props) => {
const { userType } = useParams();
return (
userType === "admin" ? <Bank/> : <CustomerBank/>
)
}
I get a react warning and then a react error
Warning:
Warning: React has detected a change in the order of Hooks called by ConnectFunction. This will lead to bugs and errors if not fixed. For more information, read the Rules of Hooks
Previous render Next render
------------------------------------------------------
1. useMemo useMemo
2. useMemo useMemo
3. useContext useContext
4. useMemo useMemo
5. useMemo useMemo
6. useMemo useMemo
7. useReducer useReducer
8. useRef useRef
9. useRef useRef
10. useRef useRef
11. useRef useRef
12. useMemo useMemo
13. useContext useLayoutEffect
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
in ConnectFunction (at Admin.js:68)
in Route (at Admin.js:68)
in Switch (at Admin.js:63)
in div (at Admin.js:59)
in Admin (created by ConnectFunction)
in ConnectFunction (at ProtectedRoute.js:13)
in Route (at ProtectedRoute.js:29)
in ProtectedRoute (at ProtectedRoute.js:46)
in ProtectedAdminRoute (created by ConnectFunction)
in ConnectFunction (at Main.js:46)
in Switch (at Main.js:45)
in Router (created by BrowserRouter)
in BrowserRouter (at Main.js:44)
in Main (created by ConnectFunction)
in ConnectFunction (at src/index.js:63)
in App (at src/index.js:69)
in Provider (at src/index.js:68)
Error:
Error: Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons:
1. You might have mismatching versions of React and the renderer (such as React DOM)
2. You might be breaking the Rules of Hooks
3. You might have more than one copy of React in the same app
my package.json versions:
"react-redux": "^7.2.0",
"react-router-dom": "5.2.0",
"react": "16.13.1",
"react-dom": "16.13.1",
Thing's I've tried:
Checked for react bundler issues
Switched to view rather than just a react container
Tried other hooks, still same issue
It is unclear where you render BankContainer. Is BankContainer the same as Bank?
My apologies, yes Bank and BankContainer are the same component.
Do you call useLayoutEffect anywhere?
Also, does Bank have redux's Provider in it?
That's why I'm confused, I've never used that hook anywhere in this project. Just useEffect. I know the useContext comes from react-router. Not sure where the useLayoutEffect is coming from.
The Provider component is surrounding my component.
Added full container code to original post, hopefully that makes it a bit more clear.
Ok, you are passing the BankContainer to mapStateToProps argument of connect function which is definitely wrong
You don't even pull any state from the store in this component. You don't need connect here. Also consider using useDispatch and useSelector instead of connect if you can.
Ya I was setting up a container to pull from state but ya you're very right I setup the container completely wrong for whatever reason just for this one. Thanks so much that was the issue!
You are passing the BankContainer to the mapStateToProps argument of the connect function, which is definitely wrong. You should pass something else there...
Ya for sure, I just created a map function instead. I have no idea why I passed the component lol. Also will look into useSelector, thanks a bunch again
so eh... can you mark it as answer? also checkout redux hooks and redux toolkit, connect is sort of considered obsolete at this point.
oh ya my bad dude
Hi, as @ThatAnnoyingDude said in the first answer, react-redux's connect takes 2 arguments:
mapStateToProps - gives you back, as props, the requested state from redux
mapDispatchToProps - gives you back, as props, the requested actions from redux
If you don't want to pass one of them, you should do it like this:
export default connect(
null,
mapDispatchToProps
)(BankContainer);
It works the same for the other one.
Getting the useful words from a word list
I have the following strings:
Over/Under 1.5
Over/Under 2.5
Over/Under 3.5
Over/Under 4.5
This is not Over/Under 1.5
Other text
For me the valid texts are the following
Over/Under 1.5
Over/Under 2.5
Over/Under 3.5
Over/Under 4.5
Over/Under X.X
where the X.X is a number.
How can I decide whether it is a valid string or not?
Does This is not Over/Under 1.5 match? If so:
$words = array('Over/Under 1.5',
'Over/Under 2.5',
'Over/Under 3.5',
'Over/Under 4.5',
'This is not Over/Under 1.5',
'Other text');
foreach ($words as $word) {
if (preg_match('#.*Over/Under \d\.\d.*#', $word)) {
echo "Match $word\n";
}
}
If not, change the preg_match to
preg_match('#^Over/Under \d\.\d$#', $word);
Like @Tokk writes, if the string should match on Over OR Under, then you need to change to an OR - |
preg_match('#^(Over|Under) \d\.\d$#', $word);
Use a regular expression. I'm not very good at them myself, but this worked for me:
$tmp = array
(
"Over/Under 1.5",
"Over/Under 2.5",
"Over/Under 3.5",
"Over/Under 4.5",
"Over/Under 5.5",
"fdgdfgdf",
"Other"
);
$tmp2 = array_filter($tmp, function($element) { return preg_match('(Over\/Under [0-9]\.[0-9])', $element); });
print_r($tmp2);
http://php.net/manual/en/function.preg-match.php for more info
Check if it matches the Regex
Over/Under [0-9]*\.[0-9]*
(if it is Over OR Under, choose (Over|Under))
if(preg_match('/Over\/Under \d\.\d/', $str)) {
}
This is a nice solution, but I want to get false for this string 'Half-time Totals Over/Under 0.5', but I'm getting true.
ConditionExpression for PutItem not evaluating to false
I am trying to guarantee uniqueness in my DynamoDB table, across the partition key and other attributes (but not the sort key). Something is wrong with my ConditionExpression, because it is evaluating to true and the same values are getting inserted, leading to data duplication.
Here is my table design:
email: partition key (String)
id: sort key (Number)
firstName (String)
lastName (String)
Note: The id (sort key) holds a randomly generated unique number. I know... this looks like a bad design, but that is the use case I have to support.
Here is the NodeJS code with PutItem:
const dynamodb = new AWS.DynamoDB({apiVersion: '2012-08-10'})
const params = {
TableName: <table-name>,
Item: {
"email": { "S": "<email>" },
"id": { "N": "<someUniqueRandomNumber>" },
"firstName": { "S": "<firstName>" },
"lastName": { "S": "<lastName>" }
},
ConditionExpression: "attribute_not_exists(email) AND attribute_not_exists(firstName) AND attribute_not_exists(lastName)"
}
dynamodb.putItem(params, function(err, data) {
if (err) {
console.error("Put failed")
}
else {
console.log("Put succeeded")
}
})
The documentation https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html says the following:
attribute_not_exists (path)
True if the attribute specified by path does not exist in the item.
Example: Check whether an item has a Manufacturer attribute.
attribute_not_exists (Manufacturer)
it specifically says "item", not "items" or "any item", so I think it really means that it checks only the item being overwritten. As you have a random sort key, it will always create a new item and the condition will always be true.
Any implementation that checked uniqueness against an attribute which is not an index would have to test all the records, causing a scan of all items, and that would not perform well.
Here is an interesting article which covers how to deal with unique attributes in dynamodb https://advancedweb.hu/how-to-properly-implement-unique-constraints-in-dynamodb/ - the single table design together with transactions would be a possible solution for you if you can allow the additional partition keys in your table. Any other solution may be challenging under your current schema. DynamoDB has its own way of doing things and it may be frustrating to try to push to do things which it is not designed for.
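A rough sketch of the pattern from that article, with hypothetical key names and shown as a Python/boto3-style request: uniqueness on extra attributes is enforced by writing a second "marker" item whose partition key encodes the value, inside one transaction, so both puts succeed or fail together.

```python
def build_unique_put(table, person):
    """Build a TransactWriteItems request that inserts `person` and a
    uniqueness marker for (firstName, lastName) in one atomic transaction."""
    # Marker key encodes the attributes that must be unique. Because the
    # marker's full primary key is deterministic, a duplicate name addresses
    # the SAME item and the condition check fails.
    name_key = f"NAME#{person['firstName']}#{person['lastName']}"
    return {
        "TransactItems": [
            {   # the real item (random sort key, as in the question)
                "Put": {
                    "TableName": table,
                    "Item": {
                        "email": {"S": person["email"]},
                        "id": {"N": str(person["id"])},
                        "firstName": {"S": person["firstName"]},
                        "lastName": {"S": person["lastName"]},
                    },
                    "ConditionExpression": "attribute_not_exists(email)",
                }
            },
            {   # marker item: its mere existence reserves the name
                "Put": {
                    "TableName": table,
                    "Item": {"email": {"S": name_key}, "id": {"N": "0"}},
                    "ConditionExpression": "attribute_not_exists(email)",
                }
            },
        ]
    }

# With boto3 this dict would be passed as
# client.transact_write_items(**build_unique_put("people", person)).
```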
What is confusing me in the documentation is whether the condition check for attribute_exists or attribute_not_exists is based on the presence of the attribute itself or on the value of the attribute in the item.
I would say that: "Use the following functions to determine whether an attribute exists in an item, or to evaluate the value of an attribute." really means the existence of it. But it is not that hard to try.
Align individual Div at the bottom of bootstrap col 1
So I was trying to make the Related box stick at the bottom and align to the height of the next bootstrap div, which is the business card section.
The sort/filter divs should stay on top as they are sticky divs. They stick while scrolling.
The structure looks like this:
<div class="container">
<div class="row">
<div class="col-xs-12">
<div id="filter-panels">
</div>
<div id="related-services">
</div>
</div>
<div class="col-xs-12">
<div id="business-cards">
</div>
</div>
</div>
</div>
The #related-services div must be positioned at the bottom of the grid... and must be aligned to the last business card, and the #filter-panels div must stay at the top because it's a sticky div.
I tried by adding the following to both .col-xs-12 divs...
display: inline-block;
vertical-align: bottom;
float: none
the #related-services div did stay at the bottom, but the #filter-panels div stayed at the bottom as well, which is not what I expected...
This can be solved using flexbox.
The trick is to give display: flex to both the container and the two columns. Then you can align the two components of the left-hand column by using flex-direction: column in conjunction with justify-content: space-between:
.container {
display: flex;
}
.left,
.right {
display: flex;
width: 50%;
}
.left {
flex-direction: column;
justify-content: space-between;
}
#filter-panels,
#related-services {
border: 1px solid red;
height: 50px;
width: 100%;
}
#business-cards {
border: 1px solid black;
height: 200px;
width: 100%;
}
<div class="container">
<div class="left">
<div id="filter-panels">Filter panels</div>
<div id="related-services">Related services</div>
</div>
<div class="right">
<div id="business-cards">Business cards</div>
</div>
</div>
Hope this helps! :)
Existence of a sub-module such that a certain isomorphism holds
Let $R$ be a noetherian ring and $M$ be a finitely generated $R$ - Module.
Prove that there exists $n\in \mathbb N$ such that there is a finitely generated $R$-submodule $N\subset R^n$ such that
$$R^n / N \cong M$$
Any ideas? Maybe use the isomorphism theorem?
Thanks, I will correct that typo
Let $u_1,\dots,u_n$ be generators of $M$ and define $f:R^n\rightarrow M$ by $f(e_i)=u_i$, where $e_1,\dots,e_n$ is the canonical basis of $R^n$ ($e_1=(1,0,\dots,0)$, $e_2=(0,1,0,\dots,0)$, ...). Denote by $N$ the kernel of $f$. Since $f$ is surjective, the first isomorphism theorem gives $R^n/N\cong M$; and $N$ is finitely generated because $R$ is noetherian, so $R^n$ is a noetherian module.
"is normally not meant to ..." or "is not normally meant to ..."?
Where to put the adverb normally in the following sentence.
That is not normally meant to be offensive.
or
That is normally not meant to be offensive.
Both are acceptable, with the first being, in my opinion, the more common form.
The adverb "normally" can modify the past participle of the verb (meant) or it can modify another adverb (not), so it's acceptable to put it in either place. The meanings are mostly interchangeable here and will be understood in exactly the same way in most contexts.
There's possibly a very tiny difference in the nuance of the meaning. I would probably use the second one in a context like the following:
I don't think that person meant to offend you, because that is normally not meant to be offensive.
I am emphasizing that it is normal for it to not mean whatever you took it to mean. In other words, the negation is what is normal.
On the other hand, I might be more likely to use the first one in a context like the following:
Although that is not normally meant to be offensive, I think that person may have intended to offend you this time.
I am emphasizing a contrast between its normal meaning and what it may have meant this time. In other words, I'm negating its normal meaning.
Such differences are very minor, though, and either syntax will be understood in either context. To my ear, "not normally meant" sounds more common.
Pretty much applies to usually as well.
How can I solve this Problem with bidirectional dependencies in Objective-C classes?
Okay, this might be a very silly beginner question, but:
I've got an ClassA, which will create an child object from ClassB and assign that to an instance variable. In detail: ClassA will alloc and init ClassB in the designated initializer, and assign that to the childObject instance variable. It's header looks like this:
#import <Foundation/Foundation.h>
#import "ClassB.h"
@interface ClassA : NSObject {
ClassB *childObject;
}
@end
Then, there is the header of ClassB. ClassB has to have a reference to ClassA.
#import <Foundation/Foundation.h>
#import "ClassA.h"
@interface ClassB : NSObject {
ClassA *parentObject;
}
- (id)initWithClassA:(ClassA*)newParentObject;
@end
When the ClassA object creates an child object from ClassB, then the ClassA object will call the designated initializer of the ClassB object, where it has to pass itself (self).
I feel that there is something wrong, but for now I don't get exactly what it is. One thing I know is that this does not work: they can't both import each other. The compiler just tells me: "error: syntax error before 'ClassA'". When I remove the import statement for ClassA, remove the ClassA *parentObject instance variable and remove the designated initializer (that takes a ClassA reference), then it works.
Now, here is what I want to achieve (if this matters): I need some very special and complex behavior in a UIScrollView. So I decided to subclass it. Because the main part is going to be done inside the delegate methods, I decided to create a "generic" delegate object for my subclass of UIScrollView. My UIScrollView subclass then automatically creates that special delegate object and assigns it to its delegate property. The delegate object itself needs a reference to its parent, the customized UIScrollView, so that it has access to the subviews of that scroll view. But anyways, even if there's a better way to get this problem done, I'd still like to know how I could do it with two "interdependent" objects that need each other.
Any suggestions are welcome!
Reminds me of http://stackoverflow.com/questions/820808/objective-c-cocoa-proper-design-for-delegates-and-controllers
Excellent, excellent question. I've just found this now and it answers a long standing issue I've had. Thanks for all the great answers, too!
You don't need to import those classes in the header files. Use @class in the header files, and then use #import only in the .m files:
// ClassA.h:
@class ClassB;
@interface ClassA : NSObject
{
ClassB *childObject;
}
@end
// ClassA.m:
#import "ClassA.h"
#import "ClassB.h"
@implementation ClassA
//..
@end
// ClassB.h
@class ClassA;
@interface ClassB : NSObject
{
ClassA *parentObject;
}
- (id)initWithClassA:(ClassA*)newParentObject;
@end
// ClassB.m:
#import "ClassB.h"
#import "ClassA.h"
@implementation ClassB
//..
@end
What you need to do is to "forward declare" your classes instead of importing the full definition. For example:
#import <Foundation/Foundation.h>
@class ClassB; // Tell the compiler that ClassB exists
@interface ClassA : NSObject {
ClassB *childObject;
}
@end
And you do something similar for the ClassB header file.
Use @class A; and @class B; instead of #imports to tell the compiler to not worry about class definition, and just keep going because it will come across them later.
Basically, replace #import "A.h" with @class A;.
Save temporary PDF files in a USB flash from an external app
Here's a function save_document() that polls for the pressing of an Extract PDF button in an external program and saves a temporary PDF file to a USB drive.
import time
import os
import shutil
def save_document():
print("\n\nPreparing to save the document...", end = "\r")
TEMP_PDF_DIR = "C:/Users/TUN/tp3/workspace/tmp" # temporary pdfs are stored in this directory
while not os.listdir(TEMP_PDF_DIR): # wait until the "Extract PDF" button is pressed and a file will be created in directory TEMP_PDF_DIR
time.sleep(0.5)
continue
file_path = "C:/Users/TUN/tp3/workspace/tmp" + "/" + os.listdir(TEMP_PDF_DIR)[0]
shutil.move(file_path, find_drive_id()) # move the file in the USB drive directory
print("Document PDF was saved successfully.")
And the find_drive_id() function, which looks for a storage device with a given serial number and returns its drive letter (like D:, F:, etc.):
import wmi
def find_drive_id():
local_machine_connection = wmi.WMI()
DRIVE_SERIAL_NUMBER = "88809AB5" # serial number is like an identifier for a storage device determined by system
'''.Win32.LogicalDisk returns list of wmi_object objects, which include information
about all storage devices and discs (C:, D:)'''
for storage_device in local_machine_connection.Win32_LogicalDisk():
if storage_device.VolumeSerialNumber == DRIVE_SERIAL_NUMBER:
return storage_device.DeviceID
return False
Is there a better way to save a just-created temporary PDF file?
Also, the polling loop is bothering me... I believe there's a more elegant approach.
General comments
Personally the polling seems fine; a standard way of doing this is using watchdog. I'll just point out some bits and bobs I find odd about your code.
imports
A common way in Python is to split your imports into three sections: builtin modules, community modules and local. So this
import time
import os
import shutil
Should really be this
import os
import shutil
import time
sorted alphabetically, and kept in a single first section because time, os and shutil all live in the standard library.
descriptive function names
Naming this is hard, but it is very important to think carefully about what you name things; especially functions, classes and modules. The main reason why
save_document() and find_drive_id()
are bad names is that they do not do what they say they do. save_document does not save a document; it moves an already saved document to a different folder. Similarly, find_drive_id does not find a drive ID but returns the path to a drive.
hardcoded paths
This is a 2-for-1 deal. First, paths are better handled using the pathlib module (again in the standard library). In addition, we are using this path multiple times, so it ought to be extracted into its own global constant
import pathlib
DIRECTORY_TO_WATCH = pathlib.PureWindowsPath("c:/Users/TUN/tp3/workspace/tmp")
Similarly DRIVE_SERIAL_NUMBER = "88809AB5" should probably be a global constant as well.
docstrings
Triple quotes are usually reserved for docstrings. In addition this
'''.Win32.LogicalDisk returns list of wmi_object objects, which include information
about all storage devices and discs (C:, D:)'''
is more of a comment than a docstring, and can be converted to # comments. In addition you should add docstrings explaining what each function does.
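As a sketch of what such a docstring could look like (the wording here is an assumption, not from the original code):

```python
def find_drive_id():
    """Return the DeviceID (drive letter) of the storage device whose
    volume serial number matches DRIVE_SERIAL_NUMBER, or False if no
    such device is connected.
    """
    return False  # the real wmi lookup is omitted in this sketch

# Docstrings are available at runtime, e.g. via help() or __doc__:
print(find_drive_id.__doc__.strip().splitlines()[0])
```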
for else
This is really nitpicky, but we can write
for storage_device in local_machine_connection.Win32_LogicalDisk():
    if storage_device.VolumeSerialNumber == DRIVE_SERIAL_NUMBER:
        return storage_device.DeviceID
else:
    return False
Which I find clearer to read. If we do not break or return in the for loop, the else clause is triggered. You can also think of the else as then.
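To make the for/else behaviour concrete, here is a small standalone sketch (illustrative only, with made-up names):

```python
def first_even(numbers):
    """Return the first even number, or None if there is none."""
    for n in numbers:
        if n % 2 == 0:
            return n
    else:
        # Reached only when the for loop runs to completion
        # without hitting break or return.
        return None

print(first_even([1, 3, 4]))  # 4
print(first_even([1, 3, 5]))  # None
```

The else branch plays the same role as the final return False in the drive-lookup function: it only fires when no item matched.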
if __name__ == "__main__":
Put the parts of your code that are the ones calling for execution behind a if __name__ == "__main__": guard. This way you can import this python module from other places if you ever want to, and the guard prevents the main code from accidentally running on every import.
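A minimal sketch of the guard (function names are hypothetical), showing how the module-level code stays inert on import:

```python
def main():
    """Entry point; kept separate so the module can be imported without side effects."""
    return "watcher started"

if __name__ == "__main__":
    # Executes only when the file is run as a script,
    # not when another module does `import this_module`.
    print(main())
```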
Example code
Here is a mock-up of how a watchdog implementation could look. Note that I am on Linux and have no way to check whether everything is implemented correctly. However, it ought to be a good starting point =)
import time
import pathlib
import shutil

import wmi
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

DIRECTORY_TO_WATCH = pathlib.PureWindowsPath("c:/Users/TUN/tp3/workspace/tmp")
# serial number is like an identifier for a storage device determined by system
DRIVE_SERIAL_NUMBER = "88809AB5"
SLEEP_TIME = 5

class Watcher:
    DIRECTORY_TO_WATCH = DIRECTORY_TO_WATCH
    SLEEP_TIME = SLEEP_TIME

    def __init__(self):
        self.observer = Observer()

    def run(self):
        event_handler = Handler()
        self.observer.schedule(event_handler, self.DIRECTORY_TO_WATCH, recursive=True)
        self.observer.start()
        try:
            while True:
                time.sleep(self.SLEEP_TIME)
        except:
            self.observer.stop()
            print("Error")
        self.observer.join()

class Handler(FileSystemEventHandler):
    @staticmethod
    def on_any_event(event):
        if event.is_directory:
            return None
        elif event.event_type == "created":
            # Take any action here when a file is first created.
            # move the file in the USB drive directory
            file_path = pathlib.PureWindowsPath(event.src_path)
            shutil.move(file_path, get_usb_path())
            print("Document PDF was saved successfully.")

def get_usb_path():
    local_machine_connection = wmi.WMI()
    # Win32.LogicalDisk returns list of wmi_object objects, which include
    # information about all storage devices and discs (C:, D:)
    for storage_device in local_machine_connection.Win32_LogicalDisk():
        if storage_device.VolumeSerialNumber == DRIVE_SERIAL_NUMBER:
            return storage_device.DeviceID
    else:
        return False

if __name__ == "__main__":
    w = Watcher()
    w.run()
Best practice for Native calls from C#
I was wondering what are the best practices/design for calling external dependencies from my C# application? My application is distributed as a DLL that is used in other applications.
I have a class named OCRObject that I don't know if I should make static or not.
This is my code that calls the external DLL:
/// <summary>
/// A static instance of OCRObject that handles the OCR part of the application. This class
/// calls a native library and the required files must therefore be present in the /Tesseract folder.
/// </summary>
internal class OCRObject
{
/// <summary>
/// Calls the native C++ library and returns a UTF-8 string of the image text.
/// </summary>
/// <param name="imagePath"> The full image path.</param>
/// <param name="tessConfPath">The tesseract configuration path.</param>
/// <param name="tessLanguage">The tesseract language.</param>
/// <returns></returns>
[HandleProcessCorruptedStateExceptions]
public string GetOCRText(string imagePath, string tessConfPath, string tessLanguage)
{
try
{
if (StaticObjectHolder.EnableAdvancedLogging)
{
Logger.Log(string.Format("Doing OCR on folder {0}.", imagePath));
}
return this.StringFromNativeUtf8(OCRObject.GetUTF8Text(tessConfPath, tessLanguage, imagePath));
}
catch (AccessViolationException ave)
{
Logger.Log(ave.ToString(), LogInformationType.Error);
}
catch (Exception ex)
{
Logger.Log(ex.ToString(), LogInformationType.Error);
}
return string.Empty;
}
/// <summary>
/// The DLL Import declaration. The main entry point is GetUTF8Text, which is the method in
/// the native library. This method extracts text from the image and returns a UTF-8 representation of the string.
/// </summary>
/// <param name="path"> The path of the configuration files.</param>
/// <param name="lang"> The language to parse. For example DAN, ENG etc.</param>
/// <param name="imgPath">The full path of the image to extract image from.</param>
/// <returns></returns>
[HandleProcessCorruptedStateExceptions]
[DllImport("OCRWrapper.dll", EntryPoint = "GetUTF8Text", CallingConvention = CallingConvention.Cdecl)] // DLL name assumed from the C++ wrapper class; the original was lost
private static extern IntPtr GetUTF8Text(string path, string lang, string imgPath);
/// <summary>
/// Converts the returned IntPtr from the native call to a UTF-8 based string.
/// </summary>
/// <param name="nativeUtf8">The native UTF8.</param>
/// <returns></returns>
[HandleProcessCorruptedStateExceptions]
private string StringFromNativeUtf8(IntPtr nativeUtf8)
{
try
{
int len = 0;
if (nativeUtf8 == IntPtr.Zero)
{
return string.Empty;
}
while (Marshal.ReadByte(nativeUtf8, len) != 0)
{
++len;
}
byte[] buffer = new byte[len];
Marshal.Copy(nativeUtf8, buffer, 0, buffer.Length);
string text = Encoding.UTF8.GetString(buffer);
nativeUtf8 = IntPtr.Zero; /*set to zero.*/
return text;
}
catch
{
return string.Empty;
}
}
}
I'm aiming for maximum performance, so I was wondering if this code can be optimized by either making this class static or changing any of the code?
Here is the C++ Code:
#include "stdafx.h"
#include "OCRWrapper.h"
#include "allheaders.h"
#include "baseapi.h"
#include "iostream"
#include "fstream";
#include "vector";
#include "algorithm"
#include "sys/types.h"
#include "sstream"
OCRWrapper::OCRWrapper()
{
}
//OCRWrapper::~OCRWrapper()
//{
//}
/// <summary>
/// Sets the image path to read text from.
/// </summary>
/// <param name="imgPath">The img path.</param>
/// <summary>
/// Get the text from the image in UTF-8. Remember to convert it from UTF-8 again on the caller side.
/// </summary>
/// <returns></returns>
char* OCRWrapper::GetUTF8Text(char* path, char* lang, char* imgPath)
{
char* imageText = NULL;
try
{
tesseract::TessBaseAPI *api = new tesseract::TessBaseAPI();
if (api->Init(path, lang)) {
fprintf(stderr, "Could not initialize tesseract. Incorrect datapath or incorrect language\n"); /*This should throw an error to the caller*/
exit(1);
}
/*Open a reference to the imagepath*/
Pix *image = pixRead(imgPath);
/*Read the image object;*/
api->SetImage(image);
// Get OCR result
imageText = api->GetUTF8Text();
/*writeToFile(outText);*/
/*printf("OCR output:\n%s", imageText);*/
/*Destroy the text*/
api->End();
pixDestroy(&image);
/*std::string x = std::string(imageText);*/
return imageText;
}
catch (...)
{
std::string errorStr("An error occurred during OCR. ImgPath => " + std::string(imgPath));
return &errorStr[0];
}
}
Optimal performance? Use C++/CLI for interface classes. The difference is small but may be relevant. It is a lot larger if you can avoid string generation: with C# interop strings MUST be marshalled, with C++/CLI you may reuse cached strings. Depends on the lower-level API you have downstream.
In terms of OCR, though, I seriously think you're barking up the wrong tree. OCR is a processor-intensive operation, so anything you optimize on the calls - few and far between compared to the processing - is just not relevant. The times I am going to optimize this stuff are for example with exchange data streams which may be called hundreds of thousands of times per second - with minimal data, forwarding it to processing in C#. But for OCR I have serious problems seeing this as relevant. Especially if and as you do not handle the images to start with - and that is the only way it would make sense to consider optimizations.
How long does a call to GetOCRText take? If it is significantly more than 1/1000th of a second - then seriously you DO try to optimize the wrong element. Call overhead is SMALL (much much much smaller than that).
It takes around 6 seconds for 12 images (multi-threaded). For a single thread it takes around 22 seconds for 12 images. So this is a huge difference. The OCR code is pretty simple (I have updated my question, please have a look). I don't think I can do much more?
If it takes 6 seconds for 12 images - that is 12 calls to the method. The overhead is likely less than 12 microseconds. There is likely a lot you can do - possibly - but nothing of that has to do with the native call; it all would have to be done on the native side (i.e. making the OCR code faster). And your C++ code is not "simple". It is merely a wrapper around the REAL OCR code - your code has not a single line that does OCR. It takes a long time to process an image, so no, that is not simple. I would likely turn the C++ into C++/CLI and make a class directly usable from C# - but for style.
Thanks for your input. How do I turn the C++ into C++/CLI and make a class directly usable from C#?
You start reading the documentation. Seriously, this is too broad for here.
Radial Random Walk
I'm trying to generate a spherical distribution of radial random-walk points in 3D space. The following code works, but the random walk lines aren't radial. Why? Where is my mistake?
MinSprite := 0.006; (* min radius of sprites *)
MaxSprite := 0.03; (* max radius of sprites *)
SpriteOverlap := 0.75; (* min separation between sprites *)
IterationStep := 0.1;
NumberOfSteps := 20;
thickness = 0.09;
pointsmean = 20;
pointssd = 12;
SpriteSize[p_] := MinSprite + (MaxSprite - MinSprite)Norm[p];
SeedRandom[];
RandomWalk = Flatten[Table[{x,y,z}={dist Sqrt[1 - cosinus^2]Cos[phi],dist Sqrt[1 - cosinus^2]Sin[phi],dist cosinus};
{u,v, w}={0.0, 0.0, 0.0};
dist = RandomReal[{5,10}];
phi = RandomReal[{0,2Pi}];
cosinus = RandomReal[{-1,1}];
velocity = Abs[RandomReal[NormalDistribution[0,s]]];
Line[NestList[(
u+=velocity Sqrt[1 - cosinus^2]Cos[phi];
v+=velocity Sqrt[1 - cosinus^2]Sin[phi];
w+=velocity cosinus;
#+IterationStep{u,v, w})&,{x,y, z},NumberOfSteps]],{s,0.25,0.75,0.007}][[All,1]],1];
CloudsParticles = Flatten[Table[(#+RandomReal@LaplaceDistribution[0,thickness])&/@#,{Max[1,IntegerPart@RandomReal@NormalDistribution[pointsmean,pointssd]]}]&/@RandomWalk, 1];
max=Max[Norm/@CloudsParticles];
NormalizedParticles = CloudsParticles/max;
MinSeparation[p_] := SpriteOverlap SpriteSize[p];
KeepPoint[{p_,q_}] := Norm[p]<Norm[q]||Norm[p-q]>MinSeparation[p];
FilterOnce[pts_] := With[{nf=Nearest[pts]},Select[pts, KeepPoint[nf[#,2]]&]];
PointsCoords = FixedPoint[FilterOnce,NormalizedParticles];
ListPointPlot3D[PointsCoords,BoxRatios->{1,1,1},ImageSize->800,SphericalRegion->True,PlotStyle->{Blue,PointSize[Small]}]
Here's a sample of the output. As you can see, this isn't a radial distribution :
The mistake most probably lies in the RandomWalk declaration, but I can't see it. Anyone has an idea of what may be wrong ?
Take note that I'm using Mathematica 7.0 only.
EDIT :
I must admit that this method isn't a clever way of defining a random distribution of points around radial lines. I'll have to do it differently.
Could you please specify what is a "radial" random walk?
I mean motion along a radial line only, so: steps forward, steps back, etc., but toward (or away from) the origin of coordinates.
Put the line
{x,y,z}={dist Sqrt[1 - cosinus^2]Cos[phi],dist Sqrt[1 - cosinus^2]Sin[phi],dist cosinus};
behind
velocity = Abs[RandomReal[NormalDistribution[0,s]]];
and the other expressions that set your variables, rather than before it. With
MinSprite := 0.006; (* min radius of sprites *)
MaxSprite := 0.03; (* max radius of sprites *)
SpriteOverlap := 0.75; (* min separation between sprites *)
IterationStep := 0.1;
NumberOfSteps := 20;
thickness = 0.09;
pointsmean = 20;
pointssd = 12;
You will see that setting
randomLines =
Table[
{u, v, w} = {0.0, 0.0, 0.0};
dist = RandomReal[{5, 10}];
phi = RandomReal[{0, 2 Pi}];
cosinus = RandomReal[{-1, 1}];
velocity = Abs[RandomReal[NormalDistribution[0, s]]];
{x, y, z} =
dist { Sqrt[1 - cosinus^2] Cos[phi], Sqrt[1 - cosinus^2] Sin[phi],
cosinus};
Line[
NestList[
(u += velocity Sqrt[1 - cosinus^2] Cos[phi];
v += velocity Sqrt[1 - cosinus^2] Sin[phi];
w += velocity cosinus;
# + IterationStep {u, v, w}) &,
{x, y, z},
NumberOfSteps
]
]
,
{s, 0.25, 0.75, 0.007}
]
and then doing
Graphics3D@randomLines
yields a picture with radial random lines.
Remark
Note that
randomLines =
Table[
{u, v, w} = {0.0, 0.0, 0.0};
dist = RandomReal[{5, 10}];
velocity = Abs[RandomReal[NormalDistribution[0, s]]];
{x, y, z} = dist RandomReal[{-1, 1}, 3];
Line[
NestList[
({u, v, w} = {u, v, w} + velocity {x, y, z};
# + IterationStep {u, v, w}) &,
{x, y, z},
NumberOfSteps
]
]
,
{s, 0.25, 0.75, 0.007}
];
also creates some radial random lines. Just as a side remark.
Thanks a lot for the answer! The first suggestion solved my issue. But I don't understand the rest of the answer; I don't see any difference between my code and the last part of your answer.
@Cham haha yeah it must have been some copy paste error :). It should be ok now.
Ok, everything appears to be fine. Thanks again for your help. It's very appreciated !
@Cham note that all of this can be made a lot faster if you want. It is probably also good to note that the definitions probably yield strange random directions, and that it is probably "better" to generate random angles and then apply a Cos or Sin to them. Note that in your code you make the Lines and then throw away the Line heads after that, which is a bit pointless :).
I'm not sure I understand. About the random angle: I used cosinus as a random variable between -1 and 1 to be sure to have a uniform distribution on a sphere. And yes, my code is a bit slow. What would you suggest? Notice that the lines should have a thickness (random points around the radial lines).
@Cham are you sure that yields uniformly distributed points? Ah http://mathworld.wolfram.com/SpherePointPicking.html seems to agree with you :)
your remark doesn't compile on my system. I'm getting this error message : Nearest::neard: The default distance function does not give a real numeric distance when applied to the point pair
let us continue this discussion in chat
NetSuite web services in C#, setting approval status
We've got a project going where we would like to be able to set a Purchase Order to approved through web services. We cannot find how to do this, and every PO created through our web service operations goes into pending approval.
supervisorApproval doesn't seem to do it
and setting orderStatus does not appear to function.
Any ideas?
These two lines should do it:
po.supervisorApproval = true;
po.supervisorApprovalSpecified = true;
or
po.orderStatus = PurchaseOrderOrderStatus._pendingBilling; (or whatever status you want it to go to)
po.orderStatusSpecified = true;
Make sure you have the second line listed. If both are there check the integration log and see if your supervisor approval is being sent to NetSuite. Null values will not appear on the request.
Add Text to a TextField Inside A MovieClip AS3
I have been working on a project in Adobe Flash Pro CS5 and I am trying to add text to a textbox inside of a movieclip. I then want to add this movie clip to a scrollpane. I have this:
The instance names are
scrollpane = scroller
movieclip = achievements
textbox = progress1 (I need to do this for 10 different text boxes all in the same movieclip)
import flash.text.TextField
achievements.progress1.text = "16";
scroller.source = achievements
When I run this I get the Error 1119: Access of possibly undefined property progress1 through a reference with static type Class.
I made the movieclip on the stage and exported it for ActionScript. I added the text boxes to this and gave them all instance names. I don't know what is wrong and really need some help. Thanks!
You apparently named your class "achievements", since AS3 is saying the "progress1" property doesn't exist on the class itself. Of course you will want to size and move the components as you see fit, but here is a basic idea for the class and its usage:
package {
    import flash.display.MovieClip;
    import flash.text.TextField;

    public class Achievements extends MovieClip {
        public var progress1:TextField = new TextField();
        public var progress2:TextField = new TextField();
        public var progress3:TextField = new TextField();

        public function Achievements(){
            addChild(progress1);
            addChild(progress2);
            addChild(progress3);
        }
    }
}

//Then in your main code:
var achievements:Achievements = new Achievements();
addChild(achievements);

//Then to set the text
achievements.progress1.text = "it's alive!!!";
Thanks! If I have to do this for lots of text boxes, how would I do that? Their instance names would be progress1, progress2, progress3 and so on. Also, does this work with a scrollpane?
edited the example to show you how to add more text field instances to it. And yes they will work with ScrollPane
Thank you very much. If I already have the textboxes inside the movieclip, can I do it without adding the textboxes as children, or should I just take the boxes out of the movieclip and add them from AS3?
my personal preference is to do as much with code as possible (all :))
Why does this function take up a lot of memory over time
I'm trying to understand why, after a reaches around 200,000, the about:memory tab says that the devtools' total memory is 1,079,408 K. Can someone explain why?
var a = 0;
(function loop(){
    a++;
    console.count();
    call = setTimeout(loop);
})()
Because it's infinite recursion.
Could it be because there's 200k setTimeouts?
Because you never stop calling loop?
these are all bad things.
I know that I never stop, but why does it reach that much memory? I can stop it by calling clearTimeout(call). But I'm trying to understand whether I'm filling up the call stack by using setTimeout, and I'm only using one variable, a. Unless it's the call variable?
There's also a lot of lines in the console, no ?
True? Could it be only that? Meaning, let's say I don't call console.count, would there be a point where the memory doesn't grow, or stays around the same range?
even when I clear the console and check about:memory page again it's still around 500,000
@Edwin and without dev tools ?
the page's memory itself does not grow but under about:memory there's a devtool page which's memory is growing
and now I have to wait again till a == 200000 because I had two fingers on my trackpad and ended up closing my browser
@Edwin You don't have to wait, just look at the task manager while it runs.
@dystroy huh? i'm saying so the memory can go back up again
anybody else got anything?
@Todd can you elaborate a bit more on what's bad and what's good?
I think that was my point initially -- that I couldn't add too much further than what had been covered in the first few comments. Interesting question! Even though infinite recursion is crystal to most of us, to try and articulate the process and its dangers (for someone who may not have it so clear) is an interesting challenge.
I've been working on some code that goes through about a million or more combinations. and I was wondering why it's slowing down and taking up memory so I just wrote that small code and tested if even this takes up lots of memory. Now I'm trying to find out why
There was speculation in comments but nobody checked, so I did it :
When you remove the console.count(), the memory stops growing. What you saw was just the console growing : those lines must be stored somewhere.
didn't even get notified about this answer. But I also mentioned that even when I cleared the console, for some reason the memory was still up to 500,000k
Ok I see that, the console.count() takes up a lot of memory, also I've tried changing the first line from var a = 0; to var a = 0, call; I believe that had something to do with it as well thanks
The function itself continues on infinitely in a loop.
call = setTimeout(loop);
Just calls the function again, which calls that line again. There is no return statement, so the recursion never stops and it loops on infinitely.
As pointed out in the comments, it isn't necessarily recursive since there is no stack building up. The memory is building up because as dystroy pointed out
console.count();
causes the console to count the number of times that function is called, and since it is being called infinitely, the memory is quickly filled with thousands of lines of console.count() output.
It's not really a recursion. There's no growing stack per se, unless it's added by some dev tools.
But it is recursion in a sense, since setTimeout calls the function which then calls setTimeout that then calls the function, etc.
And why would that consume memory ?
setTimeout defer the call, it's not executed in the same 'scope'
the variable call has to be kept track of in addition to a
@Wold—they're both global, and setTimeout is just passed a reference.
It's always the same variable, changing its value 200000 times doesn't make the memory grow.
that's what I'm saying changing the variable value shouldn't grow the memory
C exec, awk, not working
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define MAXLINE 512
main(int argc,char* argv[]){
int k;
for (k=0; k<=argc; k++) {
if (k%2==0) {
if (fork()==0){
FILE *fi;
FILE *fo;
int i;
fi=fopen(argv[k], "r");
fo=fopen("temp.txt","w");
if (!fi)
return;
char linie[MAXLINE],*p;
for ( ; ; ) {
p = fgets(linie, MAXLINE, fi);
if (p == NULL)
break;
linie[MAXLINE-1] = '\0';
int k=-1;
for (i = 0; i <MAXLINE; i++) {
if (linie[i]=='\n') k=i;
}
for (i = k; i >= 0; --i) {
fprintf(fo,"%c", linie[i]);
}
}
fclose(fi);
fclose(fo);
exit(1);}
}
else
{
if (fork()==0){
execl("/usr/bin/awk","awk","-f","ouk.awk",argv[k],NULL);
exit(1);
}
}
}
};
here is the ouk.awk file content
{ for (i=NF;i>=1;i--){ if(s){s=s" "$i} else{s=$i }}{print s;s=""}}
Basically what I try to do is create a number of processes equal to argc; if the number is even, mirror the text in the file, and if not, rearrange the words of every line backwards. The problem I'm facing is that
fprintf(fo,"%c",line[j])
is not working and I also get an error when I try to execute the awk script
awk: can't open file >
input record number 6, file >
source line number 1
If I run only the awk command in the terminal with the same files it works perfectly, so it must have something to do with the execl command.
One more thing: I've tried the following command to rename temp.txt to argv[k]
execl("bin/mv","temp.txt",argv[k],NULL)
but it crashes.
If anyone could help me or give me a link to a good tutorial on the C exec family, it would be fantastic. Thanks a lot.
The for loop is going beyond the bounds of the array (which is undefined behaviour):
for (k=0; k<=argc; k++) {
/* ...snip... */
fi=fopen(argv[k], "r");
as arrays have zero based index, running from 0 to N-1 where N is the number of elements in the array. The terminating condition of the for must be k < argc. Additionally, the first element in argv is the name of the program which you will want to exclude:
for (k = 1; k < argc; k++)
When invoking execl() you need to cast the last argument to a char*:
execl("/usr/bin/awk","awk","-f","ouk.awk",argv[k], (char*)NULL);
@JackRobinson, it might work but from the linked reference page: The list of arguments must be terminated by a NULL pointer, and, since these are variadic functions, this pointer must be cast (char *) NULL.
How do I upload using AWS S3 PRESIGNED URL as HTML?
Currently, the S3 presigned URL is being obtained through Spring Boot.
I want to upload an image from the front end by passing the URL to HTML.
However, the code I have written now receives SignatureDoesNotMatch.
MY SERVER CODE
public String getSignedURL(String fileName){
ZonedDateTime uploadTime=ZonedDateTime.now(ZoneId.of("Asia/Seoul"));
Date expiration=new Date();
long expTimeMillis =expiration.getTime();
expTimeMillis+=1000*60*1; //1Min
expiration.setTime(expTimeMillis);
//+uploadTime.format(DateTimeFormatter.ofPattern("yy.mm.dd HH:mm:ss z"))
String objectKey="boardimages/"+fileName;
URL urls=null;
try {
GeneratePresignedUrlRequest url = new GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.POST)
.withExpiration(expiration);
url.addRequestParameter(Headers.S3_CANNED_ACL, CannedAccessControlList.PublicRead.toString());
urls = s3Client.generatePresignedUrl(url);
}catch(Exception e){
e.printStackTrace();
}
System.out.println("pre-signed url : "+urls);
return urls.toString();
}
My controller
@RequestMapping(value="/upload",method = RequestMethod.POST)
public String uploadFile(Model model,HttpServletRequest request){
Map<String,String[]> emp=request.getParameterMap();
String fileName=emp.get("file")[0];
System.out.println("uploading file Name = "+fileName);
String persgined=awsService.getSignedURL(fileName);
model.addAttribute("fileName","boardimages/"+fileName); //object key -> .jpg boardimage->s3 bucket dir
model.addAttribute("url",persgined);
model.addAttribute("accesskey",awsService.getAccessKey());
return "galleryUrlUpload";
}
my html form code
<form role="form" action="#" th:action="${url}" method="post" enctype="multipart/form-data">
<!--<input type="hidden" name="_method" value="PUT"/>-->
<input type="hidden" name="key" value="${fileName}"/>
<input type="hidden" name="AWSAccessKeyId" value="${accesskey}" />
<input type="file" name="file">
<input type="submit" value="click"></input>
My Error
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
<AWSAccessKeyId>---</AWSAccessKeyId>
<StringToSign>
AWS4-HMAC-SHA256 20210310T081246Z 20210310/ap-northeast-2/s3/aws4_request 38d7447c7cb20368bb9a690afc14029b6983a423b7400f214d4acb119ab181b6
</StringToSign>
<SignatureProvided>
fe2cb0da2a770b64014d971c270731d70788519447f64f63dfc9cd881d0c23c1
</SignatureProvided>
<StringToSignBytes>
41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 31 30 33 31 30 54 30 38 31 32 34 36 5a 0a 32 30 32 31 30 33 31 30 2f 61 70 2d 6e 6f 72 74 68 65 61 73 74 2d 32 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 33 38 64 37 34 34 37 63 37 63 62 32 30 33 36 38 62 62 39 61 36 39 30 61 66 63 31 34 30 32 39 62 36 39 38 33 61 34 32 33 62 37 34 30 30 66 32 31 34 64 34 61 63 62 31 31 39 61 62 31 38 31 62 36
</StringToSignBytes>
<CanonicalRequest>
POST /boardimages/1540.jpg X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAV4JZWYET2PT6UO6G%2F20210310%2Fap-northeast-2%2Fs3%2Faws4_request&X-Amz-Date=20210310T081246Z&X-Amz-Expires=59&X-Amz-SignedHeaders=host&x-amz-acl=public-read host:infostarbinary.s3.ap-northeast-2.amazonaws.com host UNSIGNED-PAYLOAD
</CanonicalRequest>
<CanonicalRequestBytes>
50 4f 53 54 0a 2f 62 6f 61 72 64 69 6d 61 67 65 73 2f 31 35 34 30 2e 6a 70 67 0a 58 2d 41 6d 7a 2d 41 6c 67 6f 72 69 74 68 6d 3d 41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 26 58 2d 41 6d 7a 2d 43 72 65 64 65 6e 74 69 61 6c 3d 41 4b 49 41 56 34 4a 5a 57 59 45 54 32 50 54 36 55 4f 36 47 25 32 46 32 30 32 31 30 33 31 30 25 32 46 61 70 2d 6e 6f 72 74 68 65 61 73 74 2d 32 25 32 46 73 33 25 32 46 61 77 73 34 5f 72 65 71 75 65 73 74 26 58 2d 41 6d 7a 2d 44 61 74 65 3d 32 30 32 31 30 33 31 30 54 30 38 31 32 34 36 5a 26 58 2d 41 6d 7a 2d 45 78 70 69 72 65 73 3d 35 39 26 58 2d 41 6d 7a 2d 53 69 67 6e 65 64 48 65 61 64 65 72 73 3d 68 6f 73 74 26 78 2d 61 6d 7a 2d 61 63 6c 3d 70 75 62 6c 69 63 2d 72 65 61 64 0a 68 6f 73 74 3a 69 6e 66 6f 73 74 61 72 62 69 6e 61 72 79 2e 73 33 2e 61 70 2d 6e 6f 72 74 68 65 61 73 74 2d 32 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 0a 68 6f 73 74 0a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44
</CanonicalRequestBytes>
<RequestId>RG1VPG1549WMPKZ9</RequestId>
<HostId>
XMnFDE1xzfvkwsLsJPVuxUEbLRlH5muADarR/p5KCTp3U9N0f6uI1CR8WL+rWTPbI0V4kZ7LpzA=
</HostId>
</Error>
I referenced https://boto3.amazonaws.com/v1/documentation/api/1.11.4/guide/s3-presigned-urls.html but didn't understand the policy and signature in the final HTML.
What do I need to pass as policy and signature in order for it to work?
Or please tell me how to upload to S3 via a presigned URL using HTML.
If you are using Java to perform this use case, please refer to the Java documentation as opposed to Python. See:
Working with Amazon S3 Presigned URLs
The logic to upload content via a presigned URL using the AWS SDK for Java V2 can be found in that topic.
If you are not familiar with using Java V2, please refer to this quick start.
UPDATE:
If you want to upload content from a web page (as your comment suggests), then you should use the AWS SDK for JavaScript, not the Java API. Here is the example to refer to:
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/s3-example-creating-buckets.html#s3-create-presigendurl-put
I looked at this page https://docs.aws.amazon.com/en_kr/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html and implemented that feature.
However, both that example and the example in the link above upload files from the Java server.
I want to upload an image, and I want to upload it from the front end, not the server.
I tried the example page you posted, but the same error appears...
replacingOccurrences(of: " ", with: "") Not Working
I have a text field where a user enters their phone number. When they hit the 'continue' button, the text in the text field is assigned to a global variable called phoneNumber. I then proceed to clean the string of any non-integer values such as '-', ')', and '+'. When I try to delete spaces it doesn't always work. I realized it only doesn't work when I autofill my phone number. Are spaces in autofill phone numbers different than spaces from our keyboards?
Can someone help me figure out what's happening here or if there's a better way to do this?
class LoginViewController: UIViewController {
@IBOutlet weak var phoneNumberTextField: UITextField!
var phoneNumber = ""
@IBAction func continueButton(_ sender: Any) {
self.phoneNumber = phoneNumberTextField.text!
for _ in 0...phoneNumber.count {
self.phoneNumber = self.phoneNumber.replacingOccurrences(of: " ", with: "")
self.phoneNumber = self.phoneNumber.replacingOccurrences(of: "-", with: "")
self.phoneNumber = self.phoneNumber.replacingOccurrences(of: "(", with: "")
self.phoneNumber = self.phoneNumber.replacingOccurrences(of: ")", with: "")
}
checkCount(phoneNumber: self.phoneNumber)
}
func checkCount(phoneNumber : String) {
if phoneNumber.count == 11 {
self.phoneNumber = "+" + phoneNumber
}
else if phoneNumber.count == 10 {
self.phoneNumber = "+1" + phoneNumber
}
}
}
Possible duplicate of Bug in replacingOccurrences()?
I tried that it didn't work.
The reason that your code does not work is probably that the phone number contains “non-breaking space characters”; compare Why Strings are not equal in my case?
Thanks @MartinR that's probably what's happening.
The best way is to filter the phone number from the string using a CharacterSet instead of replacingOccurrences.
You can try using following code
let components =
phoneNumber.components(separatedBy: CharacterSet.decimalDigits.inverted)
let phone = components.joined()
print(phone)
That worked perfectly! You're awesome. One little thing I changed, just so I would have one less variable: I changed 'let phone' to 'self.phoneNumber'. That way I didn't have to change the rest of my code. Thanks!
| common-pile/stackexchange_filtered |
Question is on hold
My question was put on hold because it is unclear. I suspect this happened because I have little familiarity with the subject and probably used the wrong terminology, etc. Could you tell me what is unclear? Perhaps I was using the wrong forum?
https://dba.stackexchange.com/questions/81128/using-cloud-oracle-server-in-net-application-asp-net-web-site
I need a website built with Microsoft technology to connect to an Oracle database that resides in a cloud (Amazon, Azure, ...). I could not find how to connect them (I presume the connection string I would use for a physical server won't work for a cloud-based server). I do not know how to make it clearer; maybe you could tell me what I am missing.
Thanks in advance!
http://www.connectionstrings.com/oracle/
I think your question is too chatty and vague - if you ask a single, specific question you may find it is better received.
| common-pile/stackexchange_filtered |
Sculpt brush isn't drawing much detail
I have already subdivided a lot with the Multiresolution modifier, but when I draw with the sculpt brush it doesn't produce much detail. What am I doing wrong?
You need to either enable Dyntopo or subdivide the mesh with the modifier again. Does that do what you want?
What exactly is happening? What happens when you try drawing with the brush? Do other brushes work? What do you mean by "detail"? Do you mean that it still looks like there's lots of faces like the ones visible in your screenshot, or do you mean that the geometry isn't moving much? What happens when you increase your subdivisions further with the multiresolution modifier?
As you can read in the upper right corner of your window, you've got 29,641 vertices, which is not enough for fine detail. Try subdividing until you reach around 1,000,000.
In Object Mode, switch to "Smooth Shading".
If the computer gets slow, separate your model into different objects (P key in Edit Mode) and sculpt them one by one, using the Multiresolution modifier to switch between different levels of detail.
| common-pile/stackexchange_filtered |
Is there a rigid dense linear order?
Does there exist a dense linear order with at least two points that is rigid, in other words, has no nontrivial automorphisms?
Ooh, I like this question.
In this old paper I showed inter alia that $[0,1]$ has dense subspaces $X$ of cardinality $2^\omega$ with the property that $X\setminus\{x\}$ and $X\setminus\{y\}$ are not homeomorphic whenever $x,y\in X$ and $x\ne y$. Such an $X$ must be a densely ordered subset of $[0,1]$, so its subspace and order topologies are identical, and therefore it cannot have a non-trivial order-automorphism. (In fact there are $2^\omega$ of them that are pairwise disjoint and pairwise non-homeomorphic.)
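To spell out the last step: suppose $\varphi\colon X\to X$ were an order-automorphism with $\varphi(x)=y\neq x$ for some $x\in X$. Since the order and subspace topologies on $X$ coincide, $\varphi$ is a homeomorphism of $X$, and it restricts to a homeomorphism from $X\setminus\{x\}$ onto $X\setminus\{y\}$, contradicting the choice of $X$. So the identity is the only order-automorphism of $X$.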
Very nifty, I didn't know this result!
| common-pile/stackexchange_filtered |