An error occurred while calling the write function of the QTcpSocket class from a QThread I am learning about multi-threaded programming in Qt, and while calling the write function of the QTcpSocket class from a QThread, the function outputs: QObject: Cannot create children for a parent that is in a different thread. (Parent is QNativeSocketEngine(0x6f1840), parent's thread is QThread(0x624e90), current thread is QThread(0x716ed0)) The code where I call the write function: QString Processor::GetSystemInfoOfClient() const { QString result; const char *send_buffer = "GSIOC"; char receive_buffer[100]; this->cli_sock->write(send_buffer); this->cli_sock->waitForBytesWritten(); this->cli_sock->waitForReadyRead(100000); this->cli_sock->read(receive_buffer, 100); result = QString(receive_buffer); return result; } The code where I accept the connections: void Processor::Accept() { this->cli_sock = m_server->nextPendingConnection(); this->cli_addr = this->cli_sock->peerAddress(); this->cli_port = this->cli_sock->peerPort(); } The definition of the connection processor: class Processor : public QObject { Q_OBJECT public: explicit Processor(QObject *parent = 0, QTcpServer *server = nullptr); private: QTcpServer *m_server = nullptr; QTcpSocket *cli_sock = nullptr; QHostAddress cli_addr; quint16 cli_port; QString GetSystemInfoOfClient() const; signals: void GetDetailedFinished(const QString &address, const QString &port, const QString &SystemInfo); public slots: void GetClientDetail(); void Accept(); }; Then I move the processor to the QThread: Processor *processor = new Processor(0, server); processor->moveToThread(&processor_thread); connect(&processor_thread, &QThread::finished, processor, &QObject::deleteLater); connect(this, &Processor_Controller::Accept, processor, &Processor::Accept); connect(this, &Processor_Controller::GetClientDetail, processor, &Processor::GetClientDetail); connect(processor, &Processor::GetDetailedFinished, this, 
&Processor_Controller::PassClientDetail); processor_thread.start(); And I create the QTcpServer in the main thread: // The address of the server QHostAddress srv_addr("<IP_ADDRESS>"); // The server listen port quint16 srv_port = 9895; Processor_Controller *controller = new Processor_Controller(0, &server); connect(&server, &QTcpServer::newConnection, controller, &Processor_Controller::BeginProcess); connect(controller, &Processor_Controller::DisplayClientDetail, this, &MainWindow::DisplayClientDetail); // BeginListen if(!server.listen(srv_addr, srv_port)) emit statusBar()->showMessage("Listen Failed"); else emit statusBar()->showMessage("Listen Success"); The function that calls GetSystemInfoOfClient: void Processor::GetClientDetail() { QString address = this->cli_addr.toString(); QString port = QString::number(this->cli_port); QString SystemInfo = this->GetSystemInfoOfClient(); emit this->GetDetailedFinished(address, port, SystemInfo); } The constructor of the Processor: Processor::Processor(QObject *parent, QTcpServer *server) : QObject(parent) { this->m_server = server; } I just want to create a server in the main thread and process the connections in the Processor class. How can I avoid this error without changing this design? When you initialize the cli_sock variable, try to set the enclosing Processor object as its parent. I have tried to do that, but it didn't make much difference. More ideas to play around with: 1) Create cli_addr on the heap, not on the stack 2) Show us the code for the Processor constructor. This error is about thread affinity, i.e. you create some QObject in one thread and use it in another. Where is the call to GetSystemInfoOfClient? The Processor constructor has only one line of code: Processor::Processor(QObject *parent, QTcpServer *server) : QObject(parent) { this->m_server = server; } Still, where's your cli_sock initialization code? You shouldn't set the parent via QObject::setParent but use the QTcpSocket constructor instead. 
@Alexey Andronov I tried to do that, but it didn't solve the problem. @Alexey Andronov I tried initializing cli_sock, but the error output is the same after I use QObject::setParent. I solved this problem using just these two lines of code in the Processor::Accept function: QTcpSocket *m_socket = m_server->nextPendingConnection(); this->cli_sock->setSocketDescriptor(m_socket->socketDescriptor(), m_socket->state(), m_socket->openMode()); Please take a look at the Qt Threaded Fortune Server Example and the way they implement a multi-threaded server, and follow their pattern. See the note at the end of setSocketDescriptor's documentation: "It is not possible to initialize two abstract sockets with the same native socket descriptor." In your code you are using the same socket descriptor in cli_sock and m_socket; you will have problems sooner or later. I will look at the example, and m_socket is just a temporary object; I won't use it for anything else.
common-pile/stackexchange_filtered
Naming differences between project dependencies and :require When I look at something like, say, the clojure.data.json source code I can see a namespace looking, for example, like this: (ns clojure.data.json...) So when I want to :require that in my .clj Clojure files, I simply do something like this: (ns so.example (:require [clojure.data.json :as json]) ... However in the dependencies in my project.clj I have: :dependencies [[org.clojure/data.json "0.2.4"] So the clojure.data.json "became" org.clojure/data.json. Now for, say, server.socket I have in my dependencies: [server-socket "1.0.0"] So this time no "org." added, no slash, but the dot became a dash. What's the relation between :require in Clojure source files and :dependencies in project.clj? Is there any "logic"? How can I find what's the correct line to put in the dependencies? The dependency vectors in project.clj are the Maven artifact coordinates used to resolve the dependency by finding the appropriate jar. Leiningen will attempt to find the appropriate jars and add them to your classpath so that namespace definitions and other resources can be loaded from inside their archive contents at runtime. The require statement in your code specifies a resource to look for in the classpath. For example, if you require clojure.data.json, Clojure will look for a resource with the path clojure/data/json.clj somewhere in your classpath, and attempt to load the definition for the namespace clojure.data.json from that resource. There is no relationship. A namespace is something defined in the source code file. A dependency is based on a project name and is decided by the author(s) when they publish it. You'll almost always find the proper dependency information on the project's GitHub site or at Clojars, or in some cases, Maven.
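The mechanical half of this - how a :require maps to a classpath resource - can be made explicit. A small illustration (in Python for brevity; the rule itself is Clojure's): dots become directory separators and, because JVM class and file names cannot contain hyphens, hyphens in the namespace become underscores on disk:

```python
def ns_to_resource_path(ns):
    """Map a Clojure namespace name to the classpath resource it loads from."""
    return ns.replace("-", "_").replace(".", "/") + ".clj"

print(ns_to_resource_path("clojure.data.json"))  # clojure/data/json.clj
print(ns_to_resource_path("my-app.core"))        # my_app/core.clj
```

The :dependencies side has no such rule: org.clojure/data.json and server-socket are simply the Maven/Clojars coordinates their authors chose when publishing.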
IQueryable model does not contain definition for GetAwaiter and no accessible extension GetAwaiter accepting a first argument of type IIncludableQueryable I am getting the error above when I try to do a query to the database. This was working fine in .NET 5, but as soon as I created a new project in .NET 6 and used exactly the same query, I'm getting the error - what could I be doing wrong? This is my code: public async Task<IQueryable<Employee>> FindAllEmployees() { try { IQueryable<Employee> employeeModel = await _employee.Context.employee .AsNoTracking() .Include(a => a.Policies).ThenInclude(a => a.Product).ThenInclude(a => a.Underwriter) .Include(a => a.PrincipalMember); //error appears here return employeeModel; } catch (Exception ex) { _logService.LogError(ex.Message); throw; } } I tried adding ToListAsync() and it sorted the compiler errors out, but I then get an error at runtime: Unable to cast object of type 'Collections.Generic.List1[Microsoft.EntityFrameworkCore.Query.IIncludableQueryable2 to type Generic.ICollection IQueryable<Employee> employeeModel = (IQueryable<Employee>)await _employeeContext.Employee.AsNoTracking() .Include(a => a.Policies).ThenInclude(a => a.Product).ThenInclude(a => a.Underwriter) .Include(a => a.PrincipalMember).ToListAsync(); Why are you trying to return IQueryable? After materialization, there is no need to work with IQueryable. Do you want to return the query, or execute it? It looks like you are trying to do both. You should return IEnumerable after materialization. IQueryable is no longer needed and may affect performance. 
public async Task<IEnumerable<Employee>> FindAllEmployees() { try { var employeeModel = await _employee.Context.employee .AsNoTracking() .Include(a => a.Policies).ThenInclude(a => a.Product).ThenInclude(a => a.Underwriter) .Include(a => a.PrincipalMember) .ToListAsync(); return employeeModel; } catch (Exception ex) { _logService.LogError(ex.Message); throw; } } But if you still need IQueryable so as not to break the interface, you can call return employeeModel.AsQueryable(); Note that this IQueryable will be created from an IEnumerable and will work in-memory with materialized objects. Just one last note: the name of the function should be FindAllEmployeesAsync() as per the convention here: https://learn.microsoft.com/en-us/dotnet/csharp/asynchronous-programming/async-scenarios For a start, an async method should contain something that is awaited. Building a query executes nothing. You would need to await something like ToListAsync() and return Task<IEnumerable<Employee>>. If this method is part of a repository, the other option is to just return IQueryable. The method itself does not need to be asynchronous; it can be consumed by both synchronous and asynchronous calls: public IQueryable<Employee> FindAllEmployees() { IQueryable<Employee> query = _employee.Context.employee; return query; } This is a basic wrapper to abstract the DbContext, but a more common use case would be where you have universal low-level rules you expect enforced, such as a soft-delete system or a multi-tenant authorization check for queries. For example, with a soft-delete system you might want to default to only looking for active records but have the option to see inactive ones: public IQueryable<Employee> FindAllEmployees(bool includeInactive = false) { IQueryable<Employee> query = _employee.Context.employee; if(!includeInactive) query = query.Where(x => x.IsActive); return query; } Note that we don't have the eager loading statements. The reason for this is that this is a consumption concern. 
Callers can decide how they want to consume the IQueryable, such as projecting the data to a view model where Include statements are not necessary, or applying additional filtering through Where clauses, etc. For querying large sets of data like "all employees" I recommend using projection through Select rather than dealing with entities and eager loading their entire set of related entities. That is an operation better suited to cases where you have narrowed it down to a single or small set of top-level entities. For example, to consume the FindAllEmployees IQueryable from an async controller action that wants to return view models: public async Task<IEnumerable<EmployeeSummaryViewModel>> GetEmployeesAsync(SearchCriteria criteria) { var query = _repository.FindAllEmployees(); // Example search criteria. if (!string.IsNullOrEmpty(criteria.FirstName)) query = query.Where(x => x.FirstName.StartsWith(criteria.FirstName)); var employees = await query.Select(x => new EmployeeSummaryViewModel { // populate details from employee and related... }).ToListAsync(); return employees; } The method can serve synchronous calls as well, by calling ToList() etc. against the IQueryable. Since the consumer is executing the query, this is also where you would want to place your exception handling. The advantage of returning IQueryable is that your repository can remain rather "thin", without concerns about how the data might be consumed. The consumers have the flexibility to manage things like eager loading, projection, filtering, pagination, sorting, etc. This is a pattern I promote for facilitating unit testing around business logic in an EF-based project. Note that when mocking a repository exposing IQueryable you cannot merely substitute returned sets of data using new List<T>(...).AsQueryable(), as this does not work for asynchronous operations. 
There is a Nuget package called MockQueryable which addresses that, with implementations for Moq, NSubstitute, and probably other mocking frameworks. It provides new List<T>(...).BuildMock() to return an IQueryable that works with async.
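The build-vs-execute distinction this thread leans on is not specific to EF Core. A rough analogy in Python (generators standing in for IQueryable, list() for ToListAsync(); the names here are illustrative, not any EF API):

```python
executed = []

def find_all_employees(rows):
    """Analogous to returning IQueryable: composing a query executes nothing."""
    def query():
        for r in rows:
            executed.append(r)      # side effect marks actual enumeration
            yield r
    return query()

q = find_all_employees([1, 2, 3])   # "query" built; nothing has run yet
assert executed == []
materialized = list(q)              # analogous to ToListAsync(): runs the query
assert executed == [1, 2, 3]
print(materialized)                 # [1, 2, 3]
```

This is also why wrapping an already materialized list back into a queryable (AsQueryable()) only filters in memory: the work has already been done.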
Extension to the page icon status in SDL Tridion 2009 We are using SDL Tridion 2009 SP1. We have implemented new functionality, an extension in our CMS which allows us to lock a page. If a page is locked it can no longer be published (the information about whether a page is locked is kept in a database which was created for this extension). We want to add a new icon which will notify the user of the new status of the page. Currently there are 4 combinations of icons (no action, checked, published, checked and published). Since I do not have long experience with the CMS interface, I want some help finding a solution that has no impact on performance and is easy to implement, in terms of not requiring a lot of modification. Below is my investigation regarding this: I noticed that the way the icons are rendered in the CMS is not a simple mechanism that can be easily updated. Each time we click on an item in the left side of the CMS, in order to render the list on the right side an AJAX call (with an XML request) is made to the WebGUIResponder.aspx page. The response we get back is an XML that contains the Icon attribute: <tcm:ListItems xmlns:tcm="http://www.tridion.com/ContentManager/5.0" ID="tcm:yyy-zzzz-4" Managed="68" ItemType="4"> <tcm:Item ID="tcm:yyy-zzzzz-64" Type="64" Title="NotificationTest" Modified="2011-05-09T09:42:27" FromPub="400 YYYY Website Master (EN-GB)" IsNew="false" Icon="T64L0P1"/> </tcm:ListItems> Based on this Icon attribute (Icon="T64L0P1") the image name starts to be processed: T64 = it is a page, L0 = it is not checked out, P1 = it is already published. For such a field the resulting image name will be T64.16x16.List.Published.gif. I couldn't find a way to update this field through the page XML; it is not information that is kept in the XML, but rather built in the DLL when the XML request is handled. (Somewhere, based on other fields like published status and something else, this Icon field is calculated.) 
So if it is not possible to modify this field, the option we may have is this. In order to integrate our change in the CMS without modifying their DLLs (modifying the DLL is bad for compatibility with new versions of SDL Tridion) and without changing too much of the logic, I was thinking of this approach: We can make a new AJAX call to a new page, WebGUICheckPageLocked.aspx (the impact on performance needs to be tested). In the code-behind of this page we can determine if the page is locked or not (using our internal function that determines whether the page is locked; this functionality is already done). In the page we will change the Icon field to something like T64L0P1E01 (adding some extra information which will allow us to determine the new status of the page). In the GetPNGIconName JavaScript function we can then make an extra check taking into consideration the new information (E01...). Please, if someone has a better idea on this: maybe there is something easier that can be done, maybe there is a way we can update the Icon field. Kind Regards, Cristina Congrats on your first Tridion post on SO, Cristina - you might also want to consider committing to the SDL Tridion proposal on Area 51 at http://area51.stackexchange.com/proposals/38335/tridion?referrer=eo63snjNlUWNn9xqeeO2NA2 Hi Chris, thank you very much for your help with all this. I never added a question on any forum until now, even though I'm not new to programming. I really appreciate your help with this. I'll paste my answer from the forums here, so everyone can see (and maybe bring ideas on how to do it differently?)... In 2011 I would use a Data Extender to change the icon. Since this is 2009 you will need to use the less elegant predecessor: the GUI Responder Extension. Essentially you need to manipulate the XML that is returned for the relevant requests (such as the GetList on a Folder). 
I couldn't immediately find any documentation on this - which is not surprising as it is an older version. But it boils down to this: Create a .NET assembly containing a class with the following method signature and attribute: [ResponseMessageHandler] public XmlDocument HandleMessage(XmlDocument messageXml, string userName, HttpContext httpContext, object tcmSession) In that method, you can change the icon set in the XML based on your own logic. In the extension configuration file, add a section to hook into the response for the lists you care about (substitute "YourResponderExtension.dll" with the name of the assembly you added): <ProcessResponse> <!-- GetList --> <ExecuteWhen>/tcmapi:Message/tcmapi:Response/tcmapi:Request/tcmapi:GetList</ExecuteWhen> <!-- Handler for all of the above --> <Execute>/bin/YourResponderExtension.dll</Execute> </ProcessResponse> Add more ExecuteWhen elements before the Execute if applicable - and make the XPath query as specific as you can to avoid your extension being called unnecessarily. You might also need to check for more cases in the .NET code that you can't cover with the XPath query. ZIP up your extension and deploy it with TcmExtensionInstaller.exe. From your text I'm assuming you've already worked out how to create and package an extension in 2009. I hope that these small steps can get you started. If you have any trouble or follow-up questions, just let me know and I'll see if I can answer them.
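The icon-code scheme discussed above (T64 = page, L0 = not checked out, P1 = published, plus the proposed E segment for the lock extension) can be sketched as a tiny parser. This is only an illustration of the naming convention from the question - Tridion itself decodes the code inside its DLL, and the E segment is Cristina's hypothetical addition, not a real Tridion field:

```python
import re

def parse_icon(icon):
    """Decode an Icon attribute like 'T64L0P1' or the proposed 'T64L0P1E01'."""
    m = re.fullmatch(r"T(\d+)L(\d)P(\d)(?:E(\d+))?", icon)
    if m is None:
        raise ValueError("unrecognized icon code: %r" % icon)
    item_type, checked, published, ext = m.groups()
    return {
        "item_type": int(item_type),   # 64 = page
        "checked_out": checked == "1", # L0 = not checked out
        "published": published == "1", # P1 = published
        "ext_lock": ext,               # hypothetical extension segment, e.g. "01"
    }

print(parse_icon("T64L0P1"))
# {'item_type': 64, 'checked_out': False, 'published': True, 'ext_lock': None}
```

A GetPNGIconName-style function would then only need the extra ext_lock field to pick the new icon variant.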
Functions called when showing and hiding UISearchController What functions are called that I can use/override when a UISearchController's search bar is tapped and when Cancel is tapped? I want to do things (like adjusting the table view offset) when the search bar and scope bar options are shown and then hidden. You can use the UISearchBarDelegate functions. Set your view controller as a delegate of searchController.searchBar and implement the needed functions. The docs are here.
How to get arguments of an Objective-C method while executing it I created a simple example trying to get the first argument of the method; if it works I plan to extend it to any number of arguments: + (void)simpleTest:(NSInteger)teamId { unsigned int argumentCount = method_getNumberOfArguments(class_getClassMethod([self class], _cmd)); for (unsigned int i = 2; i < argumentCount; i++) { const char *argumentType = method_copyArgumentType(class_getClassMethod([self class], _cmd), i); if (strcmp(argumentType, @encode(NSInteger)) == 0) { NSInteger argumentValue = *((NSInteger *)(((uintptr_t)self) + sizeof(void *) * i)); NSLog(@"Argument %d: %ld", i - 1, (long)argumentValue); } free((void *)argumentType); } } I call it like this: [self simpleTest:22]; and the log prints: Argument 1:<PHONE_NUMBER>41760 I think the problem happens in the piece of code where I want to get 'argumentValue', but I don't know how to fix it. self is a pointer to your class containing the function +simpleTest. Why should the argument to a function be located at a certain offset behind self? It's probably located on the stack or in a register, depending on the ABI used to call a function... After thinking more about it, I think you are confusing self -- which is a pointer to your class -- with the location of the first implicit argument. I.e. you probably want something like ((uintptr_t)&self) + sizeof .... However, even if the implicit argument is located on the stack, the compiler may return the location of a local copy of the argument rather than the actual argument location. What you're looking for is varargs. I'm duping this to the question that answers that. Please comment here if you believe there is more to your question and we can reopen this one. The code you're writing here relies on a particular parameter-passing approach that is neither correct (see mschmidt's discussion of the ABI), nor reliable if it happened to be correct. But it's luckily also unnecessary. 
@Rob Napier I want to get the arguments of an existing method dynamically, which was not created with varargs. @mschmidt I think I only care about the value of the argument that is passed in. Each platform has its own convention for passing arguments, and it depends on the number and types of the arguments. If you're working with aarch64, start with the ABI, taking into account Apple's differences. You'll probably want to skip down to section 6.8 to get started. A major reason for varargs is to provide a consistent way to do this. Note many things are passed in registers, not on the stack. If you can update your question to cover the specific architecture(s) you're targeting, and what kinds of arguments (will it always be a series of 64-bit integers, or could it be arbitrary things?), it would be reasonable to reopen this. It tends to be a bit tedious and fiddly, so I don't know if you'll get an answer, but it would be a different question. Some sense of why you're doing this might help us get you onto an easier path. @RobNapier Generally speaking, I want to save the method name and the arguments of any method, so that I can 'reproduce' the method at any time with an NSInvocation instance created from the saved method name and arguments. I'm not 100% sure what you're trying to build here, but I strongly suspect what you really want is an NSProxy-based surrogate object that implements forwardInvocation:. This will create the NSInvocation for you, and you can then pass that along to the underlying object and also save it. For more details, see https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/ObjCRuntimeGuide/Articles/ocrtForwarding.html#//apple_ref/doc/uid/TP40008048-CH105 And don't forget about NSUndoManager, which is possibly relevant to problems in this space. For some simple implementations of this, see the trampolines in https://github.com/iosptl/ios7ptl/tree/master/ch24-DeepObjC/ObserverTrampoline.
WinForm: How to implement button with correct background colors I'm developing a WinForms app with C#, and I created a custom button inherited from UserControl as shown below: public partial class UserButton : UserControl { public UserButton(string UserID) { this.Size = new Size(32, 50); this.BackColor = Color.Transparent; } protected override void OnPaint(PaintEventArgs e) { Graphics g = e.Graphics; g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic; Img = WaseelaMonitoring.Properties.Resources.T; g.DrawImage(Img, 0, 0, this.Size.Width, this.Size.Height); } } Note: this is the button PNG image (Click here) Now, I want to show some buttons on a picture box using this code: UserButton TagButton1 = new UserButton("Button1"); TagButton1.Location = Points[0]; UserButton TagButton2 = new UserButton("Button2"); TagButton2.Location = Points[1]; UserButton TagButton3 = new UserButton("Button3"); TagButton3.Location = Points[2]; Picturebox1.Controls.Add(TagButton1); Picturebox1.Controls.Add(TagButton2); Picturebox1.Controls.Add(TagButton3); Picturebox1.Invalidate(); Okay, when showing only one button on the picture box, the button background is transparent (as I want), like this: But if I show two or more buttons next to each other, the button background is white, not transparent, like this: I'm invalidating the picture box, and tried invalidating the button also, but it does not solve the problem. How about using TransparencyKey for your user control? Something like: TransparencyKey = Color.White (from inside the user control itself) WinForms does not support true Z-ordering of components; windowed controls (such as Button and UserControl) cannot have true alpha-channel support, and the this.BackColor = Color.Transparent trick is actually a special case where the control will re-paint its parent's background image or color to itself first. 
If you are after a more flexible user experience, I suggest switching to WPF, or doing all of your painting within a single WinForms Control. I solved this problem by adding this line to the initializing constructor: SetStyle(ControlStyles.Opaque, true); and overriding this property: protected override CreateParams CreateParams { get { const int WS_EX_TRANSPARENT = 0x00000020; CreateParams cp = base.CreateParams; cp.ExStyle |= WS_EX_TRANSPARENT; return cp; } }
Error 400: invalid_request - Custom URI scheme is not supported on Chrome apps I'm trying to run OAuth in my Chrome extension, but when I try to log in, here's the error I get. Here is what I've verified: the ID of my extension and the ID of my OAuth 2.0 client are the same. Here's the function that runs: const signInWithGoogle = () => { // chrome.runtime.sendMessage({ message: 'google_sign_in' }); console.log('test'); chrome.identity.getAuthToken({ interactive: true }, function (token) { if (chrome.runtime.lastError) { console.error(chrome.runtime.lastError); } else { console.log(token); } }); }; And here is my manifest: "oauth2": { "client_id": "xxx", "scopes": ["https://www.googleapis.com/auth/userinfo.email"] }, "permissions": [ "downloads", "storage", "activeTab", "contextMenus", "identity" ], There's a GitHub issue on the same problem as well: https://github.com/GoogleChrome/developer.chrome.com/issues/7434 Even if this doesn't work, I'd love it if someone could point me to an alternative. If you follow the OAuth 2.0: authenticate users with Google tutorial, you will be able to make it work, but you must configure some extra steps in your extension. You have to add a scope in the manifest file: { // ... "oauth2": { "client_id": "<your_extension_id>.apps.googleusercontent.com", "scopes": ["https://www.googleapis.com/auth/userinfo.email"] // <-- here }, "key": "<your_public_key>" } You have to use the Google Chrome browser. It doesn't work on Brave or Chromium, as you will see: 2.1 On the Brave browser it didn't work because there is a bad request (I guess when Brave calls the Google API, it sends some headers in the request which the Google API verifies, like the user agent; it will only accept Google Chrome). 2.2 On Google Chrome it worked. You were right. I was trying on the Arc browser and it was failing there with: Error 400: invalid_request Custom URI scheme is not supported on Chrome apps. I tried on Google Chrome and it worked!! I wonder why though! 
@KushagraGour I recently saw an article by Brave Browser talking about this: https://github.com/brave/brave-browser/wiki/Allow-Google-login---Third-Parties-and-Extensions
Why volatile and MemoryBarrier do not prevent operations reordering? If I understand the meaning of volatile and MemoryBarrier correctly, then the program below should never be able to show any result. It catches reordering of write operations every time I run it. It does not matter if I run it in Debug or Release, nor whether I run it as a 32-bit or 64-bit application. Why does it happen? using System; using System.Threading; using System.Threading.Tasks; namespace FlipFlop { class Program { //Declaring these variables as volatile should instruct the compiler to //flush all caches from registers into memory. static volatile int a; static volatile int b; //Track the number of iterations it took to detect operation reordering. static long iterations = 0; static object locker = new object(); //Indicates that operation reordering is not found yet. static volatile bool continueTrying = true; //Indicates that the Check method should continue. static volatile bool continueChecking = true; static void Main(string[] args) { //Restarting the test until able to catch reordering. while (continueTrying) { iterations++; var checker = new Task(Check); var writter = new Task(Write); lock (locker) { continueChecking = true; checker.Start(); } writter.Start(); checker.Wait(); writter.Wait(); } Console.ReadKey(); } static void Write() { //Writing is locked until Main starts the Check() method. lock (locker) { //Using a memory barrier should prevent operation reordering. a = 1; Thread.MemoryBarrier(); b = 10; Thread.MemoryBarrier(); b = 20; Thread.MemoryBarrier(); a = 2; //Stops spinning in the Check method. continueChecking = false; } } static void Check() { //Spins until it finds operation reordering or is stopped by the Write method. 
while (continueChecking) { int tempA = a; int tempB = b; if (tempB == 10 && tempA == 2) { continueTrying = false; Console.WriteLine("Caught when a = {0} and b = {1}", tempA, tempB); Console.WriteLine("In " + iterations + " iterations."); break; } } } } } b/c this is their very idea. Memory (write) barriers just make sure all operations up to that moment are flushed, hence the following ones are ordered past the barrier. The most interesting thing in your code is that removing all the Thread.MemoryBarrier(); lines fixes your problem =) @Mikant: No, that does not fix the problem. It just makes it very very unlikely. Let it run for a few days and it still might happen. I don't think this is re-ordering. This piece of code is simply not thread-safe: while (continueChecking) { int tempA = a; int tempB = b; ... I think this scenario is possible: int tempA = a; executes with the values of the last loop (a == 2) There is a context switch to the Write thread b = 10 and the loop stops There is a context switch to the Check thread int tempB = b; executes with b == 10 I notice that the calls to MemoryBarrier() enhance the chances of this scenario. Probably because they cause more context switching. Did you doubt yourself and delete? @Marc: I had the right answer/insight but on 2nd reading I got confused by a typo of my own :}. You aren't cleaning the variables between tests, so (for all but the first) initially a is 2 and b is 20 - before Write has done anything. Check can get that initial value of a (so tempA is 2), and then Write can get in, getting as far as changing b to 10. Now Check reads b (so tempB is 10). Et voila. No re-ordering necessary to repro. Reset a and b to 0 between runs and I expect it will go away. edit: confirmed; "as is" I get the issue almost immediately (<2000 iterations); but by adding: while (continueTrying) { a = b = 0; // reset <======= added this it then loops for any amount of time without any issue. 
Or as a flow:

Write thread                       a   b   Check thread
(values left from previous run)    2   20  int tempA = a;   // tempA == 2
a = 1;                             1   20
Thread.MemoryBarrier(); b = 10;    1   10  int tempB = b;   // tempB == 10

@Dennis no problem; I think he deleted then undeleted, so it wasn't there. Accepting his is the correct thing to do. The result has nothing to do with reordering, with memory barriers, or with volatile. All these constructs are needed to avoid the effects of compiler or CPU reordering of the instructions. But this program would produce the same result even assuming a fully consistent single-CPU memory model and no compiler optimization. First of all, notice that there will be multiple Write() tasks started in parallel. They run sequentially due to the lock() inside Write(), but a single Check() method can read a and b produced by different instances of the Write() tasks. Because the Check() function has no synchronization with the Write function, it can read a and b at two arbitrary and different moments. There is nothing in your code that prevents Check() from reading a produced by the previous Write() at one moment and then reading b produced by the following Write() at another moment. First of all you need synchronization (a lock) in Check(), and then you might (but probably not in this case) need memory barriers and volatile to fight memory model problems. This is all you need: int tempA, tempB; lock (locker) { tempA = a; tempB = b; } It looks a bit more interesting than that. If there is no re-ordering, what scenario would give those values for tempA/tempB? Note there is only one Write per test (the lock is just meant to delay access so the Write doesn't happen too soon; as it happens, it doesn't necessarily do this, as there may be a delay between Start and the actual start - but it comes close enough, it seems) @Marc - the Checker starts before Writer, so it has a chance to observe all the writes that Writer does. 
MemoryBarrier in Writer only makes things worse, since it increases the chance for Checker to see all intermediate values The OP is not trying to discuss mutex - it really is focusing on the intended re-ordering prevention. The "lock" suggestion at the end really misses what he is trying to illustrate, IMO. Again, the lock in the question is not intended to provide a mutex; it is just intended to delay Write until the reader has started. @Marc - there are MANY Write() per test. Maybe that is the real cause of confusion :) @Michael - no, one Write, many read loops. By "test" I mean iterations++ "that there will be multiple Write() tasks started in parallel" - this is very wrong. 1 Write / iteration. If you use MemoryBarrier in writer, why don't you do that in checker? Put Thread.MemoryBarrier(); before int tempA = a;. Calling Thread.MemoryBarrier(); so many times blocks all of the advantages of the method. Call it only once before or after a = 1;. This doesn't really explain what's going on. How do these suggestions fix the problem in terms of the .NET memory model? @dtb it was a bit clearer before you edited my post and deleted a line that my post could be a clue to @Dennis... there is nothing mysterious happening in his code. and there are no problems with the .NET memory model. everything works as written. so i think Dennis is able to get an answer to the question following my writings. Maybe Dennis is able to get the answer following your clues, but why don't you simply provide the answer directly for everyone to see?
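The lock-based fix suggested above can be sketched outside C# too. Below is a Python analog (Python is used purely as a language-neutral stand-in for the C# sample; the iteration count is arbitrary, and `write`/`check` mirror the Write/Check methods from the question): because reader and writer take the same lock around *both* variables, the mixed pair (tempA == 2, tempB == 10) can never be observed.

```python
import threading

a, b = 0, 0
lock = threading.Lock()

def write():
    global a, b
    with lock:        # writer updates BOTH variables inside one critical section
        a = 1
        b = 10

def check(seen):
    with lock:        # reader takes the same lock, so no torn pair is visible
        seen.append((a, b))

for _ in range(200):
    a, b = 2, 20      # stale values left over from a "previous iteration"
    seen = []
    t1 = threading.Thread(target=check, args=(seen,))
    t2 = threading.Thread(target=write)
    t1.start(); t2.start()
    t1.join(); t2.join()
    # only (2, 20) or (1, 10) are possible -- never the mixed (2, 10)
    assert seen[0] in [(2, 20), (1, 10)], seen[0]
print("no torn reads in 200 runs")
```

Without the two `with lock:` blocks, the mixed pair shows up sooner or later, which is exactly the scenario described in the accepted answer.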
srcset - Responsive image loading wrong file My goal is to serve different versions (resolutions/sizes) of the same image, which should occupy 100% of the width of the viewport, so I can serve a 800px version to mobile devices (or, generally, devices with smaller displays or slower connections), 1366px and above to larger desktop displays. The problem is that I'm testing it with the Chromium device emulator and some small screen devices load the 1366px version instead of the 800px: the iPhone 6/7/8 (375px width) loads the 800px image, but the iPad (768px), Nexus 5 (360px) and iPhoneX (375px) all load the 1366px instead of loading the 800px. I'm not very confident of having understood the sizes directive properly, here's my code, the default src references the 2880px version just to help testing: <img class="img-fluid" srcset="img/dreamstime_800w_109635242.jpg 800w, img/dreamstime_1366w_109635242.jpg 1366w, img/dreamstime_2880w_109635242.jpg 2880w" sizes="(max-width: 800px) 100vw, (max-width: 1366px) 100vw, 2880px 100vw" src="img/dreamstime_2880w_109635242.jpg"/> This has to do with retina displays (and their DPI, I think). From what I've heard, retina displays will pick the first image that is either twice or three times the width of their display, depending on their respective retina display (2x, 3x etc). Another simple solution would be clearing your browser cache. If your biggest and baddest image has already been cached, Chrome (for example) will always load that image instead. The problem is indeed related to the screen's DPI, this article explains it well: https://css-tricks.com/responsive-images-youre-just-changing-resolutions-use-srcset/ Also make sure you test with responsive mode disabled, because responsive mode may change the pixel density of your device. I made a more detailed answer to explain. Your sizes attribute tells the browser the image is always shown full viewport width, so you could simply replace it with sizes="100vw". 
So the browser takes the current viewport width, multiplies it by the screen density, and that gives it the width of the required image. It then takes the closest image from the list in the srcset. You can't use it to ”serve a 800px version to mobile devices”, because most mobile devices nowadays have a higher density than desktop devices and you can't prevent it with <img srcset… sizes…>. If you really want to ignore screen density (for what reason?) and: serve the smallest image to small devices, serve the medium image to medium devices, serve the large image to large devices, and keep the largest image as the fallback, then you have to use <picture> with media queries like that: <picture> <source media="(max-width: 800px)" srcset="img/dreamstime_800w_109635242.jpg 800w"> <source media="(max-width: 1366px)" srcset="img/dreamstime_1366w_109635242.jpg 1366w"> <img src="img/dreamstime_2880w_109635242.jpg"> </picture> I understand and agree with your answer generally, but you can achieve the same by simply using srcset: https://css-tricks.com/responsive-images-youre-just-changing-resolutions-use-srcset/ That's not true, unless you remove "so I can serve a 800px version to mobile devices" in your question. It would have been helpful if you added some explanation beyond "that's not true", but I've linked an article sustaining my previous claim if you care to elaborate... Well, sorry, but I thought my first answer was already pretty elaborate. If you use srcset with multiple images, the browser will take the one it needs to fill viewport × density, so if you want to restrict a 800px viewport to an image 800px wide, either you don't provide any image above that size, or you use <picture>. If you only use srcset with images larger than 800px and you have a 2dppx screen with a 800px viewport, the browser will try to find a 1600px image. for future readers - i had a hard time understanding why srcset was totally ignoring my attempts.
After a couple hours of frustration and anger I came upon the most obvious answer in the world - I'm working on a Retina Macbook Pro, and I wasn't triggering pixel density. I changed it to <img class="<?= $image_class;?>" data-caption="<?php echo $image['alt']; ?>" data-src="<?php echo $image['sizes']['medium']; ?>" data-srcset="<?php echo $image['sizes']['medium']; ?> 600w 2x, <?php echo $image['sizes']['large']; ?> 1280w 2x" sizes="(min-width: 150px) 600px, 100vw" width="150" height="150" alt="<?php echo $image['alt']; ?>"> and everything worked - at least I could figure out something was happening. Phew. Merry Xmas everybody! Testing my web pages with the Chrome emulator, I sometimes found (like you did) that an image larger than necessary was apparently being loaded, particularly on small mobile devices. I finally realized that this was not an error on my part but Chrome pulling a larger image from cache if it happened to have one there. Clicking the checkbox in the emulator to disable the cache while testing, the problem went away and the sizes I expected to see loaded were in fact loaded. I noticed that when the chrome devTools is open with the responsive mode enabled, my device pixel ratio changed from 1 to 2. As a result, even with cache disabled in network tab, it seemed to me the browser was loading a larger file than necessary, but it's because the pixel ratio changed when: the devTools was open and responsive mode was enabled. You can double check with a simple javascript alert the pixel ratio at any given time: <script>alert(window.devicePixelRatio)</script> It's likely you have responsive mode enabled if you try to load your page at multiple widths for testing, but it's also what might change the pixel density. Thus, it might just work as intended like it was for me. simple answer: sizes="(max-width: 999px) 50vw, 100vw"
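To see why small devices ask for big files, the browser's candidate selection can be approximated in a few lines of JavaScript. This is a simplification, not the spec'd algorithm: a common heuristic is "smallest `w` candidate whose implied density still covers the device pixel ratio, else the largest"; the `pick` helper is purely illustrative, using the widths from the question.

```javascript
// srcset candidates from the question (the "w" descriptors, in pixels)
const candidates = [800, 1366, 2880];

// With sizes="100vw" the browser needs viewport * devicePixelRatio device
// pixels; take the smallest candidate whose density (width / viewport)
// covers the DPR, falling back to the largest candidate otherwise.
function pick(viewport, dpr) {
  const densities = candidates.map(w => w / viewport);
  const covering = densities.filter(d => d >= dpr);
  const chosen = covering.length ? Math.min(...covering) : Math.max(...densities);
  return candidates[densities.indexOf(chosen)];
}

console.log(pick(375, 2)); // iPhone 6/7/8 -> 800
console.log(pick(360, 3)); // Nexus 5     -> 1366
console.log(pick(375, 3)); // iPhone X    -> 1366
```

With the question's widths this reproduces the iPhone 6/7/8, Nexus 5 and iPhone X observations; for the iPad (768px at 2dppx) the rule would pick 2880, so the 1366 seen in the question is most plausibly the cache effect described in the answers.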
Server not picking up information from database and passing it to client I am trying to get my server to get the sso from the logged in user (web) and pass that to an AS3 client. If I set a specific SSO in the client (below) the server picks up the user from the database. Currently I get the error: ERROR 1: You have an invalid SSO ticket. Please re-login and then reload. package { import com.archicruise.external.RoomManager; import com.archicruise.server.Connection; import flash.display.Bitmap; import flash.display.BitmapData; import flash.display.LoaderInfo; import flash.display.Sprite; import flash.events.Event; import flash.system.Security; import flash.system.System; public class Main extends Sprite { [Embed(source = '../assets/client_back.png')] private static const clientBackImage:Class; public static var SITE_URL:String = "http://localhost/archicruise/"; public var roomLoader:RoomManager; private var connection:Connection; public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); //Add client background addChild(new clientBackImage() as Bitmap); //Got an SSO ticket?
var ssoTicket:String = LoaderInfo(this.root.loaderInfo).parameters["sso"]; if (ssoTicket == "" || ssoTicket == null) ssoTicket = "2e44550b0d6e98cc9f26c39e53213e24"; //Initialize the connection Security.allowDomain("*"); connection = new Connection("localhost", 9339, this, ssoTicket);; } } } I am getting the ssoTicket value after a user logs into a website and launches the page with the SWF like so: var flashvars = { sso: "<?php echo $self['sso_ticket']; ?>" }; The Handler from the server: using System; using System.Collections.Generic; using System.Linq; using System.Text; using ParticleFramework.Communication; using ParticleFramework.Storage; using ParticleFramework; using MySql.Data.MySqlClient; using ArchiCruise.Rooms; namespace ArchiCruise.Users { static class Handler { public static List<UserObject> clientObjects = new List<UserObject>(); public static void login(string ssoTicket, TcpClient client) { if (ssoTicket == "") { client.Disconnect(); return; } Log.Info("Client " + client.index + " logging in with SSO: " + ssoTicket); if (DBManager.database.getString("SELECT COUNT(*) FROM users` WHERE sso_ticket like '%" + ssoTicket.Trim() + "%'") != "0") { DBManager.database.closeClient(); //build the user object UserObject userObject = newObject(ssoTicket, client); foreach (UserObject user in clientObjects) { if (user.username == userObject.username) { user.tcpClient.Disconnect(); } } if (clientObjects.Count <= client.index || clientObjects[client.index] == null) { client.userObject = userObject; clientObjects.Add(userObject); } else { client.userObject = userObject; clientObjects[client.index] = userObject; } client.sendData("LO" + (char)13 + userObject.ToPrivate()); DBManager.database.closeClient(); } else { DBManager.database.closeClient(); client.sendData("ER 1: You have an invalid SSO ticket. 
Please re-login and then reload."); } } public static void toAll(string Data) { foreach (UserObject user in clientObjects) { user.tcpClient.sendData(Data); } } public static void toAll(string Data, Boolean disconnect) { foreach (UserObject user in clientObjects) { user.tcpClient.sendData(Data); if (disconnect) user.tcpClient.Disconnect(); } } public static void toUser(string Data, string uname) { foreach (UserObject user in clientObjects) { if (user.username.ToLower() == uname.ToLower()) { user.tcpClient.sendData(Data); } } } public static void toUser(string Data, string uname, Boolean disconnect) { foreach (UserObject user in clientObjects) { if (user.username.ToLower() == uname.ToLower()) { user.tcpClient.sendData(Data); if (disconnect) { user.tcpClient.Disconnect(); } } } } public static void toRoom(int roomID, TcpClient client) { if (clientObjects.Count >= client.index && client.userObject.roomID != roomID) { Log.Info("Client " + client.index + " going to public room " + roomID); if (DBManager.database.getString("SELECT COUNT(*) FROM `public` WHERE `id` = '" + roomID + "';") != "0") { DBManager.database.closeClient(); //kick plz if (client.userObject.roomID > 0) { client.userObject.toRoom("KO " + client.userObject.username); } //update user object MySqlDataReader mysqlRead = DBManager.database.getCommand("SELECT * FROM `public` WHERE `id` = '" + roomID + "' LIMIT 1").ExecuteReader(); mysqlRead.Read(); client.userObject.toRoom(roomID, Convert.ToInt32(mysqlRead["startpos"].ToString().Split(',')[0]), Convert.ToInt32(mysqlRead["startpos"].ToString().Split(',')[1])); client.sendData("RO" + mysqlRead["layout"].ToString() + (char)13 + mysqlRead["name"].ToString() + (char)13 + (char)12 + mysqlRead["heightmap"].ToString() + (char)12 + mysqlRead["warps"].ToString()); DBManager.database.closeClient(); } else { DBManager.database.closeClient(); client.sendData("ER 1: You have an invalid SSO ticket. 
Please re-login and then reload."); } } } public static void moveUser(TcpClient client, int _x, int _y) { client.userObject.x = _x; client.userObject.y = _y; client.userObject.toRoom("MV " + client.userObject.username + " " + _x + " " + _y); } public static void sendNavigationList(TcpClient client, int pub) { string nList = "NV" + (char)13; MySqlDataReader mysqlRead = DBManager.database.getCommand("SELECT * FROM `public` WHERE `show` = 'yes' AND `public` = '" + pub + "'").ExecuteReader(); while (mysqlRead.Read()) { nList += mysqlRead["id"].ToString() + (char)14 + mysqlRead["name"].ToString() + (char)13; } DBManager.database.closeClient(); client.sendData(nList); } public static void sendUserList(TcpClient client) { string userList = "UE" + (char)13; client.userObject.toRoom("UL" + (char)13 + client.userObject.ToString()); foreach (UserObject user in clientObjects) { if (user.roomID == client.userObject.roomID && user.tcpClient != null) { if (user.username != client.userObject.username && !userList.Contains(user.username + "@")) { userList += user.ToString(); } } } client.sendData(userList); //Send room object client.sendData("RB" + (char)13 + RoomObjects.buildObjects(client.userObject.roomID)); } public static UserObject newObject(string ssoTicket, TcpClient tClient) { MySqlDataReader mysqlRead = DBManager.database.getCommand("SELECT * FROM `users` WHERE `sso_ticket` = '" + ssoTicket + "' LIMIT 1").ExecuteReader(); mysqlRead.Read(); return new UserObject(mysqlRead["name"].ToString(), Convert.ToInt32(mysqlRead["rank"]), Convert.ToInt32(mysqlRead["credits"]), tClient); } } } Requested DBManager Class using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace ParticleFramework.Storage { static class DBManager { public static Database database; public static Boolean Initialize(string type, string user, string pass, string host, string dbname) { switch (type) { case "sql": database = new MySQL(); break; 
default: Log.Error("Invalid database type! (" + type + ")"); break; } if (database != null) { return database.connect(user, pass, dbname, host); } else { return false; } } } } MySQL Class using System; using System.Collections.Generic; using System.Linq; using System.Text; using MySql.Data.MySqlClient; namespace ParticleFramework.Storage { class MySQL : Database { private MySqlConnection connection; public Boolean connect(string username, string password, string database, string host) { try { connection = new MySqlConnection(buildConnectionString(username, password, database, host)); Console.WriteLine("Database connected. Running test query..."); getString("SHOW TABLES FROM `" + database + "`"); Log.Info("Test query succeeded. Database initialized."); closeClient(); return true; } catch (Exception e) { Log.Error("MySQL Connect: " + e.Message); return false; } } public string getString(string query) { try { string resultStr = getCommand(query).ExecuteScalar().ToString(); closeClient(); return resultStr; } catch (Exception e) { Log.Error("MySQL getString: " + e.Message); return ""; } } public MySqlCommand getCommand(string query) { try { if (connection.State != System.Data.ConnectionState.Closed) { connection.Close(); } MySqlCommand command = newCommand(); command.CommandText = query; connection.Open(); return command; } catch (Exception e) { Log.Error("MySQL getCommand: " + e.Message); return null; } } public void noCommand(string query) { try { if (connection.State != System.Data.ConnectionState.Closed) { connection.Close(); } MySqlCommand command = newCommand(); command.CommandText = query; connection.Open(); command.ExecuteNonQuery(); connection.Close(); } catch (Exception e) { Log.Error("MySQL noCommand: " + e.Message); } } public void closeClient() { try { if (connection.State == System.Data.ConnectionState.Open) { connection.Close(); } } catch (Exception e) { Log.Error("MySQL closeClient: " + e.Message); } } public MySqlCommand newCommand() { try { return 
connection.CreateCommand(); } catch (Exception e) { Log.Error("MySQL newCommand: " + e.Message); return null; } } public string buildConnectionString(string username, string password, string database, string host) { return "Database=" + database + ";Data Source=" + host + ";User Id=" + username + ";Password=" + password; } } } Database Class using System; using System.Collections.Generic; using System.Linq; using System.Text; using MySql.Data.MySqlClient; namespace ParticleFramework.Storage { interface Database { Boolean connect(string username, string password, string database, string host); MySqlCommand newCommand(); MySqlCommand getCommand(string query); string buildConnectionString(string username, string password, string database, string host); string getString(string query); void noCommand(string query); void closeClient(); } } LOG INFO AFTER SSO STRING CHANGE >[1/1/0001 00:00:00] <IP_ADDRESS>connected. Full <IP_ADDRESS>:56765 >[1/1/0001 00:00:00] Got LO null from client 0 >[1/1/0001 00:00:00] Client 0 logging in with SSO: null >[ERROR]Packet handler: MySql.Data.MySqlClient.MySqlException (0x80004005): Invalid attempt to access a field before calling Read() > at MySql.Data.MySqlClient.ResultSet.get_Item(Int32 index) > at MySql.Data.MySqlClient.MySqlDataReader.GetFieldValue(Int32 index, Boolean checkNull) > at MySql.Data.MySqlClient.MySqlDataReader.GetValue(Int32 i) > at MySql.Data.MySqlClient.MySqlDataReader.get_Item(Int32 i) > at MySql.Data.MySqlClient.MySqlDataReader.get_Item(String name) > at ArchiCruise.Users.Handler.newObject(String ssoTicket, TcpClient tClient) in C:\Users\Daniel\Desktop\AC\Particle Server\Particle Server\ArchiCruise\Users\Handler.cs:line 188 > at ArchiCruise.Users.Handler.login(String ssoTicket, TcpClient client) in C:\Users\Daniel\Desktop\AC\Particle Server\Particle Server\ArchiCruise\Users\Handler.cs:line 31 > at ArchiCruise.ArchiCruisePackets.handle(String packet, TcpClient client) in C:\Users\Daniel\Desktop\AC\Particle 
Server\Particle Server\ArchiCruise\ArchiCruisePackets.cs:line 23 >[1/1/0001 00:00:00] Client0 disconnected and removed. Tcpclient Class using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net; using System.Net.Sockets; namespace ParticleFramework.Communication { class TcpClient { #region Required Variables public Socket socket; public int index; private byte[] dataBuffer = new byte[0x400]; private AsyncCallback ReceiveCallback; private AsyncCallback SendCallback; #endregion #region ArchiCruise Vars public ArchiCruise.Users.UserObject userObject; public string ip; #endregion public TcpClient(Socket sock, int num) { index = num; socket = sock; ip = socket.RemoteEndPoint.ToString().Split(new char[] { ':' })[0]; ReceiveCallback = new AsyncCallback(this.ReceivedData); SendCallback = new AsyncCallback(this.sentData); this.WaitForData(); } public void Disconnect() { if (socket.Connected) { socket.Close(); if (userObject != null) userObject.remove(); Particle.Server.removeClient(this); Log.Info("Client" + this.index + " disconnected and removed."); Console.WriteLine("Client" + this.index + " disconnected."); } } private void ReceivedData(IAsyncResult iAr) { try { int count = 0; try { count = socket.EndReceive(iAr); } catch { Disconnect(); } StringBuilder builder = new StringBuilder(); builder.Append(System.Text.Encoding.Default.GetString(this.dataBuffer, 0, count)); string str = System.Text.Encoding.Default.GetString(this.dataBuffer, 0, count); if (str.Contains("<policy-file-requet/>")) { Log.Info("Sending policy file to client" + this.index); rawSend("<?xml version\"1.0\"?><cross-domain-policy><allow-access-from-domain=\"*\" to-ports=\"*\" /><cross-domain-policy>" + Convert.ToChar(0)); } else if (!(str.ToString() == "")) { string packet = str.Substring(0, str.Length - 1); //packet = ArchiCruise.Security.Encryption.decrypt(packet); Log.Info("Got " + str + " from client " + this.index); Particle.packetClass.handle(packet, 
this); } else { Disconnect(); } } catch (Exception exception) { Log.Info("Data recieve error: " + exception.ToString() + " " + exception.Source); Disconnect(); } finally { this.WaitForData(); } } private void WaitForData() { try { socket.BeginReceive(this.dataBuffer, 0, this.dataBuffer.Length, SocketFlags.None, this.ReceiveCallback, socket); } catch { Disconnect(); } } public void sendData(string Data) { Data += (char)1; rawSend(Data); } internal void rawSend(string Data) { try { Data += "\0"; byte[] bytes = System.Text.Encoding.Default.GetBytes(Data); socket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, new AsyncCallback(this.sentData), null); Log.Info("Sent " + Data + " to client " + this.index); } catch { Disconnect(); } } private void sentData(IAsyncResult iAr) { try { socket.EndSend(iAr); } catch { Disconnect(); } } } } The line Log.Info("Client " + client.index + " logging in with SSO: " + ssoTicket); prints the correct ssoTicket? I will need to double check later but I'm 101% positive it does print correctly The type of DBManager.database is builtin, or a custom class? If it is a custom, can you share, at least the method getString? Do you mean built in? Either way its custom. I'll edit the code in a few hours as not home at this time. @anolsi I have updated the question to include the DBManager class, and confirmation here that the logged SSO is the same as what is set in Main.as [1/1/0001 00:00:00] Client 0 logging in with SSO: 73a448e7e4a3314d2d1a3f33588df9b8 Thanks. I want to see that only to be sure that nothing strange was happening there. Now, just to be sure, can you replace the query line by that one: DBManager.database.getString("SELECT COUNT(*) FROM `users` WHERE `sso_ticket` like '%" + ssoTicket.Trim() + "%'"). If that works it shouldn't stay that way (because it will allow any ssoTicket like a single space to bypass the security), but it will give us a hint about whether the problem is there or not.
In that case I think I will be able to help you more. After making that change (code edited to show) the server still builds and runs, and accepts the client when SSO is manually set in-client. Removing this line in the original question where SSO is set, no longer causes the ER1 error in the question, states client connected but does nothing else, so log info for that is also included in question Let us continue this discussion in chat. Can you please show me how you are injecting the flashvars variable (from javascript I think) on the SWF object in the DOM? Because you need to relate that somewhere... I have got the flashvars done, I don't have up to date code here at work. I know it is injecting as it is correctly logged but it is disconnecting the client after Log.Info("Client " + client.index + " logging in with SSO: " + ssoTicket); in the above handler class What I'm thinking is that your php is generating and sending the ssoTicket correctly, but it isn't being correctly loaded into the swf. That makes the swf send the incorrect ticket to C#. That is why I want to see how you really pass the ssoTicket to the swf object. I will add shortly but when I start the server and load the SWF it logs the correct SSO @anolsi it appears that the cross domain policy is causing the issue. It is refreshing the connection after it sends the policy (which it is meant to apparently) which is updating the ssoticket. Now I just need to find a way to stop the sso updating when the policy is sent Wait. I'm confused. Are you saying that the line Log.Info("Client " + client.index + " logging in with SSO: " + ssoTicket); logs the correct ticket one time, and an incorrect ticket another time? Or are you saying that the swf receives the correct sso ticket first, and after (because some server response or something) it receives a new sso ticket? When the page is called, it gets the correct ticket from the database and logs it.
It is covered here https://www.kirupa.com/forum/showthread.php?363222-TIP-Cross-Domain-Policy-Files-Quick-Fix-for-getting-them-to-work I think The issue is you are using ExecuteScalar to get a result set. You should use MySqlCommand.ExecuteReader Method https://dev.mysql.com/doc/connector-net/en/connector-net-ref-mysqlclient-mysqlcommandmembers.html#connector-net-ref-mysqlclient-mysqlcommand-executereader-overload-1 If you can tailor this answer to show how you would change the issue then I will dig out the code and check as this is nearly a year old (and still unsolved).
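Whichever of ExecuteScalar/ExecuteReader turns out to be at fault, one thing worth fixing regardless is that ssoTicket is concatenated straight into the SQL string, which is an injection risk. The sketch below shows a parameterized lookup, using Python's sqlite3 purely as a language-neutral illustration (in the C# code the analogous approach is adding values to MySqlCommand's Parameters collection); the table and column names mirror the question, and the sample row is made up.

```python
import sqlite3

# In-memory stand-in for the `users` table from the question; the row is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, rank INTEGER, credits INTEGER, sso_ticket TEXT)")
conn.execute("INSERT INTO users VALUES ('Daniel', 1, 100, '73a448e7e4a3314d2d1a3f33588df9b8')")

def find_user(sso_ticket):
    # The ? placeholder lets the driver escape the value -- no string concatenation.
    return conn.execute(
        "SELECT name, rank, credits FROM users WHERE sso_ticket = ? LIMIT 1",
        (sso_ticket.strip(),),
    ).fetchone()

print(find_user("73a448e7e4a3314d2d1a3f33588df9b8"))  # ('Daniel', 1, 100)
print(find_user("' OR '1'='1"))                       # None -> injection attempt finds nothing
```

An exact-match `= ?` lookup also avoids the `LIKE '%...%'` problem discussed in the comments, where an almost-empty ticket would match every row.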
How does Keras handle convolution in case the specified number of output features is less than the possible number of extracted features? It's all in the question: consider the case where we specify an output size of 64, but with the current convolution layer parameters we could extract 128 features. How is this handled in general, and in Keras in particular?
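For what it's worth, the number of filters you ask for is not capped by the input at all: each of the 64 (or 128) filters has its own kernel weights and produces its own output channel over the same spatial positions, so "64 vs 128" only changes how many channels (and weights) the layer has. A dependency-free sketch of the usual "valid" shape arithmetic (`conv_output_shape` is a hypothetical helper written in plain Python, not a Keras API; `kernel`, `stride` and `filters` follow Keras naming conventions):

```python
def conv_output_shape(h, w, in_channels, filters, kernel=3, stride=1):
    """Shape after a 'valid' 2D convolution: spatial dims shrink with the
    kernel, while the channel count is simply whatever `filters` you asked for."""
    out_h = (h - kernel) // stride + 1
    out_w = (w - kernel) // stride + 1
    # Each filter is a (kernel, kernel, in_channels) weight tensor plus a bias,
    # so the parameter count grows with `filters`, independent of input size.
    params = filters * (kernel * kernel * in_channels + 1)
    return (out_h, out_w, filters), params

print(conv_output_shape(28, 28, 3, 64))   # ((26, 26, 64), 1792)
print(conv_output_shape(28, 28, 3, 128))  # ((26, 26, 128), 3584)
```

The 1792 figure matches what `model.summary()` reports for `Conv2D(64, (3, 3))` on a 3-channel input, which is a handy sanity check.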
MongoStat via cron to file? I'm looking to run the mongostat command and have that export out to a file... I'm wondering if it's OK to run this via cron: given that it's a continuous stream of data, does that work OK if I enter the command as a cron job? Is there a better way? I want to ensure that if the server gets rebooted the command starts running automatically, hence I don't want to just run it as my user. Thoughts? looks like nohup might be of some use... What is your intended mongostat polling interval? The default output of every second wouldn't be suitable for a cron job which typically has a one minute granularity. In this case, nohup or similar would be more appropriate to start and continue collecting mongostat output in the background. Also, what is your goal in terms of collecting this data? If you're after a continuous stream of MongoDB metrics, an agent based approach would be better, ideally with charts & alerting. For example, you could use MongoDB Cloud or Ops Manager.
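If you do go the cron route, the two suggestions (nohup for a long-running stream, starting automatically after a reboot) can be combined with cron's @reboot directive; the binary paths, log file location and 60-second polling interval below are illustrative, not prescriptive:

```shell
# crontab -e (or a file under /etc/cron.d): start mongostat once at boot,
# polling every 60 s so the log stays a manageable size
@reboot /usr/bin/nohup /usr/bin/mongostat 60 >> /var/log/mongostat.log 2>&1 &
```

Since @reboot fires only once per boot, the process keeps streaming to the file until the next shutdown; adding a logrotate rule is worthwhile if it will run for long periods.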
c++ binary file add records, search and replace I am working on binary files in Turbo C++ 3.5 and I want to create a library program. I want to add information about books in a binary file and have functions such as search and replace, delete a record, etc. I wrote these functions but I have 2 problems: 1. For example, when I add 6 records about books to the file, the BooksReport function can't show all the records and only shows 4 or 5 of them, and when I search records, out of 5 records I only find 3 or 2. 2. When I search and replace a word in the file, all records that come before the edited record are deleted. #include <conio.h> #include <stdio.h> #include <stdlib.h> #include <string.h> void add(); void search(); struct { char name[20]; char id[2]; char publisher[20];} books, listbooks[100]; void main(){ add(); // search(); getch(); } //Add void add(){ FILE *pt; pt=fopen("books.dat","a"); clrscr(); printf("\t Please Enter Data for new book"); printf("\n Please enter Name:"); scanf("%s", &books.name ); printf("Please enter ID:"); scanf("%s", &books.id ); printf("Please enter Publisher:"); scanf("%s", &books.publisher); fwrite(&books, sizeof(books), 1,pt); fclose(pt); } void search(){ //Search and replace pt=fopen("books.dat","w+"); char replaceName[20]; char searchName[20]; rewind(pt); found=0; printf("Please enter search word \n"); scanf("%s", &searchName); printf("Please enter replace word \n"); scanf("%s", &replaceName); i=0; do{ i++; fread(&books, sizeof(books), i,pt); if(strcmp(searchName,books.name)==0){ found=1; strcpy(books.name,replaceName); fwrite(&books, sizeof(books), i,pt); break;} }while(!feof(pt)); clrscr(); if(found==1){ printf("Replace successful!"); } if(found==0){ printf("Not Found"); } fclose(pt); } Do you want to know why nobody answers? Because you're using a 22 year old compiler, non-standard code, have no indentation, and... Please help me. I am a student and have to use Turbo C++. What is the problem in my code?
If you want to use proper C++, it's probably impossible to help you without newer software. G++/MinGW (and others) are up to date and free. About the code in your question: a) It looks like you want to make a C program, not C++. b) conio.h c) Incomplete (no main, no function heads...) d)... No! My code runs successfully on Turbo C++, and I just have the 2 bugs that I want to solve. I have edited the code above and added the main() function. Please help me. There's no question in this post. Your code isn't going to fix itself magically, you know?! After you read a book with fread the file pointer is already pointing to the next book, so calling just fwrite to update it is not correct. You are anyway required to use fseek before switching between fread and fwrite. To know the position to fseek to you can use index * sizeof(books). Also your file should be opened using "binary mode" (flag b) because otherwise you are going to have problems when writing binary data that may contain \n characters in the uninitialized part. Note also that opening with "w+" truncates books.dat to zero length as soon as search() starts, which is why the records before the edited one disappear; open with "rb+" instead.
Write data to InfluxDB for an old date I am trying to write data into InfluxDB using the HTTP API. curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary "poc_test,First_Name=Ajay,Last_Name=Kumar Age=25" It successfully writes data with the current timestamp in the DB. However, I want to write data from two years back and want the timestamp to also be from two years back. Is there any way I can do this? I know I can write it by putting a timestamp at the end of the query, but I do not know how to convert that date to a timestamp. e.g. I have data like this: First_Name: Ajay Last_Name: Kumar. The above data belongs to 17/04/2016. When I insert this data into InfluxDB and run the query select * from poc_test I get:

    name: poc_test
    time                           First_Name Last_Name Age
    ----                           ---------- --------- ---
    2016-04-17T00:00:00.020170822Z Ajay       Kumar     25

The timestamp format is "nanoseconds since unix epoch". So convert your dates/times to nanoseconds, using the tools of your language/environment. For example, your timestamp "2016-04-17T00:00:00.020170822Z" would be "1460851200020170822" in nanoseconds. Do you know any such tool to convert times to nanoseconds? I tried with https://www.epochconverter.com/ but it is not providing the correct output. @AjayKashyap: should be possible using any high-level programming language. I assume you're not composing those curl commands manually? No, as of now I'm doing a POC to write legacy data to InfluxDB, so I have composed these commands manually for now, but plan to automate it at a later stage.
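The conversion only takes a few lines in any high-level language; a Python sketch (midnight UTC is assumed for the 17/04/2016 date, and the measurement/field names mirror the question):

```python
from datetime import datetime, timezone

# 17/04/2016 at midnight UTC -> nanoseconds since the Unix epoch
dt = datetime(2016, 4, 17, tzinfo=timezone.utc)
ts_ns = int(dt.timestamp()) * 1_000_000_000
print(ts_ns)  # 1460851200000000000

# Append it to the line-protocol write; curl can then POST this string
line = f"poc_test,First_Name=Ajay,Last_Name=Kumar Age=25 {ts_ns}"
print(line)
```

The resulting line can be sent with the same curl command as in the question, just with the nanosecond value appended after the fields.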
Does clicking a Pokémon on the map determine if it's shiny or not? I read this comment earlier (quoted in case it's removed, emphasis mine) and If a Pokémon is shiny for someone, is it shiny for everyone in the same location?: I also can confirm this: I just had a regular Magikarp on the map. Once the battle started it was yellow and had sparkles. So yeah, you need to invoke the battle. It wasn't shiny for another person, though The same Pokémon was shiny for one person and not another. This surprises me, because Pokémon are otherwise the same for all: same IVs and same CP for people on the same level (this has been my experience, playing with my partner of the same level; however, this does not appear to be other people's experience) What determines the shiny nature of a Pokémon? Is it determined for each player when it spawns, or when a player clicks the Pokémon? Does running and re-entering affect this (i.e. does running from a shiny and reclicking it keep its shininess)? Answering the last question should be enough to answer the first two. Pokemon are not the same CP for players at the same level. I've been the same level as a friend for a while and the Pokemon around us are pretty much never the same levels. I mean EXACT Pokémon. If you are the same level, and both click the same Pokémon, to the best of my knowledge it's the same CP That's what I'm talking about. We can stand next to each other and tap on the same Pokemon in the area and they will not be the same CP. Often they are several hundred CP apart. Thanks guys! That's so odd, I play very frequently with someone of the same level, and someone of a different level, and 100% of the time the Pokémon we encounter are the exact same CP. I've edited the question since that only appears to be my experience, not fact, although it does form one of the reasons the way shininess works surprised me CP stays the same if you run away and then click again, so my guess is shininess does too.
It'd be too exploitable otherwise. But doesn't the fact that the shininess is only determined once the battle is invoked imply that we could enter a fight with a Magikarp and repeatedly escape and rejoin to trigger its shininess? Pokémon are the same CP only for players L30+; if you are L29 or below then the CP will be randomized. Based on the experiment conducted in this thread, it's shown that the shininess of a Pokémon is determined serverside rather than clientside. That is, the shininess of a Pokémon isn't determined once you click on it; it was already determined when it spawned. If this were not the case, it would be possible to constantly restart your app after encountering a Magikarp to try to have it appear as a shiny eventually. Conversely, restarting the app after encountering a shiny Magikarp would likely mean that that same Magikarp would no longer be shiny if you re-encountered it. To prove this, the OP of the above thread encountered a shiny Magikarp then restarted their app. When they re-encountered it, it was still a shiny. I TESTED THIS THEORY AND IT DIDN'T WORK: I encountered a shiny Magikarp and for the sake of research I risked it and reset my app. Encountered it again and it was still shiny, sorry boys and girls. Here's my evolved Karp :), good thing I "saved my candies". https://i.sstatic.net/z5PgS.jpg Edit + clarification for CS people: It's determined serverside. I made the assumption that it could be clientside based on the individuality, which is a valid guess, and I was wrong based on my reverse testing. I didn't think it was impossible for it to be serverside; that was just my initial reasoning as to why it could've been clientside.
PHP doesn't insert into SQL table even though there are no errors I am working with the IP information of whoever visits an HTML page. I don't want to use GeoIP etc.: I'm going to use the ipinfo.io service. At the end of the HTML page I call a get function, and inside it I wrote the AJAX post. $.get("http://ipinfo.io", function (response) { var ip = response.ip; var hostname = response.hostname; var city = response.city; var region = response.region; var country = response.country; var loc = response.loc; var org = response.org; var postal = response.postal; var details = JSON.stringify(response, null, 4); $.ajax({ type: "POST", url: 'write.php', data: '&ip=' + ip + '&hostname=' + hostname +'&city=' + city + '&region=' + region + '&country=' + country + '&loc=' + loc + '&org=' + org + '&postal=' + postal + '&details=' + details, success: function (data) { alert("Sent"); }, error: function(jqXHR, text, error){ alert("Error: not sent."); } }); }, "jsonp"); I was inspired by this: http://jsfiddle.net/zk5fn/2/ In write.php I've written some fwrite calls that wrote out all the data received, and that works. Now, I wanted to post these data into a database. I've created one with phpMyAdmin on 000webhost. I see no error in the post, but when I open phpMyAdmin the table is empty... why? This is the write.php: <?php $link = mysql_connect("localhost", "name......", "psw....", "database-name"); if (!$link) { alert('Could not connect: ' . 
mysql_error()); } echo 'Connected successfully'; // Parse input $ip = $_POST['ip']; $hostname = $_POST['hostname']; $city = $_POST['city']; $region = $_POST['region']; $country = $_POST['country']; $loc = $_POST['loc']; $org = $_POST['org']; $postal = $_POST['postal']; $details = $_POST['details']; $sql="insert into `sessions` (ip, hostname, city, region,country, loc, org, postal) values('$ip','$hostname', '$city', '$region', '$country', '$loc', '$org' ,'$postal')"; $res = mysql_query($sql); if($res){ echo "Records added successfully."; } mysql_close($link); ?> UPDATE This is the log of the post: UPDATE 2 This is the SQL seen when I click to export the database: CREATE TABLE `sessions` ( `id` int(128) NOT NULL AUTO_INCREMENT, `ip` varchar(128) COLLATE latin1_general_ci NOT NULL, `hostname` varchar(128) COLLATE latin1_general_ci NOT NULL, `city` varchar(128) COLLATE latin1_general_ci NOT NULL, `region` varchar(128) COLLATE latin1_general_ci NOT NULL, `country` varchar(128) COLLATE latin1_general_ci NOT NULL, `loc` varchar(128) COLLATE latin1_general_ci NOT NULL, `org` varchar(128) COLLATE latin1_general_ci NOT NULL, `postal` varchar(128) COLLATE latin1_general_ci NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=1 ; Does it actually echo "Records added successfully." to the screen? Add "or die (mysqli_error());" at the end of your query to see what the error is This code is unsafe and exposed to an SQL injection attack Echo $sql after you define it to see if the values are even present in the query. I've never seen POST data passed that way; is that possible? I usually use objects {ip:ipVar, host:hostVar}. Also look into sanitizing or prepared statements to prevent SQL injection. mysql_connect("localhost", "name......", "psw....", "database-name") that doesn't do what you wish/hope it should. @mark.hch I can't see that echo. @rhopercy I added that string but I don't see any error. 
@Alon I know it is not safe, I'm trying to learn PHP better! @fred of course in my PHP file I've written the real data... I wrote "name......" just to show that over there I wrote the user name from phpMyAdmin mysql_connect => http://php.net/manual/en/function.mysql-connect.php $link = mysql_connect('localhost', 'mysql_user', 'mysql_password'); 3 parameters, not 4. That's what I was talking about. ;-) that's three parameters, not four, my friend ^ ahahaha, you speak Italian? :D anyway, I thought the fourth was the database name... so that is wrong? yes, I speak the language ;-) and yes, not well. Four parameters is for mysqli_, not mysql_. You need three parameters and to use mysql_select_db with mysql_. Great! But I've found the real problem: simply, 000webhost doesn't permit modifying the table from external code (as in my case) ah yes; of course. Remote connections not allowed. Oh well, at least you found the problem. Cheers! Bye. Try this JS code: $.get("http://ipinfo.io", function (response) { var ip = response.ip; var hostname = response.hostname; var city = response.city; var region = response.region; var country = response.country; var loc = response.loc; var org = response.org; var postal = response.postal; var details = JSON.stringify(response, null, 4); $.ajax({ type: "POST", url: 'write.php', data: 'ip=' + ip + '&hostname=' + hostname +'&city=' + city + '&region=' + region + '&country=' + country + '&loc=' + loc + '&org=' + org + '&postal=' + postal + '&details=' + details, success: function (data) { console.log(data) }, error: function(jqXHR, text, error){ console.log(text); } }); }, "jsonp"); And this PHP code: $link = mysql_connect("localhost", "name......", "psw....", "database-name"); if (!$link) { die('Could not connect: ' . 
mysql_error()); } print("This is what was received: "); print("\r\n"); print_r($_POST); print("\r\n"); // Parse input $ip = $_POST['ip']; $hostname = $_POST['hostname']; $city = $_POST['city']; $region = $_POST['region']; $country = $_POST['country']; $loc = $_POST['loc']; $org = $_POST['org']; $postal = $_POST['postal']; $details = $_POST['details']; $sql = "insert into `sessions` (ip, hostname, city, region,country, loc, org, postal) values('$ip','$hostname', '$city', '$region', '$country', '$loc', '$org' ,'$postal')"; print("This is what is sending to database: \"$sql\""); print("\r\n"); $res = mysql_query($sql) or die(mysql_error()); if($res){ print("Records added successfully."); } mysql_close($link); die(); After this open the web browser developer tools on the console section and see what happens. http://postimg.org/image/483gnyazx/ Ding! Read the second line: Access denied for user ... This is the error. Yeah I've read it, but lol the credentials are right..... the user to log in to the database is a6033757_admin and I wrote it in the mysql_connect... maybe I have to add @localhost? There are two possibilities: 1) the password is wrong, or 2) the user has no privileges to access the MySQL server/database. As I can see, this error is in your development mode. Make sure this user has the right permissions in the desired database. For example, on HostGator you must create the database, assign a user and set their permissions (create, select, delete...) in phpMyAdmin I log in with the same credentials... it is the only account and in the phpMyAdmin window I can add, remove, etc... maybe I can't write from an external file? I don't know... 000webhost only lets me change the password of the users; the privileges are not mentioned you were just right, I've changed host (Altervista) and now it works......!!! 
Thank you very much for your help I would change $res = mysql_query($sql); to $res = mysql_query($sql) or die(mysql_error()); to learn more about possible errors on your insert query. Also, you should apply mysql_real_escape_string to any data you're inserting into your database. It doesn't show any error :/ Anyway, should I define the variables like so: $hostname = mysql_real_escape_string($_POST['hostname']); ? I would double check your field names in your database interface (such as phpMyAdmin) to ensure you referenced them correctly in your PHP code. Also, yes, that is the correct way to escape your post variables. If you see my answer, I've updated it with the SQL code I would advise the following PHP that uses a MySQLi prepared statement: <?php $link = mysqli_connect("localhost", "name......", "psw....", "database-name"); $error = false; if (mysqli_connect_errno()) { $error = mysqli_connect_error(); echo "Connection Error: $error"; exit(); } else { if($stmt = mysqli_prepare($link, "INSERT INTO sessions (ip, hostname, city, region, country, loc, org, postal) VALUES (?, ?, ?, ?, ?, ?, ?, ?)")){ mysqli_stmt_bind_param( $stmt, "ssssssss", $_POST['ip'], $_POST['hostname'], $_POST['city'], $_POST['region'], $_POST['country'], $_POST['loc'], $_POST['org'], $_POST['postal'] ); mysqli_stmt_execute($stmt); echo "Records added successfully."; mysqli_stmt_close($stmt); } } mysqli_close($link); ?> The MySQL extension was deprecated in PHP 5.5.0, and it was removed in PHP 7.0.0. Then in your success, you can do something like: success: function (data) { if(data.indexOf("Error") !== -1){ console.error(data); } else { console.log(data); } } mmmm.... I've modified my code with yours and this is the console result in the HTML file: http://postimg.org/image/tazhlcqid/ @panagulis72 The first error shows a login failure. The second part is that you are mixing mysql with mysqli and you cannot do that. Please make sure your code is correct as well as your MySQL credentials. 
If you debug the PHP code, what is in $_POST? Try this: $received = print_r($_POST, true); $file = fopen('log.txt', 'w+'); fwrite($file, $received); fclose($file); After this, validate what PHP is receiving; if needed, paste it here so we can help you. I haven't tried it, but I think the error is in data: '&ip=' + ip .... As this is the first key, the & is unnecessary. Try to debug as I explained and see what it is sending via AJAX to PHP. Adding your code, I have no error in the Firefox console, but I can't see any new log.txt file!! Ok, this is because the PHP should be receiving the $_POST and writing its contents to a file. Check in the same path as your write.php whether the file log.txt exists; open it and see what was written. Yeah, I've looked there, but that file doesn't exist! Ok, I'll post another answer soon for you to try. Thanksss, and I'm still trying to solve it too I've updated my answer with the log of the post I've changed the host to Altervista. NOW IT WORKS.... I've spent 1 day for nothing.... damn! So, for whoever has this problem, just don't choose 000webhost, because with it you can create a db, but you can't delete/add/modify anything using an external file.
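The prepared-statement advice in the answers above is language-agnostic. As a minimal, self-contained sketch of the same idea (Python's sqlite3 stands in for the real MySQL server here, purely for illustration; the table and columns mirror the question's schema):

```python
import sqlite3

# In-memory database stands in for the real MySQL server.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sessions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    ip TEXT, hostname TEXT, city TEXT, region TEXT,
    country TEXT, loc TEXT, org TEXT, postal TEXT)""")

row = ("1.2.3.4", "host.example", "Rome", "Lazio",
       "IT", "41.9,12.5", "ExampleOrg", "00100")

# Placeholders instead of string concatenation: the driver escapes
# the values, so input like "O'Brien" cannot break the query.
conn.execute(
    "INSERT INTO sessions (ip, hostname, city, region, country, loc, org, postal) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)", row)

print(conn.execute("SELECT ip, city FROM sessions").fetchall())
```

The same pattern in PHP is exactly what the mysqli_prepare/mysqli_stmt_bind_param answer shows: the values travel separately from the SQL text, instead of being glued into it.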
Add basic console apps such as `zsh` or `nano` to the base Linux provided by Docker Desktop app I recently installed Docker Desktop app version 4.22.0 on macOS Ventura on a Mac with Apple Silicon (ARM, not Intel). I successfully pulled and ran a couple of containers: one for MySQL database server, and one for Postgres database server. Unfortunately, when I went to edit some configuration files for those servers, I was surprised to find basic utilities missing: zsh shell and nano text-editor. Within a single container, I did successfully execute: apt update apt install nano That worked for just that one single container. The other containers continue to lack the nano editor. That lack is understandable given that the purpose of containers is to be isolated from one another. I would like these utilities to appear in all my containers within Docker Desktop app. Is there a simple way to install these console apps into the underlying base Linux, to be shared across containers? Emphasis on simple. My goal in using Docker Desktop is to learn as little as possible about Linux, and about Docker. I merely want to conveniently run some servers for local app development & testing. After some web searching, the only advice I found was to "roll your own image" which seemed to mean creating and maintaining my own Linux installation — which exceeds my knowledge, my interest, and my purpose in choosing to use the Docker Desktop app. "which seemed to mean creating and maintaining my own Linux installation" lol no. You create a copy of the config file you want to change, then create a Dockerfile that basically does FROM some-base-image COPY my/config/file /etc/some/config and run docker build. Definitely easier than hacking around Docker to get it to run apt update; apt install nano in random Docker images, some of which might not even have apt (or even any package manager at all). 
Also these database images often are highly configurable using environment variables, but how exactly depends on the image. And also: There is no common "underlying base Linux" for these images. While there is a Linux VM being run, accessing that VM doesn't get you any closer to accessing the containers themselves. Advanced shells like zsh aren't basic at all. Many distros don't even have bash and only have pure sh, ash or dash @phuclv to be fair, though, a lot of container images do contain bash (debian's default images do!) and the installed size of the zsh package, which includes a lot of completions and scripts on e.g. fedora is 8188 kB, for bash it's 8375 kB, and nobody argues that bash is the "mightier" shell (I'd very much question that, actually) The whole idea of docker images is that, no. There isn't a way. If you want to change the base layer of images, then you need to change that base layer and rebuild the image atop of it. The idea here is that image layers are immutable, so it's easy to reason about what version things are in, what is guaranteed to work at each point, and so on. So, the advice you found, "roll your own image", reflects what this system is supposed to do. However, that's not actually as hard as you presume. The thing you need to write has four lines (my example below is just intentionally verbose!) Say you have an image named "foobar" that you want to make sure contains nano and zsh. Well, as you noted, if that foobar image uses apt for installing software (i.e., it's Debian or a Debian derivative like Ubuntu), you can run apt install -y zsh nano and get these. All you need to do then is make an image out of that new state. That's rather easy. Create a text file, containing but the following FROM foobar # Reminder for yourself that you're the one who built this LABEL maintainer="<EMAIL_ADDRESS>" # you get to pick a version, relatively freely. 
# If you feel like it works for you as you want, I'd recommend to start using 1.something LABEL version="0.0.1" # The "line continuation" \ at the end of each line is important; it # "swallows" the line break character, otherwise the RUN command will break. # # Set the frontend for apt to "Don't ask me any questions, please"==noninteractive; # and be -q (uiet), answer -y (es) to everything # and also don't install fancy stuff that you get recommended, let's keep this slim RUN apt-get update;\ DEBIAN_FRONTEND=noninteractive apt-get install \ --no-install-recommends -q -y \ nano \ zsh \ && \ apt-get clean && apt-get autoclean Save it as "Dockerfile-improved-foobar", and run docker build -f Dockerfile-improved-foobar -t foobar-with-tools . When you now run docker images you'll see foobar-with-tools in there! And you can use it like any other image that you get e.g. automatically from Docker Hub.
Restrict the number of times an application is installed on a machine How do big keyed products like MS Office restrict their buyers/users by either the number of installations or the number of machines it is installed on? For one software CD, the product is installed and usable via, for example, 3 licenses; this is activated via provided keys. Why can't I use the same key on 3 machines that are not interconnected? For one software CD, the product is sometimes installable only on one machine (no key reference); it will display an error message if I try to install it on another. (the CD isn't rewritable "ekk") Because the software from Microsoft requires "activation". In this procedure the software sends a footprint of the machine it is installed on, together with the software key, to Microsoft's licence server. In the database of that licence server Microsoft keeps those footprints and a counter, so they can ensure their software isn't activated more often than paid for.
How can I fade something to clear instead of white? I've got an XNA game which essentially has "floating combat text": short-lived messages that display for a fraction of a second and then disappear. I've recently added a gradual "fade-away" effect, like so: public void Update() { color.A -= 10; position.X += 3; if (color.A <= 10) isDead = true; } Where color is the Color the message is drawn in. This works as expected; however, it fades the messages to white, which is very noticeable on my indigo background. Is there some way to fade it to transparent, rather than white? Lerp-ing towards the background color isn't an option, as there's a possibility there will be something between the text and the background, which would simply be the inverse of the current problem. Have you tried lerping to Color.Transparent (which is in the Color lib for XNA) and just gradually changing the interpolation rate? @tigersnack I... didn't know there was such a thing. I'll have to try it in the morning. If you work with premultiplied alpha, the default behaviour, you have to multiply the whole color.... public void Update() { color *= 0.95f; position.X += 3; if (color.A <= 10) isDead = true; } The difference here is you're modifying the color, i.e. all values simultaneously and uniformly, rather than just the alpha value. this is right... I'm not changing only the alpha, I'm changing the color; when you multiply Color(255,255,255,255) by 0.5f, you get Color(128,128,128,128), and if you continue doing it you will get Color(0,0,0,0)... that is what is expected when you are working with premultiplied alpha, and it is the default behaviour in XNA 4.0 the reference is that code... color *= 0.95f... it works... it's easy to check... I checked. You're right! I didn't notice that you were altering the entire color, not just the alpha channel. I also edited your post so I could reverse my vote. Thanks, I've learned something today. 
:) by the way, that's the expected way of working with XNA 4.0; you should use it... here is the explanation http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx You're doing exactly what you should do - changing the alpha channel of the color you use when drawing the sprite. That isn't where the problem lies. The problem is that if you just use the default SpriteBatch.Begin() behaviour, your sprite blends to its color (normally: white) rather than blending to actual transparency. What you need to do is set your BlendState to BlendState.NonPremultiplied, then everything will be fine. SpriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied) Blau's answer provides a different method (the premultiplied one) which apparently has a few advantages over this one.
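For readers without an XNA project at hand, the numeric difference between the two fades can be sketched in a few lines of plain Python (tuples stand in for XNA's Color; 0.95 per frame as in Blau's answer):

```python
def fade_premultiplied(color, k=0.95):
    # Premultiplied alpha: scale ALL four channels together.
    return tuple(int(c * k) for c in color)

def fade_alpha_only(color, k=0.95):
    # Naive fade: scale only the alpha channel; RGB stays at full
    # intensity, which is what blends the sprite toward white.
    r, g, b, a = color
    return (r, g, b, int(a * k))

c1 = c2 = (255, 255, 255, 255)
for _ in range(100):
    c1 = fade_premultiplied(c1)
    c2 = fade_alpha_only(c2)
print(c1, c2)
```

Scaling only the alpha leaves RGB at full white, which is exactly the blend-to-white symptom described in the question when the premultiplied blend state is active.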
Unable to call gnome-terminal command in my C++ code char *mycmd = "gnome-terminal --profile 'me' -e '/usr/bin/programA --file/usr/bin/config/myconfig.ini --name="programA" --loggingLevel=1'"; popen(mycmd, "r"); Error on 1st line: error: expected ';' before 'Node' I know this is because of the "" for --name Is there any way to get this command to work? Escape the double quotes: char *mycmd = "gnome-terminal --profile 'me' -e '/usr/bin/programA --file/usr/bin/config/myconfig.ini --name=\"programA\" --loggingLevel=1'"; Your second option won't work because the -e '/usr/bin/programA --file/usr/bin/config/myconfig.ini --name='programA' --loggingLevel=1' section is already wrapped in single quotes. You'd need to escape either type.
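More generally, the nested-quoting problem here (a shell command string inside another shell command string inside a C string) is easiest to sidestep by quoting per argument rather than by hand. A sketch of that idea in Python (the program path and flags are the question's hypothetical ones; shlex.join requires Python 3.8+):

```python
import shlex

# Inner command to hand to `gnome-terminal -e` as a single argument.
inner = ["/usr/bin/programA", "--file=/usr/bin/config/myconfig.ini",
         "--name=programA", "--loggingLevel=1"]

# shlex.join quotes each argument only when needed, so there is no
# hand-written nesting of ' and " to get wrong.
cmd = ["gnome-terminal", "--profile", "me", "-e", shlex.join(inner)]
print(cmd[-1])
```

Passing the command as a list of arguments (as subprocess would take `cmd` directly) avoids the shell-quoting layer entirely; the same per-argument approach works in C++ with execvp-style argument vectors.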
is there a sophisticated way to grep this file I have one file. Written in BNF, a line could be <line> ::= ((<ISBN10>|<ISBN13>)([a-zA-Z/0-9]*)){1,4} For example 123456789X/abscd/1234567890123/djfkldsfjj How can I grep the ISBN10 or ISBN13, ONLY one per line, even when the line contains more ISBNs? If there are more ISBNs in the line it should take only the first one. When I grep this way grep -Po "[0-9]{9,13}X{0,1}" file then I get more lines than the file originally has (as there can be up to 4 ISBNs per line). I would also need the line count of the grep result to equal the line count of the file. Any advice? Well, in case the other answer's assumption that the 'first' ISBN is at the start of the line doesn't hold, you could always try Perl. #!/usr/bin/perl use strict; use warnings; while (<>) { chomp; my ( $first_isbn, @rest ) = m/(\d{9,13}X{0,1})/g; print $., ":", $first_isbn, "\n" if $first_isbn; } $. is the line number in Perl, and so we print that and the match if there's a match. <> says to read and iterate either filenames or STDIN, much like grep does. So you could invoke this in a similar way to grep: perl myscript.pl <filename> Or: cat <filename> | ./myscript.pl This would one-liner-ify as: perl -lne 'my ( $first_isbn ) = m/(\d{9,13}X{0,1})/g; print $., ":", $first_isbn, "\n" if $first_isbn;' Thanks, yes, the first ISBN need not be at the beginning of the line, but I think the Perl solution is correct One trivial solution is to include the beginning of the line in your regex: grep -Po "^[0-9]{9,13}X{0,1}" file This ensures that matches after the first do not satisfy the regex. It does seem from your BNF that the ISBNs, if present, are guaranteed to be the first characters of the line. Another way is to use sed: sed -n "s/\([0-9]\{9,13\}X\{0,1\}\).*/\1/p" file This matches your pattern along with the rest of the line, but only prints your pattern. You could then use another utility to add line numbers. E.g. pipe your output to nl -nrz -w9.
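For comparison, the first-match-per-line logic of the Perl answer is just as short in Python (same \d{9,13}X? pattern; the sample line is the one from the question):

```python
import re

ISBN_RE = re.compile(r"\d{9,13}X?")

def first_isbn_per_line(lines):
    # Mirror the Perl answer: keep only the first match per line,
    # paired with its 1-based line number; skip lines with no match.
    out = []
    for n, line in enumerate(lines, start=1):
        m = ISBN_RE.search(line)
        if m:
            out.append((n, m.group(0)))
    return out

lines = ["123456789X/abscd/1234567890123/djfkldsfjj",
         "no isbn here",
         "9876543210/x"]
print(first_isbn_per_line(lines))
```

Note this keeps the OP's loose pattern: like the grep and Perl versions above, it matches any 9-13 digit run, not just checksum-valid ISBNs.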
Can't find a fragment with a tag I have some trouble finding a fragment by tag: protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_logged); bottomNav = findViewById(R.id.nav_bar); bottomNav.setOnNavigationItemSelectedListener(navListener); Fragment profilFragment = new ProfilFragment(); FragmentTransaction fragTransaction = getSupportFragmentManager().beginTransaction().add(R.id.fragment_container, profilFragment,"profil_frag"); fragTransaction.addToBackStack("profil_frag").commit(); Fragment testTag = (ProfilFragment) getSupportFragmentManager().findFragmentByTag("profil_frag"); Toast.makeText(Logged.this, "test fragtag = " + testTag, Toast.LENGTH_SHORT ).show(); } The fragment is created and displayed without problem, but the Toast check prints "test fragtag = null". It's surely a mistake on my part, but I don't understand why. testTag returns a fragment, not an actual text or string. Also, you're using profil_frag when replacing and profil_tag in findFragmentByTag() So should I add instead of replacing? But it's still null, so it doesn't find any fragment with this tag, no? fragTransaction.commit() is not immediate, it is scheduled. Try replacing it with fragTransaction.commitNow() I checked your toast and I think that testTag returns a fragment instead of a string. You can use the following instead to check for the tag: Fragment testTag = (ProfilFragment) getSupportFragmentManager().findFragmentByTag("profil_frag"); Toast.makeText(Logged.this, "test fragtag = " + testTag.getTag(), Toast.LENGTH_SHORT ).show(); when I try to run this Toast the app crashes, I think it's because testTag is null. 
2020-07-15 15:30:18.466 12935-12935/com.example.handycatch D/AndroidRuntime: Shutting down VM 2020-07-15 15:30:18.467 12935-12935/com.example.handycatch E/AndroidRuntime: FATAL EXCEPTION: main Process: com.example.handycatch, PID: 12935 java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.handycatch/com.example.handycatch.Logged}: java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String androidx.fragment.app.Fragment.getTag()' on a null object reference I edited my code, can you check it? Typecast your fragment to your ProfilFragment I edited my code as you advised and the problem is still the same You didn't commit the transaction: fragTransaction.commit() I'm sorry, I forgot to put the commit in the question; I edited it. @RoiKrush You have to commit it before the Toast check. Commit executes the transaction and only after that can the fragment be found. Ok, thanks, tried that way but it's still null. I edited the question You are missing .addToBackStack("profil_frag") Also when you do Fragment testTag = getSupportFragmentManager().findFragmentByTag("profil_frag"); cast it to ProfilFragment
The frequency tracking system you proposed for the medical cyclotron won't work beyond 100 MeV. At those energies, you're looking at nearly doubling the RF frequency as the protons accelerate. That seems excessive. In the LHC, the frequency barely changes at all during acceleration. Exactly my point - you're thinking about high-energy regimes where particles are already relativistic. But in medical applications, we're starting from rest. The revolution frequency scales with momentum over energy, so $f = \frac{pc^2}{2\pi R E}$. But that assumes the magnetic field stays constant. In a synchrotron, both the field and frequency must track together. The relationship becomes $f_{RF} = \frac{c}{2\pi R} \sqrt{\frac{B^2}{B^2 + (m_0c^2/e\rho c)^2}}$. Right, which means at low energies the denominator is dominated by the rest mass term. For protons going from 50 MeV to 1.4 GeV, that's why you see that 190% frequency swing in the booster ring systems. The real engineering challenge is maintaining cavity resonance across such a wide frequency range. Each RF mode has a specific frequency, and you can't just arbitrarily tune them without changing the field configuration entirely. That's where the phase velocity constraint becomes critical. The RF wave inside the cavity must have $v_p \leq c$ to synchronize with the particle beam. But as we tune frequency, we're effectively changing the wave propagation characteristics. I'm still not convinced the frequency swing is that dramatic. Let me work through the numbers. For a 50 MeV proton, $\gamma = 1.053$, so $\beta = 0.314$. At 1.4 GeV, $\gamma = 2.49$ and $\beta = 0.916$. So the revolution frequency ratio is just the $\beta$ ratio times any radius change. If the radius stays constant, that's indeed about a factor of three increase. Exactly. And this is precisely why lower energy machines need much more flexible RF systems. 
The cavity must handle not just frequency changes, but also the changing beam loading as the current and energy evolve. Which brings us back to the group velocity consideration. Energy propagates through the cavity at the group velocity, not the phase velocity. If we're constantly retuning, we're changing the dispersion relationship and affecting how power couples to the beam. The solution most facilities use is multiple cavity types or rapidly tunable systems. You can't just scale up a high-energy design and expect it to work at medical energies.
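The kinematics quoted in the dialogue are easy to verify numerically; a quick Python check (assuming a proton rest energy of 938.272 MeV):

```python
import math

M_P = 938.272  # proton rest energy in MeV (CODATA value, assumed here)

def beta_of_T(T_mev):
    """Relativistic beta for a proton with kinetic energy T (MeV)."""
    gamma = 1.0 + T_mev / M_P          # total energy / rest energy
    return math.sqrt(1.0 - 1.0 / gamma**2)

b_lo = beta_of_T(50.0)     # injection at 50 MeV
b_hi = beta_of_T(1400.0)   # extraction at 1.4 GeV
print(round(b_lo, 3), round(b_hi, 3), round(b_hi / b_lo, 2))
```

The β ratio of roughly 2.9 is the "factor of three" revolution-frequency swing discussed above, assuming the orbit radius stays fixed.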
sci-datasets/scilogues
save email, country, ..., full name from facebook to database with login with facebook <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId : 'myappid', channelUrl : '//www.mywebsite.com/channel.html', status : true, // check login status cookie : true, // enable cookies to allow the server to access the session xfbml : true // parse XFBML }); // Here we subscribe to the auth.authResponseChange JavaScript event. This event is fired // for any authentication related change, such as login, logout or session refresh. This means that // whenever someone who was previously logged out tries to log in again, the correct case below // will be handled. FB.Event.subscribe('auth.authResponseChange', function(response) { // Here we specify what we do with the response anytime this event occurs. if (response.status === 'connected') { // The response object is returned with a status field that lets the app know the current // login status of the person. In this case, we're handling the situation where they // have logged in to the app. // testAPI(); FB.login(function(response) { if (response.session == 'connected' && response.scope) { FB.api('/me', function(response) { window.location = "http://www.mywebsite.com/checkloginfb.php?email=" + response.email; } ); } } , {scope: 'email'}); } else if (response.status === 'not_authorized') { // In this case, the person is logged into Facebook, but not into the app, so we call // FB.login() to prompt them to do so. // In real-life usage, you wouldn't want to immediately prompt someone to login // like this, for two reasons: // (1) JavaScript created popup windows are blocked by most browsers unless they // result from direct interaction from people using the app (such as a mouse click) // (2) it is a bad experience to be continually prompted to login upon page load. 
// FB.login(); FB.login(function(response) { if (response.session == 'connected' && response.scope) { FB.api('/me', function(response) { window.location = "http://www.mywebsite.com/checkloginfb.php?email=" + response.email; } ); } } , {scope: 'email'}); } else { // In this case, the person is not logged into Facebook, so we call the login() // function to prompt them to do so. Note that at this stage there is no indication // of whether they are logged into the app. If they aren't then they'll see the Login // dialog right after they log in to Facebook. // The same caveats as above apply to the FB.login() call here. FB.login(function(response) { if (response.session == 'connected' && response.scope) { FB.api('/me', function(response) { window.location = "http://www.mywebsite.com/checkloginfb.php?email=" + response.email; } ); } } , {scope: 'email'}); } }); }; // Load the SDK asynchronously (function(d){ var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0]; if (d.getElementById(id)) {return;} js = d.createElement('script'); js.id = id; js.async = true; js.src = "//connect.facebook.net/en_US/all.js"; ref.parentNode.insertBefore(js, ref); }(document)); // Here we run a very simple test of the Graph API after login is successful. // This testAPI() function is only called in those cases. function testAPI() { console.log('Welcome! Fetching your information.... '); FB.api('/me', function(response) { console.log('Good to see you, ' + response.name + '.'); }); } </script> I want to give my users the option to log in with Facebook, but I do not know why it does not work. I get infinite popups. This is my first time doing this and I just cannot figure it out. I want to get the email, country, profile picture and full name of the user so I can add them to the database. Any help is appreciated. Thanks You are trying to log the user in even if the user is already connected. This creates the infinite loop. 
// Here we specify what we do with the response anytime this event occurs. if (response.status === 'connected') { // The response object is returned with a status field that lets the app know the current // login status of the person. In this case, we're handling the situation where they // have logged in to the app. // testAPI(); FB.login(function(response) { if (response.session == 'connected' && response.scope) { FB.api('/me', function(response) { window.location = "http://www.mywebsite.com/checkloginfb.php?email=" + response.email; } ); } } , {scope: 'email'}); } I would recommend separating FB.Event.subscribe('auth.authResponseChange', function(){}) from FB.login(function(){}, {}). The auth.authResponseChange event will fire anytime the user's authentication status has changed, while FB.login attempts to get the user's permission, authorize the application, etc.
common-pile/stackexchange_filtered
How to show components in the content area of home component instead of separate page in Angular 5 I am new to Angular 5, and I am facing a problem with component routing. What I want to do is: when a user opens the app, it should first show a login screen (a login screen with the full height and width of the browser window). Once the user is successfully validated, the user gets into the Home component. The Home component has a toolbar and a side menu bar; if the user selects anything from the side menu bar, I want to show the relevant component's data in the content area of the Home component. As of now everything works fine; I mean, when the user opens the app the login screen is displayed first, and after successful validation the Home page is displayed to the user. The problem occurs when the user selects any menu from the side menu bar: the respective component is not shown in the content area of the Home component; it opens as a separate component and takes the full screen. home.component.ts <mat-sidenav-container class="sidenav-container"> <mat-sidenav #drawer class="sidenav" fixedInViewport="true" [attr.role]="(isHandset$ | async) ? 'dialog' : 'navigation'" [mode]="(isHandset$ | async) ?
'over' : 'side'" [opened]="!(isHandset$ | async)" style="background:black"> <mat-toolbar class="menuBar">Menus</mat-toolbar> <mat-nav-list> <a class="menuTextColor" mat-list-item routerLink="/settings">Link 1</a> </mat-nav-list> </mat-sidenav> <mat-sidenav-content> <mat-toolbar class="toolbar"> <button class="menuTextColor" type="button" aria-label="Toggle sidenav" mat-icon-button (click)="drawer.toggle()" *ngIf="isHandset$ | async"> <mat-icon aria-label="Side nav toggle icon">menu</mat-icon> </button> <span class="toolbarHeading">Application Title</span> </mat-toolbar> //Content area: need to show the components related to the side menu </mat-sidenav-content> </mat-sidenav-container> app.component.ts <router-outlet></router-outlet> In app.component.ts I have the <router-outlet>. app.module.ts const routings: Routes = [ { path: '', redirectTo: '/login', pathMatch: 'full' }, { path: 'login', component: LoginComponent }, { path: 'home', component: HomeComponent }, { path: 'settings', component: SettingsComponent }, { path: '**', redirectTo: '/login', pathMatch: 'full' }, ]; Here I have defined the routes. Can anyone help me to fix this? What you want is a child route. The home component needs to have a <router-outlet></router-outlet> as well as your app component. Then you will want to create a new component to hold the content you want to replace in your main component. <mat-sidenav-container class="sidenav-container"> <mat-sidenav #drawer class="sidenav" fixedInViewport="true" [attr.role]="(isHandset$ | async) ? 'dialog' : 'navigation'" [mode]="(isHandset$ | async) ?
'over' : 'side'" [opened]="!(isHandset$ | async)" style="background:black"> <mat-toolbar class="menuBar">Menus</mat-toolbar> <mat-nav-list> <a class="menuTextColor" mat-list-item routerLink="/settings">Link 1</a> </mat-nav-list> </mat-sidenav> <mat-sidenav-content> <mat-toolbar class="toolbar"> <button class="menuTextColor" type="button" aria-label="Toggle sidenav" mat-icon-button (click)="drawer.toggle()" *ngIf="isHandset$ | async"> <mat-icon aria-label="Side nav toggle icon">menu</mat-icon> </button> <span class="toolbarHeading">Application Title</span> </mat-toolbar> // The new part <router-outlet></router-outlet> </mat-sidenav-content> </mat-sidenav-container> Then update your routes to something like this: const routings: Routes = [ { path: '', redirectTo: '/login', pathMatch: 'full' }, { path: 'home', component: HomeComponent, children: [ { path: '', component: NewComponent }, { path: 'settings', component: SettingsComponent }, ] }, { path: 'login', component: LoginComponent }, { path: '**', redirectTo: '/login', pathMatch: 'full' }, ]; The route /home will look the same as it used to, even though it is now the new component wrapped by the home component. The route /home/settings will have your settings component wrapped by your home component. You should use something like this: create an app.component.html and app.component.ts, and in your app.component.html write code such as: <app-header *ngIf=" this.session_data != null " ></app-header> <mz-progress [backgroundClass]="'grey lighten-4'" *ngIf=" progress == true"></mz-progress> <router-outlet></router-outlet> <app-footer></app-footer> This will bring in the header and footer on every page by default, or you can put your own per-page conditions here; the pages to which you redirect will be loaded under the <router-outlet></router-outlet>. Hence you can put your sidebar here like the header or footer, and write your HTML and .ts code for your header, footer, or sidebar as separate components.
In your header/sidebar.html, write: <div class="app-header" *ngIf="header == true"> //write your html here </div> Now, in your app.module.ts, load your AppComponent as: bootstrap: [AppComponent], and define your routes as: export const routes: Routes = [ // routing definitions { path: '', component: LoginComponent, canActivate: [Guard1] }, { path: 'dashboard', component: DashboardComponent, canActivate: [Guard] }, { path: 'afterDashboard', component: AfterDashboardComponent, canActivate: [Guard, DateGuard] }, { path: 'formlist', component: FormlistComponent, canActivate: [Guard, DateGuard] }, ]; Hope this helps; I have achieved what you have asked by using this approach.
Why is my GUI freezing? I'm new to the TPL world, and I wrote this code: var myItems = myWpfDataGrid.SelectedItems; this.Dispatcher.BeginInvoke(new Action(() => { var scheduler = new LimitedConcurrencyLevelTaskScheduler(5); TaskFactory factory = new TaskFactory(scheduler); foreach (MyItem item in myItems) { Task myTask = factory.StartNew(() => DoLoooongWork(item) ).ContinueWith((t) => { Debug.WriteLine(t.Exception.Message); if (t.Exception.InnerException != null) { Debug.WriteLine(t.Exception.InnerException.Message); } }, TaskContinuationOptions.OnlyOnFaulted); } }), null); The only access to the GUI is "var myItems = myWpfDataGrid.SelectedItems;" and it is read-only! The function "DoLoooongWork()" accesses serial ports, etc. It's a separate SDK function that doesn't access the GUI. I know that "Dispatcher.BeginInvoke" is a bit redundant, but I don't know what else I can do, or what I'm doing wrong. The only reason for this code is to free the GUI while "DoLoooongWork()" executes, but the GUI is frozen! What's wrong with this code? edit Thanks to @Euphoric's help, I discovered that the problem is similar to this post: COM Interop hang freezes entire COM system. How to cancel COM call Did you try doing it without the factory, e.g. just new Task and Start? And without the dispatcher. Something like http://blog.yojimbocorp.com/2012/05/22/using-task-for-responsive-ui-in-wpf/ @Euphoric Yes, I did. In truth, my original code is without the factory and without the dispatcher. I added them in my despair :) Does the freeze happen when you replace DoLoooongWork with Thread.Sleep? @Euphoric Man, you are a genius! I replaced it with Thread.Sleep(60), and the GUI is free! I'm happy and sad... This is a direct call to an Interop DLL that talks with serial ports, etc... How can it freeze the GUI? @Euphoric Oh, please promote your comment to an answer and I can mark it as correct :) I know that my problems are not resolved, but you answered the question correctly!
@ClickOk, maybe if you show what exactly is going on inside DoLoooongWork, we might be able to help further. Here's a somewhat similar problem, FYI: http://stackoverflow.com/q/21211998/1768303. And yes, Dispatcher.BeginInvoke is redundant here. @Noseratio: It's the zkemkeeper SDK. The code is only "zkemkeeper.CZKEM axCZKEM1 = new zkemkeeper.CZKEM();" and "axCZKEM1.Connect_Net(ip, port);" I read your link and replaced my LimitedConcurrencyLevelTaskScheduler with ThreadAffinityTaskScheduler and it didn't work... sorry for my "noobism" here; I'm a mere TPL user and you know a lot of core concepts :) @ClickOk, check my updated answer with more thoughts about the zkemkeeper SDK. I had a hunch that something working with a serial port would try to use the application's event loop to do its work. So it actually bypasses the whole dispatcher and thread system and blocks the application. I'm not experienced in this field so I don't know how to solve it, but this is a different question. I presume some objects inside DoLoooongWork require thread affinity and message pumping. Try my ThreadWithAffinityContext and see if it helps; use it like this: private async void Button_Click(object sender, EventArgs e) { try { using (var staThread = new Noseratio.ThreadAffinity.ThreadWithAffinityContext( staThread: true, pumpMessages: true)) { foreach (MyItem item in myItems) { await staThread.Run(() => { DoLoooongWork(item); }, CancellationToken.None); } } } catch (Exception ex) { MessageBox.Show(ex.Message); } } More info about ThreadWithAffinityContext. [UPDATE] You mentioned in the comments that the code inside DoLoooongWork looks like this: zkemkeeper.CZKEM axCZKEM1 = new zkemkeeper.CZKEM(); axCZKEM1.Connect_Net(ip, port); I had never heard of "zkemkeeper" before, but I did a brief search and found this question.
Apparently, Connect_Net only establishes the connection and starts a session, while the whole communication logic happens asynchronously via some events, as that question suggests: bIsConnected = axCZKEM1.Connect_Net("<IP_ADDRESS>", Convert.ToInt32("4370")); if (bIsConnected == true) { iMachineNumber = 1; if (axCZKEM1.RegEvent(iMachineNumber, 65535)) { this.axCZKEM1.OnFinger += new zkemkeeper._IZKEMEvents_OnFingerEventHandler(axCZKEM1_OnFinger); this.axCZKEM1.OnVerify += new zkemkeeper._IZKEMEvents_OnVerifyEventHandler(axCZKEM1_OnVerify); // ... } } That would be a whole different story. Leave a comment if that's the case and you're still interested in some solution.
If the diagram is commutative, $f$ is one-one and $g$ is onto. Let $f:A \to B$, $g:B \to A$, and $\operatorname{id}:A \to A$ be the identity. If the diagram [diagram omitted] commutes, prove that $f$ is one-one and $g$ is onto. THEOREM If the diagram above commutes, then $f$ is one-one and $g$ is onto. PROOF The commutativity of the diagram implies that $$gf(x)=x \quad \forall x\in A$$ thus $gf$ is one-one and onto. $(1)$ $f$ is one-one. Assume that $f(x)=f(y)$. Then $$f(x)=f(y)$$ $$gf(x)=gf(y)$$ $$x=y$$ so that $f$ is one-one. $(2)$ $g$ is onto. If it weren't the case, $gf$ wouldn't be onto, which is impossible. I don't know whether the last "proof" is right. I can see why it must be the case, but I'm not able to explain it and give a clear proof like that of $f$ being one-one, which seems very clear. An absolutely watertight way to prove that $g$ is onto is to take $a\in A$ and show that there is a $b\in B$ for which $g(b)=a$, and this should not be hard to find ... For sure, $f(A)\subseteq B$. @Sigur Sorry, yes. I was overthinking it too much! Start with $a\in A$ (on the right of the diagram). Then pull back to $a\in A$ on the left of the diagram. Then apply $f$ to this element $a$; we know $gf(a) = a$. Thus, for any $a \in A$, $b = f(a)$ is the element we seek. @DavidWheeler I had written that, but I wasn't sure. I sometimes overthink these results that are almost tautological, and in this case I was worrying about $f$ being onto or not, because I was thinking about $a$ in the west-side $A$ and not in the east-side $A$. Thanks anyway. You might find this post helpful. Theorem. Let $A$ and $B$ be sets. Consider the following three statements: $f\colon A\to B$ is one-to-one. There exists $g\colon B\to A$ such that $gf = \mathrm{id}_A$ ($f$ has a left inverse). For every set $C$ and all functions $h,k\colon C\to A$, if $fh = fk$ then $h=k$ ($f$ is left-cancellable). Then 2$\implies$3$\iff$1.
Moreover, if $A$ is nonempty, then 1$\implies$2 so all three are equivalent. Proof. 2$\implies$3. Let $h,k\colon C\to A$ be such that $fh = fk$. Let $g$ be the function guaranteed by 2. Then $$h = \mathrm{id}_Ah = (gf)h = g(fh) = g(fk) = (gf)k = \mathrm{id}_Ak = k.$$ Thus, $f$ is left-cancellable. 3$\implies$1. Let $a,a'\in A$ be such that $f(a)=f(a')$. We need to prove that $a=a'$. Let $C=\{0\}$, $h\colon C\to A$ be given by $h(0)=a$, and $k\colon C\to A$ be given by $k(0)=a'$. Then $fh(0) = f(a) = f(a') = fk(0)$, so $fh=fk$. Since $f$ is left-cancellable, we conclude that $h=k$. Hence $a = h(0) = k(0) = a'$, proving that $f$ is one-to-one. 1$\implies$3. Let $C$ be a set and $h,k\colon C\to A$ be such that $fh = fk$. We need to show that $h=k$. Let $c\in C$. Then $f(h(c)) = f(k(c))$; since $f$ is one-to-one, we conclude that $h(c)=k(c)$. Since $h(c)=k(c)$ for all $c\in C$, it follows that $h=k$. 1$\implies$2 if $A$ and $B$ are nonempty. Since $A$ is nonempty, there exists $a_0\in A$. Define $g\colon B\to A$ as follows: $$g(b) = \left\{\begin{array}{ll} a &\text{if }b\in f(A)\text{ and }f(a)=b;\\ a_0&\text{if }b\notin f(A). \end{array}\right.$$ This is well-defined, since $f(a)=f(a')=b$ implies $a=a'$. And if $b\in f(A)$, then there exists $a\in A$ such that $f(a)=b$. Now, let $a\in A$. Then $g(f(a)) = a$, so $gf = \mathrm{id}_A$, as desired. $\Box$ In particular, given a function $f\colon A\to B$, if a $g$ exists that makes the diagram commute, then $f$ is one-to-one. Conversely, if $f$ is one-to-one and $A$ is nonempty (or $A$ and $B$ are both empty), then we can find a $g$ that makes the diagram commute. Here's the dual: Theorem. Let $A$ and $B$ be sets. Consider the following three statements: $f\colon A\to B$ is onto. There exists $g\colon B\to A$ such that $fg=\mathrm{id}_B$ ($f$ has a right inverse). For every set $C$ and every functions $h,k\colon B\to C$, if $hf = kf$, then $h=k$ ($f$ is right-cancellable). Then 2$\implies$3$\iff$1.
Moreover, the implication 1$\implies$2 is equivalent to the Axiom of Choice. Proof. 2$\implies$3. Let $C$ be a set and let $h,k\colon B\to C$ be such that $hf=kf$. Let $g$ be the function guaranteed by 2; then: $$h = h\mathrm{id}_B = h(fg) = (hf)g =(kf)g = k(fg) = k\mathrm{id}_B = k.$$ 3$\implies$1. Let $C=\{0,1\}$ and define $h\colon B\to C$ by $h(b)=1$ for all $b$, and $$k(b) = \left\{\begin{array}{ll} 1 & \text{if }b\in f(A);\\ 0 &\text{if }b\notin f(A). \end{array}\right.$$ Then $hf = kf$, hence by 3 we have $h=k$. Therefore, $k(b)=1$ for all $b\in B$, hence $f(A)=B$; that is, $f$ is onto. 1$\implies$3. Suppose that $C$ is a set and $h,k\colon B\to C$ are functions such that $hf = kf$. Let $b\in B$; we need to show $h(b)=k(b)$. Since $f$ is onto, there exists $a\in A$ such that $f(a)=b$. Therefore, $$h(b) = h(f(a)) = hf(a) = kf(a) = k(b).$$ Thus, $h=k$. If the Axiom of Choice holds, then 1$\implies$2: If $f$ is onto, then for every $b\in B$, the set $f^{-1}(b) = \{a\in A\mid f(a)=b\}$ is nonempty. By the Axiom of Choice, there exists a function $g\colon B\to \cup_{b\in B} f^{-1}(b)$ such that $g(b)\in f^{-1}(b)$ for each $b\in B$. I claim that $fg=\mathrm{id}_B$. Indeed, for every $b\in B$, $g(b)\in f^{-1}(b)$, so $f(g(b))=b$. If 1$\implies$2 holds, then the Axiom of Choice holds. Let $\mathcal{X}=\{A_i\}_{i\in I}$ be a nonempty family ($I\neq\varnothing$) of nonempty sets ($A_i\neq\varnothing$ for each $i\in I$). We need to show that there exists a function $g\colon I\to\cup_{i\in I}A_i$ such that $g(i)\in A_i$ for each $i\in I$. Let $B_i = A_i\times\{i\}$. Note that the family $\mathcal{Y}=\{B_i\}_{i\in I}$ consists of pairwise distinct sets. Let $Y=\cup_{i\in I}B_i$, and define $f\colon Y\to I$ by $f(b_i,i) = i$ (projection onto the second component). The map is onto, since each $A_i$ is nonempty, so $B_i\neq\varnothing$. By our assumption that 1$\implies$2, there exists $h\colon I\to Y$ such that $h(i)\in B_i$ for each $i\in I$.
Let $\pi_i\colon B_i\to A_i$ be the projection onto the first coordinate, $\pi_i(b_i,i) = b_i$. Define $g\colon I\to \cup_{i\in I}A_i$ by $$g(i) = \pi_i\circ h (i).$$ Since $h(i)\in B_i=A_i\times\{i\}$, we have $\pi_i\circ h(i) \in A_i$. This holds for each $i$, so $g$ is the desired choice function. $\Box$ In particular, if the diagram commutes, then $g$ is onto. Conversely, assuming the axiom of choice, if $g$ is onto, then we can find an $f$ that fits into the diagram and makes it commute. Thank you so much! I'll read this with some more time. I'm reading this now, as promised. Thank you again, Arturo. $fg=\operatorname{id}_A$ should read $fg=\operatorname{id}_B$, correct? Also $(hf)g=(kf)g=k(fh)$ should read $(hf)g=(kf)g=k(fg)$. The Axiom of Choice part is still fuzzy, but that is my fault. @Peter: Yes on the first two; fixed. What books do you recommend on Set Theory? I read a little out of Halmos' book. @PeterTamaroff: For basic level set theory, Halmos; for higher level, Hrbacek or Jech (the latter is very high level). OK. I will keep insisting with Halmos' book then! He seems to have few exercises which would come in handy. @PeterTamaroff: There was a very nice small book that had a lot of exercises, and was organized to follow Halmos's book (which suffers for lack of exercises): "Exercises in Set Theory", by L.E. Sigler; unfortunately, it is long out of print, but you might get lucky in some libraries or with BookFinder. Great. Thanks for all this. And one year later I now understand this all. For $a\in A$, take $b=f(a)\in B$ such that $g(b)=g(f(a))=a$.
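A small finite example (my own illustration, not from the answers above) shows that the diagram gives exactly injectivity of $f$ and surjectivity of $g$, and nothing more:

```latex
% gf = id_A on A = {1,2}, yet f is not onto and g is not one-to-one.
\begin{align*}
f &\colon \{1,2\} \to \{1,2,3\}, & f(1) &= 1, & f(2) &= 2, \\
g &\colon \{1,2,3\} \to \{1,2\}, & g(1) &= 1, & g(2) &= 2, & g(3) &= 1.
\end{align*}
```

Here $g(f(a)) = a$ for $a = 1, 2$, so the diagram commutes, and indeed $f$ is one-one and $g$ is onto; but $3 \notin f(A)$ and $g(1) = g(3)$, so $f$ need not be onto and $g$ need not be one-one.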
Approximating an integral with the value at some uniformly distributed points Let $f(x)$, defined on $[0, 1]$, be a smooth function with sufficiently many derivatives. $x_i = ih$, where $h =\frac{1}{N}$ and $i = 0,1,\cdots,N$, are uniformly distributed points in $[0, 1]$. What is the highest integer $k$ such that the numerical integration formula $$I_N=\frac{1}{N}(a_0(f(x_0)+f(x_N))+a_1(f(x_1)+f(x_{N-1}))+\sum_{i=2}^{N-2}f(x_i))$$ is $k$-th order accurate, namely $$\vert I_N-\int_{0}^{1}f(x)dx\vert\le Ch^k$$ for a constant $C$ independent of $h$? Please describe the procedure to obtain the two constants $a_0$ and $a_1$ for this $k$. For this problem, one way is to use the Euler-Maclaurin formula and compare the coefficients to obtain the final result. My question is, what if I don’t know the Euler-Maclaurin formula? What’s the intuition for this problem? Are there any natural ways to come up with the solution? Thanks! Edit 1. We may apply the composite trapezoidal rule, but the composite trapezoidal rule's error term involves the 2nd derivative, which (unfortunately) does not involve $f(x_1)$ and $f(x_{N-1})$, which is not what we want. I don’t know if the composite trapezoidal rule works; maybe this intuition is wrong When you don't know what to do in general, look for simpler examples. Analyze the special cases first. What happens when $N=1$? What is the best that can be done in this situation? What happens when $f$ is zero in a neighborhood of $0$ and zero in a neighborhood of $1$? What rule are you left with in this case?
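Following the hint about special cases, a quick numerical experiment can also reveal the answer. Matching the $h^2$ boundary term of the Euler-Maclaurin expansion of the trapezoidal rule suggests $a_0 = \frac12 - \frac1{12} = \frac{5}{12}$ and $a_1 = 1 + \frac1{12} = \frac{13}{12}$, leaving an $O(h^3)$ error, i.e. $k = 3$. The sketch below is my own check, not part of the original thread; the coefficient values come from that matching and should be re-derived rather than taken on faith. It estimates the observed order by halving $h$:

```python
import math

def corrected_trapezoid(f, n, a0=5/12, a1=13/12):
    """I_N = h*(a0*(f0 + fN) + a1*(f1 + f_{N-1}) + sum_{i=2}^{N-2} f(x_i))."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    interior = sum(f(x[i]) for i in range(2, n - 1))  # i = 2 .. N-2
    return h * (a0 * (f(x[0]) + f(x[n]))
                + a1 * (f(x[1]) + f(x[n - 1]))
                + interior)

def observed_order(f, exact, n=64):
    """log2 of the error ratio when h is halved: approximately k for a
    k-th order rule."""
    e1 = abs(corrected_trapezoid(f, n) - exact)
    e2 = abs(corrected_trapezoid(f, 2 * n) - exact)
    return math.log2(e1 / e2)

# exp has nonzero derivatives of every order, so no accidental cancellation.
print(observed_order(math.exp, math.e - 1))  # settles near 3
```

For smooth $f$ with $f''(0) + f''(1) \neq 0$ the printed order settles near 3; pushing the rule to 4th order would require adjustable weights at $f(x_2)$ and $f(x_{N-2})$ as well, which the given formula does not allow.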
How to add a local plugin with cordova When I use the command ionic cordova plugin add MyPlugin, it throws an error: Error: Invalid Plugin! MyPlugin needs a valid package.json But I have already used the command plugman createpackagejson MyPlugin. MyPlugin is a locally designed plugin, but adding it fails. ionic version: 3.8.0 cordova version: 7.0.1 Finally solved the problem: it was Cordova's environment that went wrong. I reinstalled it and then it was OK, and the version number did not change! I answered this question a few years ago ;-) https://stackoverflow.com/questions/30345035/resolved-how-to-add-a-local-cordova-plugin-with-ionic-ionic-2-ionic-3-io
how to convert this uri file://mnt/ to this uri content://media I posted a similar question, but I have not received an answer that solves the problem: convert url from sdcard to content uri I need simple code that converts this format: file:///mnt/sdcard/Movies/Your_voice/Your_voice080513_141510.mp4 (not the content:// format) to this format: content://media/external/video/media/2308 I saw these related questions: How to convert a file:// uri into content:// uri? for the opposite direction Android: Getting a file URI from a content URI? Try using MediaStore.Video.Media.getContentUri(String volumeName)
What's the easiest website registration process? Are there other ways of confirming a user registration other than a confirmation email? I'm finding a lot of confirmation emails are getting sent to users' junk mail folders and flooding our customer service with calls. The current solution is to simply ask for the email twice in the form and not bother sending out the confirmation and validation email. Is there a better way? Why not just stick with convention and use the "confirm email address" method? Might it also be worth displaying a message to the user after the form is submitted to say something along the lines of "please check your spam folder if the email does not appear within X minutes"? What are you trying to accomplish by your website registration process? Are you trying to gather information on your users? Guarantee that this is their only account on your site? Authenticate that they are who they say they are? If there is any possible way to not require registration at all, then you've just gained billions of potential users. Henry is asking all the right questions. "I'm not here to be in a relationship!" -- http://www.uie.com/articles/three_hund_million_button Trying to gather information on our users and have the ability to communicate back with them on product updates, etc. We tried the "no registration" process for a while, where people could simply be users with no requirement to register. However, business objectives have been changed by the higher-ups. I should add too that there is still an option to be a user without registering. I'm just trying to make sure those who do opt in to register are taken through the simplest method possible. One alternative would be to use OAuth to leverage one or more social network authentication services. These social networks have already taken the user through the email confirmation process. It is a complicated process for you, the programmer, but many CRMs and cloud services have ways of making it much easier.
Meanwhile, it is simplicity incarnate for your users because they use the same credentials to get on your site as they do to get on facebook or twitter. You never see those credentials, but you can get other information from the social network's database, allowing you to personalize each visitor's experience of your site. Best yet, if the visitor changes their credentials on the social network, your site automatically accepts the new credentials, all without the additional burden of keeping track of the visitor's information.
Why are some upvotes missing? I answered some questions and I can see that others have upvoted my answers, but I don't see any points added to my reputation and the total reputation points are the same! I took a picture of it. Do you know what is happening? :D Had you possibly already reached the limit of 200/day (not including acceptances)? I see you've passed that by now. Yeah, I've reached 215. Is that the limit for a day? The limit from upvotes is 200 a day. Reputation from accepted answers, accepting an answer on your own question, and receiving a bounty is immune to this reputation cap. You can earn a maximum of 200 reputation per day from any combination of the activities below. Only bounty awards and accepted answers are not subject to the daily reputation limit. Thanks @Mari-LouA Reaching the daily reputation cap also earns you the mortarboard badge.
MariaDB slow log - sleeping state queries implementation I'm using MariaDB 10.0.36, and I see a lot of open threads in sleep under the command column, so I guess that the application doesn't close the connection to the DB. Eventually, the thread will close after reaching the timeout of 300 seconds. I want to know what queries are executed at the start of these threads. I tried to run the slow log with this configuration: log_slow_filter admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk log_slow_rate_limit 100 log_slow_verbosity innodb,query_plan,explain slow_launch_time 2 slow_query_log OFF slow_query_log_file slowlog240924.log long_query_time 100.000000 Example from the slow log: # User@Host: x @ x # Thread_id: 9248835 Schema: x QC_hit: No # Query_time: 0.000067 Lock_time: 0.000013 Rows_sent: 1 Rows_examined: 1 # # explain: id select_type table type possible_keys key key_len ref rows Extra # explain: 1 SIMPLE documents const PRIMARY PRIMARY 8 const 1 # SET timestamp=1727181672; select * from documents where documents.id = 97863 limit 1; When I go over the slow log, I see that all the queries that were logged have a Query_time of less than a second. My question is: are those queries I see with short execution times the ones I'm looking for, just with their real execution times, caught because the thread ended after 300 seconds, or is something wrong with my configuration? long_query_time = 100.000000 seconds will give you only very long queries. "1" is useful for finding naughty queries; "0". Consider summarizing the slowlog via pt-query-digest or mysqlslow. See SlowLog The "general log" gives you all queries from all connections. Caution: it can write lots to disk. select * from documents where documents.id = 97863 limit 1; should take only millisecond(s), assuming PRIMARY KEY(id). Thanks for the reply, Rick. But why do I see queries that run in less than 100 seconds in my slow log?
Maybe the reason is the log_slow_filter? For example, if a query runs in less than 100 seconds but uses filesort, will I see this query in the slow log? @Agar123 - Probably some of the items in log_slow_filter apply. I prefer to turn off all flags and set the time low. See https://mariadb.com/kb/en/server-system-variables/#log_slow_filter Thanks, from what I'm reading, only the not_using_index filter is applied if the query doesn't exceed the threshold. But the query above used an index and was still logged; what could be the cause? @Agar123 - I see several items in log_slow_filter,
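As a concrete illustration of the advice above (a sketch of my own, not from the thread; the variable names are MariaDB's, but verify the exact syntax against the documentation for your version), turning off the filter flags and lowering the threshold makes the slow log capture only genuinely slow statements:

```sql
-- Runtime changes; mirror them in my.cnf under [mysqld] to survive a restart.
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1.0;    -- log anything slower than 1 second
SET GLOBAL log_slow_filter = '';     -- no filter-based logging (filesort, tmp_table, ...)
SET GLOBAL log_slow_rate_limit = 1;  -- log every matching query, not 1 in N
```

With the filters active, a statement matching one of them (for example filesort) can evidently end up in the log even though its Query_time is far below long_query_time, which would be consistent with what the question observed.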
Why does casting an array of characters return a long number instead of ASCII value? I am trying to cast an array of characters to int (or the corresponding ASCII value). When I try to convert a single char to int it works fine, but when I try to do it with an array it doesn't quite work. Why is that? What am I doing wrong? This is an example of a single char to int: char inputCharacter; int ASCIIValue; scanf("%c", &inputCharacter); ASCIIValue = (int) inputCharacter; printf("%i", ASCIIValue); Input: a Output: 97 This is an example of a char array to int: char inputCharacter[10]; int ASCIIValue; scanf("%c", &inputCharacter); ASCIIValue = (int) inputCharacter; printf("%i", ASCIIValue); Input: a Output: 5896032 Expected Output: 97 What do you expect to happen if you convert an array of multiple characters into a single numeric value? ASCIIValue = inputCharacter[0] will get you the expected value for the following printf call. The name of the array is a pointer to the array, which has the same value as a pointer to the first element of the array, so here the base address of inputCharacter is being cast to int, and hence the behaviour. @RinkeshP: The name of an array is not a pointer to the array. In many contexts, an array is converted to a pointer to the first element of the array (which is different from a pointer to the array), but this does not occur in all contexts, and students should learn the correct rules and not be taught that the name of an array is a pointer to the array. @EricPostpischil Yes, you are right. I should have framed it better. Matteo, Curious: given "Why does casting an array of charcters return a long instead of ASCII value?", why did you think (int) inputCharacter returned a long and not an int? @Matteo It was supposed to be long number, not long. I will edit it now! @Matteo To improve clarity, if you do not mean the type long, consider some word other than long. @chux-ReinstateMonica it was a typo unfortunately. I have fixed it.
Given all the answers in the posts, it seems that what I did was print out the address of the first element. That said, what would be the best approach to convert a char array to an int array? With a loop iterating through every item in the array? @Matteo Read my answer again Actually, you are printing the base address (&) of the first element of inputCharacter, which is the 0th element. The address of a variable is a pointer, not an integer, but you are using the "%i" format specifier in printf(). The correct format specifier for addresses is "%p". To get the ASCII value, you need to de-reference it like: int ASCII_0 = (int)inputCharacter[0]; . . . int ASCII_9 = (int)inputCharacter[9]; Or, better, use a loop. Converting a char array to an int array: int ASCII_vals[10] = {0}; for(int i = 0; i < sizeof(inputCharacter)/sizeof(inputCharacter[0]); i++) { ASCII_vals[i] = inputCharacter[i]; } Edit: You can get the size of inputCharacter using the sizeof() operator. int len = sizeof(inputCharacter) / sizeof(inputCharacter[0]); With char inputCharacter[10], inputCharacter is an array. When an array is used, as in (int) inputCharacter, the array is converted to the address of the first element*1. It is like (int) &inputCharacter[0]. That address, which may be wider than an int, is converted to an int, not a long as in the question. In OP's case, that result was 5896032. The address has nothing to do with the content of the array. There is no reason to expect 97. Tip: save time and enable all warnings.... char inputCharacter[10]; scanf("%c", &inputCharacter); ... may generate a warning like warning: format '%c' expects argument of type 'char *', but argument 2 has type 'char (*)[10]' [-Wformat=] This hints that something is amiss.
*1 Except when it is the operand of the sizeof operator, or the unary & operator, or is a string literal used to initialize an array, an expression that has type “array of type” is converted to an expression with type “pointer to type” that points to the initial element of the array object and is not an lvalue. C17dr § 6.3.2.1 3
Wrong opacity transition behaviour in Chrome when loading CSS from file? Not sure if I'm doing something wrong here or whether this is indeed a Chrome rendering bug. Here is my very small example: .hover-test span { opacity: 0; transition-property: opacity; transition-duration: 1000ms; } .hover-test:hover span { opacity: 1; } <!doctype html> <html lang="en"> <head> <meta charset="UTF-8"> <title>TEST opacity</title> <meta name="viewport" content="width=device-width,initial-scale=1"/> <link href="styles.css" rel="stylesheet"> </head> <body> <button class="hover-test">hover me<span>hidden</span></button> </body> </html> It works in all browsers I checked, as expected. It does work in Chrome as well when I put the CSS in a style tag directly in the HTML file. It does NOT work in Chrome (91.0.4472.101) when I put the CSS in a separate file and include it with a link tag. By "not working" I mean that on page load the span is shown and then faded out, without the mouse cursor being near the button. Is this a Chrome bug, or am I doing something wrong here? How can I achieve the desired behaviour in Chrome, which is: the span is hidden on page load and only shown/hidden on hover? This seems weird. In what order do you include the css rules? Actually no order, just one styles.css file with the CSS from my question This should not be happening. If I were to guess, I'd say you have something else conflicting with the opacity before the .hover-test span rule, causing your css to load the item with opacity of 1 and then animate it to opacity 0. A general span rule perhaps, or even a body rule. I'd search for all the opacity rules and go from there I created two files with exactly the code from above. No other code involved. Uploaded it here: https://helhum.io/opacity-test.html On a webserver with caching I can only reproduce it when the CSS file does not come from cache; thus it only happens on a forced reload of the page (shift + reload button).
Still weird though, IMHO. This is a timing issue, the html loads well before the css. That's why you only see this on the first load. If you want to "fix" this, you can hide the body or the html inside the html and have the css file show it. You can use display, visibility, opacity or whatever Actually it is a Chrome bug, which was fixed 11 days ago: https://bugs.chromium.org/p/chromium/issues/detail?id=332189 But nevertheless I will apply a workaround similar to the one you suggested for now It looks like it is a Chrome bug, as written here: https://www.hawkbydesign.com/weird-google-chrome-css-transition-on-load-bug/ Well, after making some further updates and refreshing the page, I noticed that the transition was firing on page load. What I mean by this is instead of being hidden on page load, as they should be, the elements were visible and would transition to their hidden state. this is exactly the problem reported. More: The bug happens whenever you don't have any script tags on the page, apparently. For whatever reason, this causes css transitions to trigger upon page load. While I was also digging, it appears that this happens sometimes with the form tag as well. What a weird bug! The solution is to include a script tag in your page. Where I found the solution, they said to include a space in the script tag, but I found that it works fine even without the space. I actually added jQuery on the page using the CDN link and the bug seems gone. Actually on my production page I had a JS include, but used defer. Removing the defer from this script tag (before the closing body tag) solved the issue as well for me. You appear to be bumping up against a timing problem.
Try this code with your styles file: <!doctype html> <html lang="en"> <head> <meta charset="UTF-8"> <title>TEST opacity</title> <meta name="viewport" content="width=device-width,initial-scale=1"/> <link href="style.css" rel="stylesheet"> <style> </style> </head> <body> <script> function insert() { document.body.innerHTML = '<button class="hover-test">hover me<span>hidden</span></button>'; } window.onload = insert; </script> </body> </html> This waits for loading before putting the button in the document and on Chrome (and Edge) on Windows10 at least all is well. Chrome/Edge seem to differ from say Firefox in whether loading is synchronous or not - or maybe it's just a lot faster writing the document. Obviously this cannot be a solution for a production website :) Agreed, but it's an explanation.
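Based on the empty-script-tag workaround described in the answers above, a minimal sketch of the original page with that fix applied (the styles.css file name is taken from the question; whether an empty tag is sufficient in every Chrome build is an assumption drawn from the thread, not something verified here):

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>TEST opacity</title>
  <meta name="viewport" content="width=device-width,initial-scale=1"/>
  <link href="styles.css" rel="stylesheet">
  <!-- Empty script tag: reported workaround for the Chrome transition-on-load bug -->
  <script></script>
</head>
<body>
  <button class="hover-test">hover me<span>hidden</span></button>
</body>
</html>
```

Per the thread, the presence of any non-deferred script tag is what suppresses the spurious transition, so an existing script include with defer removed works equally well.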
Animation: How do I make animation to repeat after I visit another activity and come back? I was wondering how to make an animation repeat after coming back to a specific page. My animation basically translates a RelativeLayout a few pixels down when you visit a certain activity. But if I then click on a button that sends me to a different page and then hit the back button to return, the TranslateAnimation doesn't start again. Here is the code: RelativeLayout r1; r1 = findViewById(R.id.r1); TranslateAnimation a = new TranslateAnimation(0,0,-10f,0); a.setDuration(800); a.setFillAfter(true); r1.startAnimation(a); How exactly do I make this animation restart every single time I visit my activity? Are you using this code inside onCreate()? If yes, use that code inside onResume() Make a separate method in the activity void myanimation(){ TranslateAnimation a = new TranslateAnimation(0,0,-10f,0); a.setDuration(800); a.setFillAfter(true); r1.startAnimation(a); } then call the method in the activity's onResume() @Override public void onResume(){ super.onResume(); myanimation(); } This is the solution, pretty silly of me not to think about onResume(). Thank you, I'll vote your comment as solution after the restriction on this site allows me to. Use this inside your onResume() method. onResume() is called every time you interact with your activity. Thanks, It was silly of me to forget about the onResume() method.
grouping data from two datasets I've got a problem with datasets: I've got two datasets from two different servers but they have the same columns. So it's like this: First dataset: asset description make jobtype jan feb ... dec 0001 mine ik Acc 0 0 10 0002 yours ic Over 0 0 10 Second dataset: asset description make jobtype jan feb ... dec 0001 mine ik Acc 10 0 10 0002 yours ic Gen 0 0 0 But I would like to merge the two datasets into one like this: asset description make jobtype lhjan imjan lhfeb imfeb ... lhdec imdec 0001 mine ik Acc 0 10 0 0 10 10 0002 yours ic Over 0 0 0 0 10 0 so all the data from one is combined with the second with the same asset and the same jobtype. I tried LINQ but I can't get it to produce what I want. I'm working in VB.NET on Framework 3.5. Could you help me? Julien If you literally just want to join the two, you can use the same DataSet in 2 different Adapter.Fill() calls. If you do not use a NEW DataSet before the second Fill, the 2 will get merged. I just want to make sure that data sources 1 and 2 are actually data tables; this is how my answer will describe this. I'll also be describing it in C#, though the syntax doesn't differ much. You need to alternate your selections for the intermingled fields. You'll also need to adjust your datatypes as necessary from dt1 in ds.Tables["datatable1"].AsEnumerable() join dt2 in ds.Tables["datatable2"].AsEnumerable() on new { asset = dt1.Field<string>("asset"), jobtype = dt1.Field<string>("jobtype") } equals new { asset = dt2.Field<string>("asset"), jobtype = dt2.Field<string>("jobtype") } select new { asset = dt1.Field<string>("asset"), description = dt1.Field<string>("description"), make = dt1.Field<string>("make"), ... lhjan = dt1.Field<int>("jan"), imjan = dt2.Field<int>("jan"), lhfeb = dt1.Field<int>("feb"), imfeb = dt2.Field<int>("feb"), .... 
}; here is the approximate VB syntax: Dim query = _ From dt1 In dataset.Tables("datatable1").AsEnumerable() _ Join dt2 In dataset.Tables("datatable2").AsEnumerable() _ On New With {Key .asset = dt1.Field(Of String)("asset"), Key .jobtype = dt1.Field(Of String)("jobtype")} _ Equals New With {Key .asset = dt2.Field(Of String)("asset"), Key .jobtype = dt2.Field(Of String)("jobtype")} _ Select New With _ { _ ' see members from c# example _ } I will test it tomorrow but I think I've already tried that. I will give my feedback tomorrow Thanks you might need a .Distinct() at the end depending on the uniqueness of asset/jobtype combinations. Just enclose the query expression in parentheses and call .Distinct() on it: var query = (from ... join ... select).Distinct()
H2 database login issues I am using a local file based H2 Hibernate database with Grails. I have two separate file-type DBs set up in DataSource.groovy: dataSource { logSql = false pooled = true dbCreate = "update" // one of 'create', 'create-drop', 'update', 'validate', '' url = "jdbc:h2:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000" } and dataSource_publish { logSql = false pooled = true dbCreate = "update" // one of 'create', 'create-drop', 'update', 'validate', '' url = "jdbc:h2:/dbmak/devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE" } They both produce files in the project root directory: devDb.h2.db and dbmak.h2.db The last thing I did in the dB was to modify the 'sa' password - preferring to set it to a non-null value. I did this by logging into the dbconsole via user 'sa' and then using the command: set password 'newPassword' Which seemed to work fine. However, when I now try and restart the application in the GGTS I get the error: org.h2.jdbc.JdbcSQLException: Wrong user name or password [28000-173] I've tried the original null password as well as other login/password combinations that have worked in the past but still get the above error. One other thing I've done is to replace the h2.db files with recent backed-up copies but still no success. I was wondering if the Hibernate system data, containing the 'sa' password, resides elsewhere rather than in these individual application databases. If I have a copy of the h2.db files from when the system worked I can simply replace the modified one that is now failing. One more thing - when I compare the size of the current (failing - which is 230kb in size) devDb.h2.db file with the backed-up one (which is 1252kb in size) I notice that they are significantly different in size, and when I try and restart the application with the backed-up copy of the db file, after the application fails to start the size goes back to the failed size of 230kb. I am still unable to login to this dB. 
Perhaps someone can give me further information on how this Hibernate DB is structured, especially in terms of where it retains system user data. As stated above, the two application DBs are situated in the root directory of the Grails application, and I imagine that the actual system data controlling access to both individual application DBs is going to be in a separate file. If I could locate this I might be able to resolve the problem? Alternatively, is there a web link for this? I have resolved this login issue - as I have two separate DBs and have set both 'sa' passwords to a non-null value, I have to explicitly define the password for each separate DB. So, in DataSource.groovy I now have two sets of login/password definitions: dataSource { username = "sa" password = "value01" } dataSource_publish { username = "sa" password = "value02" } Without this I believe the login & password are automatically set to 'sa' and "" (i.e. null). And so without the dataSource_publish block, on startup the app was attempting, unsuccessfully, to log in to the dataSource_publish dB with a null password. Once you see it it's obvious.
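Putting the resolution described above together with the original URLs, a sketch of the relevant part of DataSource.groovy — "value01" and "value02" are placeholders from the answer, not real credentials:

```groovy
// DataSource.groovy -- each datasource needs its own explicit credentials
// once the 'sa' passwords are non-null
dataSource {
    username = "sa"
    password = "value01"   // placeholder
    url = "jdbc:h2:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
}

dataSource_publish {
    username = "sa"
    password = "value02"   // placeholder
    url = "jdbc:h2:/dbmak/devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE"
}
```

Without the username/password lines, Grails falls back to 'sa' with an empty password, which is exactly the failing login the question describes.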
AngularJS : Render child li when parent li is selected I have a requirement where I need to display only the child of the selected li. Ex. In the image shown below, EIS_ASCP_FORECAST_SETS_V is checked, so its children FND_USER and MRP_FORECAST_DESIGNATORS_V are shown and the rest of the elements are hidden. Similarly, when FND_USER is selected, its child must be shown. The tree is created using jQuery and is dynamic. I'm using AngularJS for the rest of the page. I tried using $(event.target).parent().find('li') but was not able to get the child element out of it. Code for forming the tree <li class="treeView"> <a ng-click="getViewColumns('EIS_ASCP_FORECAST_SETS_V', '724')" href=""><b>EIS_ASCP_FORECAST_SETS_V</b></a>&nbsp;<input type="checkbox" ng-checked="true" ng-click="useInSelect('EIS_ASCP_FORECAST_SETS_V')"> <ul> <li><a ng-click="getViewColumns('FND_USER', '724')" href="">FND_USER</a>&nbsp;<input type="checkbox" ng-click="useInSelect('FND_USER')"></li> <li><a ng-click="getViewColumns('MRP_FORECAST_DESIGNATORS_V', '724')" href="">MRP_FORECAST_DESIGNATORS_V</a>&nbsp;<input type="checkbox" ng-click="useInSelect('MRP_FORECAST_DESIGNATORS_V')"></li> </ul> </li> Show some effort - what have you tried so far? @SureshKamrushi I didn't understand how to start. @Sukesh, just read about the ng-repeat and ng-if directives We can help you debug. We cannot write complete code for you. @Grundy I didn't use AngularJS to write the tree code @Sukesh, so why did you add the angularjs tag? Can you provide sample code that you already use? Why not just use AngularJS for the tree? :-) @Sukesh, add code to your post instead of a comment @Grundy recently I added checkboxes to the tree @Sukesh, can you provide a working plunkr with your code? It is not clear what you mean when you say dynamic tree. @Grundy the solution that Nishi provided works. I also removed the function call like you suggested. You can add ng-if="showHideComp(parentId)" to the child element. 
and inside the showHideComp() function check the isSelected property of parentId and return true/false accordingly. example: <input type="checkbox" ng-model="checked" ng-init="checked=true" /></label><br/> Show when checked: <span ng-if="checked" class="animate-if"> This is removed when the checkbox is unchecked. </span> Using a function in the view is not good for Angular, because it is called on every digest loop; why not just check the isSelected property of parentId without a function? @Nishi thanks for the suggestion, will check and let you know @Nishi your solution works great. But how do I modify it for a tree which is generated dynamically? I mean, I can't use ng-model="checked" for all the list elements, right? Generate a dynamic id and check with that in ng-if @Sukesh: you can try with ng-model also, because each dynamic field's model will be separate.
Connecting with Facebook API in Power BI Query window I have problems connecting with the Facebook API in Power BI. The problem occurs when I try to aggregate the number of likes from a table - Count of id. It works perfectly with pages with a low number of likes, like a local blog and my personal page, but when I try to do that count with a page with a lot of likes, the aggregation process never finishes. Once I let it run for 20 minutes. I tried to limit the number of posts to a very low number, such as 40 or 10, but it still could not finish the aggregation. Please help me find a solution! Places where I looked for the answer: https://www.linkedin.com/pulse/20140616113222-14085524-using-power-query-to-tell-your-story-form-your-facebook-data I would try adding a Transform / Count Rows step, instead of your Aggregation.
PenAlignment.Outset has no effect I'm drawing a rectangle with Pen.Alignment = PenAlignment.Outset onto a PictureBox (which has a green background). This is the code: Private Sub PictureBox1_Paint(sender As Object, e As PaintEventArgs) Handles PictureBox1.Paint Dim iTop As Integer = 1 Dim iLeft As Integer = 1 Dim iRight As Integer = 3 Dim iBottom As Integer = 3 Dim r As Rectangle = Rectangle.FromLTRB(iLeft, iTop, iRight, iBottom) Using nPen As Pen = New Pen(Color.Black) nPen.Alignment = PenAlignment.Outset e.Graphics.PageUnit = GraphicsUnit.Pixel nPen.Width = 1 e.Graphics.DrawRectangle(nPen, r) End Using End Sub However, the PenAlignment doesn't have any effect. This is the output: Instead of drawing the rectangle with "outset", it draws exactly where the rectangle is. I expect the black lines to be drawn just around the rectangle. So the first black point should be at 0-0, not at 1-1. What might be going wrong here? I don't see any mistake in the code. Well, that's strange. :-/ I believe the Pen.Alignments are used when the width of the Pen is greater than 1. If you want to draw a bigger rectangle, you will have to resize it yourself. I don't see that mentioned in the MSDN. Are you sure? I had the same problem and got no answers so I had to work around it. 
Here's what I did: Private Sub Form1_Paint(sender As Object, e As PaintEventArgs) Handles Me.Paint 'Main Shape Values Dim x = 100 Dim y = 100 Dim wdth = 300 Dim hght = 200 Dim shP = 2 'Main Shape Dim ShapePen As New Pen(Color.Black, shP) Dim Shape As New Rectangle(x, y, wdth, hght) e.Graphics.DrawRectangle(ShapePen, Shape) 'Values For the Alignment Dim P_Thck = 20 Dim R_ThickL = (P_Thck / 2) + (shP / 2) Dim R_ThickS = R_ThickL * 2 Dim AlignmentPen As New Pen(Color.Red, P_Thck) 'Dim Align_Outset As New Rectangle(x - R_ThickL, y - R_ThickL, wdth + R_ThickS, hght + R_ThickS) Dim Align_Inset As New Rectangle(x + R_ThickL, y + R_ThickL, wdth - R_ThickS, hght - R_ThickS) 'e.Graphics.DrawRectangle(AlignmentPen, Align_Outset) e.Graphics.DrawRectangle(AlignmentPen, Align_Inset) End Sub I use two shapes. One is my working shape and the other is the one I adjust for my alignment graphics. Inset Outset
Update list item in root site when other list's item is updated in subsite I have a "main" list in the root site of a site collection and multiple "child" lists in subsites. When I update (create/update/delete) an item in a subsite list I would like to update an item in the root site list. I thought about an event receiver on the child list but I wonder if there is a better solution to achieve it? Any ideas? Any specific reason you need to do this? It seems like you want to create a kind of aggregated list at the root, which probably can be used for reporting? But anyway, in order to achieve this it is better to use an event receiver. Yes, I need an aggregation list for reporting For aggregation, another approach you can do is: export all your lists from the different sites into Excel (this can be any site from a different web app as well); this will create connections within Excel. Then have a separate Excel file which combines the data from the earlier ones using a query. Now whenever you want to generate a report, just refresh these two Excel files in order :) This is completely OOTB, no coding at all. Thank you for your answer. I think I'll use an event receiver Create a custom item event receiver, or a custom workflow is the way to go. If all child lists can make use of a common content type, then I'd develop the event receiver / workflow to work against this content type
Concatenating char arrays in C++ (tricky) So I've got an exercise in which I have to concatenate 2 char arrays as follows: const int MAXI=100; char group[MAXI+7]="things/"; char input[MAXI]; cin >> input; //Doing something here! cout << group << endl; Have to make something happen so it returns -- things/input_text -- The tricky part is that I am not allowed to use pointers, the string library or any kind of dynamic arrays. What to do? EDIT: I don't need to print it, I need the variable to have the value things/input_text, as I'm going to use it for something else! EDIT2: I can't use the <string> library, which means I can't use strcat() or anything in that library. I'm provided another module which is triggered as follows: void thing(group, stuff, more_stuff); That's it. print one then the other? You want it in C or in C++? C has strcat & strncpy etc... to be used with extreme care. C++ gives you std::string returns or prints? there's a big difference @luser droog can't, because I need the variable to be used in an external thing; it doesn't need to be printed, but to be used as a tool for something else in the program. Write one function that represents the string. char str(int x) { if (x<7) return group[x]; else return input[x-7]; } @basile Can't add the library so I can't use strcat. "The tricky part" isn't well described or motivated. ... Why? And precisely what can you not use? And why? Uhm... I would hope that this is for homework and not for actual production code. @NikBougalis obviously! I'm just learning! Something like this? #include <iostream> using namespace std; const int MAXI=100; int main() { char group[MAXI+7]="things/"; char input[MAXI]; cin >> input; for(int i=0; i<MAXI; i++) { group[7+i]=input[i]; if(input[i]=='\0') break;//if the string is shorter than MAXI } cout << group << endl; } ok, yeah, I'm totally useless, this is like so easy yet I was bound to find a hidden way to do it. 
Thank you soooo much :) Your code, as is, may leave group without a NULL terminator, and that would be bad. You should also highlight the danger of a buffer overflow in the cin >> input statement using a very bright marker to help the OP learn how to spot potential issues early. #include <iostream> using namespace std; int main(int argc, char **argv) { const int MAXI=100; char group[MAXI+7]="things/"; char input[MAXI]; // Warning: this could potentially overflow the buffer // if the user types a string longer than MAXI - 1 // characters cin >> input; // Be careful - group only contains MAXI plus 7 characters // and 7 of those are already taken. Which means we can only // write up to MAXI-1 characters, including the NULL terminator. for(int i = 0; (i < MAXI - 1) && (input[i] != 0); i++) { group[i + 7] = input[i]; group[i + 8] = 0; // ensure null termination, always } cout << group << endl; return 0; } You can avoid the repeated group[i+8] = 0 if you are so inclined, by taking the i out of loop scope. That's left as an exercise to the OP. I luv you both <(^.^)>
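The bounds-checked loop from the second answer can be factored into a small helper without `<cstring>` or dynamic allocation — a sketch only; the name append_str and the explicit capacity parameter are my own additions, not from the thread:

```cpp
#include <cassert>
#include <string>

// Appends src after the first prefix_len characters already in dst,
// writing at most dst_cap - 1 characters in total and always
// null-terminating, so the buffer can never overflow.
void append_str(char dst[], int dst_cap, int prefix_len, const char src[]) {
    int i = 0;
    for (; prefix_len + i < dst_cap - 1 && src[i] != '\0'; ++i) {
        dst[prefix_len + i] = src[i];
    }
    dst[prefix_len + i] = '\0';  // ensure null termination, always
}
```

For the question's buffers this gives e.g. append_str(group, MAXI + 7, 7, input), and a too-small destination simply truncates instead of overflowing.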
Can horizontally and vertically polarized light combine to become circularly/elliptically polarized light? Well, we know that circularly/elliptically polarized light is made up from orthogonal components. So is it possible then to create circularly/elliptically polarized light by combining horizontally and vertically polarized light? It seems to make perfect sense to me. EDIT: Clarification: I mean by creating circularly/elliptically polarized light in the laboratory (through beam-splitters?, I don't know) by combining different light beams of horizontally/vertically polarized light. Yes, it can. Clarification: Are you asking whether you can express a circular polarized wave in a basis of linearly polarized components? Because that's certainly true. Yes, this is possible. The device that makes this possible is called a polarizing beam splitter, which will transmit or reflect light according to its polarization. Thus, it will split diagonal or circular light into its horizontal and vertical components, and when used in reverse it will undo the process (it has to). Note, however, that you will in general require a pretty exceptional interferometric stability to achieve this. You certainly require both beams to originate from the same source so that they have a definite phase relationship to each other; you would split the beam in two, polarize it in different directions, add a delay stage to control the relative phase, and then combine them using a polarizing beam splitter. The thing is, though, that you need the relative delay to be very tightly controlled, as a few tens of nanometers of difference in the path length will change the polarization from diagonal to circular. This is essentially doable but it is fiddly, and requires very careful alignment - but that is optical physics in a nutshell.
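The superposition described above can be written explicitly. This is the standard textbook decomposition, not something taken from the thread itself: equal-amplitude horizontal and vertical components with a quarter-wave (90°) relative phase give

```latex
\vec{E}(t) = E_0\,\hat{x}\cos(\omega t) + E_0\,\hat{y}\cos\!\left(\omega t - \tfrac{\pi}{2}\right)
           = E_0\left[\hat{x}\cos(\omega t) + \hat{y}\sin(\omega t)\right],
```

so that E_x^2 + E_y^2 = E_0^2 at all times: the field vector rotates at constant magnitude, i.e. circular polarization. For a general relative phase and unequal amplitudes the tip of the field traces an ellipse, reducing to linear (diagonal) polarization when the phase difference is zero — which is why the delay stage mentioned in the answer controls the output polarization state.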
How to use EndInvoke when events/delegates called are not your responsibility I currently have a class that receives information constantly from an API. When it has received that information it fires an event/delegate which other classes can subscribe to. Because I don't want to block the thread the class is running on, I use delegate.BeginInvoke to fire the event, e.g. object UpdateLock=new object(); List<Action<object, UpdateArgs>> UpdateSubscribersList = new List<Action<object, UpdateArgs>>(); public void UpdateSubscribe(Action<object, UpdateArgs> action) { lock (UpdateLock) { UpdateSubscribersList.Add(action); } } public bool UpdateUnSubscribe(Action<object, UpdateArgs> action) { lock (UpdateLock) { return UpdateSubscribersList.Remove(action); } } public virtual void onUpdate(UpdateArgs args) { lock (UpdateLock) { foreach (Action<object, UpdateArgs> action in UpdateSubscribersList) { action.BeginInvoke(this, args, null, null); } } } This is just an example. Don't worry about the use of the list rather than a multicast delegate - there's some other code which caused me to use that. Anyway I have two questions: 1) where should I put EndInvoke so it doesn't block the thread? I believe this question has already been asked, but I'm not so clear on it, and anyway it's not my main question. 2) The main purpose of EndInvoke is to clean up the thread and handle exceptions. Now cleaning up the thread is fine, but how can I handle exceptions? I have no idea which code will be called by these delegates, and that code is not my responsibility. So is there any way I can get the subscriber to these events to have to deal with the clean-up code and the exceptions, rather than my class? Or is there any way that he can access anything returned by EndInvoke at all, so that if he wants to he can deal with any exceptions? Not really a solution to your question but why don't you use actual events? 
Well, to use BeginInvoke on an event you have to get the list of delegates from it anyway, and since I needed a wrapper method for the subscribing/unsubscribing for other reasons, it ended up simpler just using delegates. I added exception-handling to my answer. I think this pattern should work for you. public void StartAsync(Action a) { a.BeginInvoke(CallBack, a); // Pass a Callback and the action itself as state // or a.BeginInvoke(Callback, null); then see alternative in Callback } private void CallBack(IAsyncResult ar) { // In the callback you can get the delegate Action a = ar.AsyncState as Action; // or // Action a = ((AsyncResult)ar).AsyncDelegate as Action (AsyncResult is in System.Runtime.Remoting.Messaging) try { // and call EndInvoke on it. a?.EndInvoke(ar); // Exceptions will be re-thrown when calling EndInvoke } catch( Exception ex ) { Log.Error( ex, "Exception in onUpdate-Action!" ); } } You should be able to adapt this to your Action<object, UpdateArgs> delegate. I used StartAsync just for brevity. This would be inside the for loop of your onUpdate Method, of course. Mind that Callback will be called on the Async-Thread! If you want to update GUI elements here, you'll need to marshal that back to the GUI-Thread. If you add an event to your class public event EventHandler<Exception> OnError; you can publish those exceptions : catch( Exception ex ) { Log.Error( ex, "Exception in onUpdate-Action!" ); OnError?.Invoke( a, ex ); } Thanks! This is for the first question right, not the second? Yes. The part about exceptions - well, I'd have to look that one up, too. But maybe this will get you to a point where you can find it out yourself? By the way, I've also seen it like this: a.BeginInvoke(a.EndInvoke, null); But if anything is going strange, this will be hard to debug. And maybe you want to do some other stuff in the callback as well ... 
I suppose I could create a public dictionary/lookup of errors using the subscribers as a key somehow, so that at least if errors do occur they can be found by the subscribers, although admittedly it loses a lot of the advantages of exceptions. Well you could add an OnError event with the subscriber as "sender" and the Exception in the EventArgs. And that would automatically run asynchronously since it's on the same thread as the invoked delegate. If I am not mistaken, yes. Thanks! I'll see what's practical to implement and see whether it's worth the effort.
I want to create a complex slicer in Power BI that will not display the values selected in another slicer I have two slicers on a page with the names of cities; the input data in both slicers is the same. I need to be able to select an object in the first slicer, then go to the second slicer and select from a list of objects that excludes the object selected in the first slicer. I tried to create a copy of the table from which the data for the slicers is taken and leave only the column with the data I need, then use this column to fill both slicers, but an error appears in the rest of the charts. I also tried to write a script in Python: I represented the data as a dictionary and created a loop that deleted the data selected in the first slicer and presented the result I needed in the second slicer. BUT: if you try to repeat the operation and select another value in the first slicer, it does not restore the previous view of the second slicer, and it turns out ... https://wampi.ru/image/Ryw97yQ I will be very grateful for the support provided
How do I make the border full color (gray) instead of half/half in CSS/HTML? I am a newbie trying to learn HTML + CSS. I would like this border to be in full gray color, but as we can see it's half/half - I do not understand why. Thanks. My console and image below: Reset border-style to solid; buttons have an outset style set by default, which is why it shows a lighter and a darker color - this simulates a sort of shadow for a raised-button effect. Use the universal border property .download-button { height: 50px; width: 140px; border-radius: 7px; border: 2px solid grey; color: grey; font-weight: bold; font-size: 20px; cursor: pointer; } <button class="download-button">Download</button> Thank you Vladislav, it works, have a nice day ! It might inherit from a parent div; please add !important to force-apply the CSS. Otherwise you can use the development tools to check it (the Styles and Computed tabs) Thank you for your answer, I am using Visual Studio Code and my "window of program" looks different; maybe I'm just too new to understand your answer, but thank you sir, have a nice day.
making a python function input a list I am coding a function and want to turn this input, [], [], ["pepperoni", "pepperoni"], wings=[10, 20], drinks=["small"] into a call of the function cost_calculator. The problem is it is not letting me pass it in, since the words wings and drinks are in it. Trying to convert it into a list gives me a syntax error. I also cannot take the words wings and drinks out of the input. my code: def cost_calculator(x): cost=0 drinks={"small":2.00,"medium":3.00,"large":3.50,"tub":3.75} wings={10:5.00,20:9.00,40:17.50,100:48.00} toppings={"pepperoni":1.00,"mushroom":0.50,"olive":0.50,"anchovy":2.00, "ham":1.50} pizza={[]:13.00} for i in x: if i in pizza: cost=cost+pizza[i] elif i in toppings: cost=cost+toppings[i] elif i in drinks: cost=cost+drinks[i] elif i in wings: cost=cost+wings[i] else: break return cost when inputting cost_calculator([], [], ["pepperoni", "pepperoni"], wings=[10, 20], drinks=["small"]) it gives me cost_calculator() got an unexpected keyword argument 'wings' I need to use [] for showing the value of a pizza, which should make cost=cost+13.00. How can I overcome this? Please don't post images of code on SO when asking a question. Hi, please copy&paste the code from the screenshot into your question. 
Not totally sure what the arguments to the function are, but if you want to use keyword args, use the ** syntax: So, I think I understand what the args can be (in this order): Any number of [], each representing one pizza One list of toppings (apparently this is optional) Optional named lists (wings, drinks) So we can use variable number of args and keyword args: # Tuple of var args and a dictionary of keyword args def cost_calculator(*args, **kwargs): drinks = {"small": 2.00, "medium": 3.00, "large": 3.50, "tub": 3.75} wings = {10: 5.00, 20: 9.00, 40: 17.50, 100: 48.00} toppings = {"pepperoni": 1.00, "mushroom": 0.50, "olive": 0.50, "anchovy": 2.00, "ham": 1.50} cost = 0 coupon_value = 0.0 # args is a tuple of unnamed parameters for a in args: # [] has 0 length so it's a pizza if 0 == len(a): cost += 13.00 # Otherwise it must be toppings else: cost += sum([toppings[x] for x in a if x in toppings]) # Step through the keyword args for key,value in kwargs.items(): if key == "drinks": cost += sum([drinks[x] for x in value if x in drinks]) elif key == "wings": cost += sum([wings[x] for x in value if x in wings]) elif key == "coupon": coupon_value = value else: break return cost * (1.0 - coupon_value) c = cost_calculator([], [], ["pepperoni", "pepperoni"], wings=[10,10], drinks=["small"], coupon=0.1) print(c) the thing is that i have to use an empty set to signify pizza and toppings should just be in the set so the input should be something like this: [],[], ["pepperoni","olives"], wings=[20],drinks=["tub"] where [] should tell to add 13 to the cost while for the others you will have to iterate it through the list. this works with multiple amount of items but when you use cost_calculator([]) it outputs zero @AnveshSunkara Updated again. In the future you should try to explain the problem in more detail. 
what if there was another input, coupon, where the input is [], [], ["pepperoni","olives"], wings=[20], drinks=["tub"], coupon=0.1... then the last thing should be s = 1 - coupon and cost = cost * s, but this doesn't iterate over floats. @AnveshSunkara Updated for coupon. yeah, but this doesn't seem to work when the input is just a tub-sized drink and a coupon of 0.1
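The "unexpected keyword argument" error in the question comes from the function signature, not the data: positional lists and keyword lists need *args/**kwargs to be accepted together, as the answer's signature shows. A minimal sketch of just that mechanism (the function name show is my own, not from the thread):

```python
def show(*args, **kwargs):
    # args collects the positional lists; kwargs collects wings=..., drinks=...
    return args, kwargs

args, kwargs = show([], [], ["pepperoni", "pepperoni"],
                    wings=[10, 20], drinks=["small"])
# args   -> ([], [], ['pepperoni', 'pepperoni'])
# kwargs -> {'wings': [10, 20], 'drinks': ['small']}
```

Since a signature like def cost_calculator(x) declares only one positional parameter, Python has nowhere to put wings=... and raises the TypeError seen in the question.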
common-pile/stackexchange_filtered
Resetting NSMutableArray What is the best and quickest way to reset an NSMutableArray? -[NSMutableArray removeAllObjects] doesn't work for you? Stupidly, it appears to actually release the object... What is the point in that? Anyway, I got round it by putting a [NSMutableArray retain] just before the removeAllObjects. Joe - in that case, your code is broken. See http://developer.apple.com/mac/library/documentation/cocoa/conceptual/MemoryMgmt/Articles/mmObjectOwnership.html It's strange that no one has mentioned just releasing the NSMutableArray and creating a new one. I am really curious as to which one would be faster, given a large number of objects. removeAllObjects removeAllObjects, assuming by 'reset' you mean you just want to empty the array. If you are attempting to do what I think you are attempting to do, which is to keep an array empty but not release it, or at least to make it available the next time it is needed, then first you need to set a variable or a property within your class: NSMutableArray *mutableArray; Next, add this code before the position at which you will need the empty array: if (!mutableArray) { mutableArray = [[NSMutableArray alloc] init]; } Now you can safely call [mutableArray removeAllObjects]; without fear that the array will become unavailable once empty.
WebApi - The request contains an entity body but no Content-Type header I'm trying to accept application/x-www-form-urlencoded data on my WebApi endpoint. When I send a request with Postman that has this Content-Type header explicitly set, I get an error: The request contains an entity body but no Content-Type header

My controller:

[HttpPost]
[Route("api/sms")]
[AllowAnonymous]
public HttpResponseMessage Subscribe([FromBody]string Body)
{
    // ideally would have access to both properties, but starting with one for now
    try
    {
        var messages =<EMAIL_ADDRESS>Body);
        return Request.CreateResponse(HttpStatusCode.OK, messages);
    }
    catch (Exception e)
    {
        return Request.CreateResponse(HttpStatusCode.InternalServerError, e);
    }
}

The Postman capture: What am I doing wrong?

If you look at the request message, you can see that the Content-Type header is being sent like this: Content-Type: application/x-www-form-urlencoded, application/x-www-form-urlencoded. So, you are adding the Content-Type header manually, and Postman is adding it as well, since you have selected the x-www-form-urlencoded tab. If you remove the header you have added, it should work. I mean, you will not get an error, but then binding will not work because of the simple type parameter [FromBody]string Body. You will need to have the action method like this:

public HttpResponseMessage Subscribe(MyClass param)
{
    // Access param.Body here
}

public class MyClass
{
    public string Body { get; set; }
}

Instead, if you insist on binding to string Body, do not choose the x-www-form-urlencoded tab. Instead choose the raw tab and send a body of =Test. Of course, in this case, you have to manually add the `Content-Type: application/x-www-form-urlencoded` header. Then, the value in the body (Test) will be correctly bound to the parameter. You're like the Batman of WebAPI, Badri. Thanks for saving my day again.
How to read this string into jsoncpp's Json::Value I have such a JSON string:

{"status":0,"bridge_id":"bridge.1","b_party":"85267191234","ref_id":"20180104151432001_0","function":{"operator_profile":{"operator":"aaa.bbb"},"subscriber_profile":{"is_allowed":true,"type":8},"name":"ServiceAuthen.Ack"},"node_id":"aaa.bbb.collector.1"}

How can I read it into the jsoncpp lib's Json::Value object? I found such code by searching Stack Overflow:

std::string strJson = "{\"mykey\" : \"myvalue\"}"; // need to escape the quotes
Json::Value root;
Json::Reader reader;
bool parsingSuccessful = reader.parse( strJson.c_str(), root ); // parse process
if ( !parsingSuccessful )
{
    std::cout << "Failed to parse" << reader.getFormattedErrorMessages();
    return 0;
}
std::cout << root.get("mykey", "A Default Value if not exists" ).asString() << std::endl;
return 0;

But how do I convert my string to this form? {\"mykey\" : \"myvalue\"} Thank you for any help.

You don't. The backslash characters are escape characters used to represent a " in C++ source code (without them the " would mean "this is the end of this C++ string literal"). The JSON (which isn't C++ source code) should not have the escape characters in it.
Displaying gazebo realsense image using ROS, depth image looks weird and has a different size from the raw image I'm trying to display images captured by the realsense in my gazebo simulation in an OpenCV window. But the depth image shows up colorful, even though rviz shows it in black and white. And the raw and depth images from the same cam have different sizes, despite not resizing. I want the simulation to have the same output as a real realsense cam does. How can I fix it? Below are my image-displaying Python scripts, the launch file, and a picture of the output images. Just in case, here's the git: https://github.com/brian2lee/forklift_test/tree/main The realsense d435 add-on used in gazebo: https://github.com/issaiass/realsense2_description https://github.com/issaiass/realsense_gazebo_plugin Edit: The colored depth map has been solved by @Christoph Rackwitz; I updated the code, and it now shows a normal depth map, but the size problem remains. Images (from top left: 1. opencv raw, 2. opencv depth, 3. rviz depth):

im_show.py:

#!/usr/bin/env python3
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class ImageConverter:
    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/color/image_raw", Image, self.callback)

    def callback(self, data):
        try:
            # Convert the ROS Image message to a CV2 image
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
            return
        # Display the image in an OpenCV window
        cv2.imshow("Camera Image", cv_image)
        cv2.waitKey(3)

def main():
    rospy.init_node('image_converter', anonymous=True)
    ic = ImageConverter()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

img_show_depth.py:

#!/usr/bin/env python3
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class DepthImageConverter:
    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/depth/image_raw", Image, self.callback)

    def callback(self, data):
        try:
            # Convert the ROS Image message to a CV2 depth image
            cv_image = self.bridge.imgmsg_to_cv2(data, desired_encoding="passthrough")
        except CvBridgeError as e:
            print(e)
            return
        # Normalize the depth image to fall within 0-255 and convert it to uint8
        cv_image_norm = cv2.normalize(cv_image, None, 0, 255, cv2.NORM_MINMAX)
        depth_map = cv_image_norm.astype(np.uint8)
        # Display the depth image in an OpenCV window
        cv2.imshow("Depth Image", depth_map)
        cv2.waitKey(3)

def main():
    rospy.init_node('depth_image_converter', anonymous=True)
    dic = DepthImageConverter()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

gazebo.launch:

<?xml version="1.0"?>
<launch>
  <param name="robot_description" command="xacro '$(find forklift)/urdf/forklift.urdf.xacro'"/>
  <param name="pallet_obj" command="xacro '$(find pallet)/urdf/pallet.urdf.xacro'"/>
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/>
  <node name="joint_state_publisher_gui" pkg="joint_state_publisher_gui" type="joint_state_publisher_gui"/>
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find env_world)/world/test.world"/>
    <arg name="paused" value="false"/>
    <arg name="use_sim_time" value="true"/>
    <arg name="gui" value="true"/>
    <arg name="headless" value="false"/>
    <arg name="debug" value="false"/>
  </include>
  <node name="spawning_forklift" pkg="gazebo_ros" type="spawn_model" args="-urdf -model forklift -param robot_description -z 0.160253"/>
  <node name="spawning_pallet" pkg="gazebo_ros" type="spawn_model" args="-urdf -model pallet -param pallet_obj -x 5 -z 0.001500 "/>
  <node name="rviz" pkg="rviz" type="rviz" args="-d $(find forklift)/rviz/cam.rviz" required="true" />
  <!-- <? default rviz ?> <node name="rviz" pkg="rviz" type="rviz" args="-d $(find realsense2_description)/rviz/urdf.rviz" required="true" /> -->
  <node name="img" pkg="img" type="img_show.py" output="screen" args="$(find img)/src/img_show.py"/>
  <node name="img_depth" pkg="img" type="img_show_depth.py" output="screen" args="$(find img)/src/img_show_depth.py"/>
  <!-- <node name="img_both" pkg="img" type="img_show_both.py" output="screen" args="$(find img)/src/img_show_both.py"/> -->
</launch>

Have you tried it with real camera input? It's a stereo camera, so in a lot of ways I'd actually expect this to be the expected behavior. Yeah, you're right: according to https://www.intelrealsense.com/compare-depth-cameras/ , depth and raw have different resolutions. I wasted so much time by not realizing that the "wrong" output is actually the correct one. The colorization happens because you call applyColorMap(). Your code shows no resizing of the images, no setup to make the windows resizable, and no fixed sizes going into the creation of any images. From what you present so far, nobody can say what's going on there. You should look into your ROS and gazebo parts. That fixed the colored depth map problem; that's an oopsie on me. The remaining issues (sizes) are in parts of your setup that you haven't presented. I'd bet it's something in gazebo. It draws roughly the same views, but with differing aspect ratios. Any suggestion for finding where the problem might be? I have been looking through the code with no results. Since I didn't resize the window in my code, I fear that the output of the cam is a different size too, and this will cause lots of issues in future work. I'm here because of the OpenCV tag, not the gazebo or ROS tags. The issue does not lie in anything related to OpenCV, as far as you've shown. Your issue is gazebo/ROS.
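As the comments conclude, the color and depth streams genuinely have different native resolutions, so the size mismatch is expected. If the goal is just to display (or later process) the two images at a common size, upsampling the depth map to the color frame's resolution is enough. Below is a minimal sketch using a synthetic depth array (the sizes are illustrative, not the camera's actual modes); in the ROS callback you would do the same thing with cv2.resize(..., interpolation=cv2.INTER_NEAREST), and true per-pixel alignment on real hardware additionally needs the camera intrinsics (e.g. librealsense's align block):

```python
import numpy as np

# Illustrative sizes only: pretend color is 640x480 and depth is 424x240.
color_h, color_w = 480, 640
depth = np.random.randint(0, 5000, size=(240, 424), dtype=np.uint16)

# Normalize to 0-255 for display (same idea as the cv2.normalize call above).
d = depth.astype(np.float64)
depth_norm = ((d - d.min()) / max(d.max() - d.min(), 1) * 255).astype(np.uint8)

# Nearest-neighbour upsample to the color frame's size; cv2.resize with
# INTER_NEAREST does the equivalent in one call.
rows = np.arange(color_h) * depth.shape[0] // color_h
cols = np.arange(color_w) * depth.shape[1] // color_w
depth_display = depth_norm[rows[:, None], cols]

print(depth_display.shape)  # (480, 640) -- now matches the color frame
```

Resizing like this only makes the windows match; it does not register depth pixels to color pixels, which is a separate calibration problem.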
Creating DbContext outside of Asp.Net core project I'm fairly new to Dependency Injection, ASP.NET Core and EF Core, so someone please point me to the right direction. Currently, I have the following codes under the startup.cs: services.AddDbContext<MyDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); This works fine if everything is within one project (let's say, SomeProject.WebApp). However, I would like to have MyDbContext as well as any data operation in a separated project, let's call it SomeProject.Data. So far, I have no trouble creating my DbContext class: public class MyDbContext : DbContext { public MyDbContext(DbContextOptions<MyDbContext> options) : base(options) { } } However, within SomeProject.Data, whenever I would like to do any data operation: using (var db = new MyDbContext(How do I fill in here?)) I think it is looking for some kind of connection string, however the connection string is located inside the appsettings.json which is in the SomeProject.Webapp. DbContext has a parameterless constructor you can also use. public MyDbContext() : base(options) is not valid, it is still looking for the DbContextOptions at the options part Try public MyDbContext() : base() instead.
How to create and login external users authenticated against a third-party Oauth2 server? I'm working on an SSO implementation where I have to give access to a Drupal 7 site to external users authorized/authenticated by Oauth2. I'm struggling with the process of taking the supplied user information from the Oauth2 server and creating a drupal user from that. As I understand the application flow Using an authorization grant flow, users are redirected to external oauth2 server for authentication, server responds with authorization code and scope Drupal 7 site takes authorization code and forms post for authorization token. Auth token is used to get user info. User info is processed from JSON, formatted to array, and then fed into user_external_login_register, success! Looking at the documentation the user_external_login_register page it looks like the user object needs to be provided a password. What should that be since in an Oauth2 scenario the D7 client site shouldn't have access to the user's password? It also looks like the two examples assume that some of the credentials are being supplied and processed by the user login form. In my case all of the credentials are being supplied by an API and no drupal form is being submitted. How do I shape this to my use needs? Or, do I need to create a second form that gets submitted programmatically? Finally, the native user base has many required fields to register and required admin approval on registration. I don't need that info for the external users, and the users don't need to be approved, they should just get logged in. In theory, there are two general login flows. The first is a self-initiated authentication against the Oauth2 server; the second the user is redirected and authenticated from the partner organization site. Either way, my logic begins when the auth code get provided to the site.
bind to ngModel through clickable options instead of select-option (dropdown) I am trying to build a basic colorpicker with predefined colors. For that, I have an array "colors" with color values as its elements:

public colors = [
  { value: '#ffffff' },
  { value: '#919191' },
  { value: '#555555' },
  // and some more
];

Following some examples on the web, I set up a select-option structure in my html:

<select name="role" [(ngModel)]="item.color">
  <option *ngFor="let color of colors" [value]="color.value">
    <div class="color-box-modal" [style.background]="color.value"></div>
  </option>
</select>

This does create a dropdown menu for the options, though the colors inside don't show up. The class color-box-modal has height and width values, as I did not intend to have a dropdown, but several colored boxes to click on in order to select. Is there an alternative to the select-option structure which allows me to not have a dropdown, but just several colored boxes? Radio buttons/checkboxes are not the desirable way either, as I want to have a clickable field on its own that reacts to being clicked. If there is no alternative, is it possible to do the ngModel binding on a button click?
edit: After testing option 2 in Osman Cea's answer, I now have this:

<ng-template #content let-c="close" let-d="dismiss">
  <i class="close icon" (click)="d('Close click x')"></i>
  <div class="header">
    Choose a new color
  </div>
  <div class="content">
    <label for="col1" class="color-box-modal" style="background-color: #ffffff">
      <input (click)="c('#ffffff')" id="col1" type="radio" class="hidden" [(ngModel)]="item.color" [value]="'#ffffff'">
    </label>
    <label for="col2" class="color-box-modal" style="background-color: #ffff00">
      <input (click)="c('#ffff00')" id="col2" type="radio" class="hidden" [(ngModel)]="item.color" [value]="'#ffff00'">
    </label>
    <label for="col3" class="color-box-modal" style="background-color: #00ffff">
      <input (click)="c('#00ffff')" id="col3" type="radio" class="hidden" [(ngModel)]="item.color" [value]="'#00ffff'">
    </label>
  </div>
  <div class="actions">
    <div class="ui button" (click)="c('Close click cancel')">Cancel</div>
  </div>
</ng-template>

Though the ngModel binding does not seem to work. The whole thing opens in a modal (the template), which in itself works; just the binding to ngModel, as I said, does not. Now, this might not help everyone, but it was my solution in the end. I started with a loop, item of items, where the template in my question was meant for one single item. I solved, or rather worked around, the binding situation by just moving each item to its own component, somewhat like this:

<div *ngFor="let item of items">
  <app-sub-item [item]="item"></app-sub-item>
</div>

Inside, I have several of these:

<label for="col1" class="color-box-modal" style="background-color: #ffffff">
  <input (click)="setColor('#ffffff')" id="col1" type="radio" class="hidden">
</label>

With the following being in the ts file:

setColor(color: string) {
  this.item.color = color;
}

This actually works just fine at the moment. Hope whoever reads this issue can find some use in my solution.
A native HTML select will only render text inside it; any other tag will be ignored, so that's why your boxes are not showing. If you wrap your radio button or checkbox in a <label> with the attribute for equal to an ID given to the <input>, you can basically click anywhere on the label, let's say some adjacent text, and the click will propagate to the input, so the binding will still work. You can create your own custom form controls; check out this article. So you could create a custom color picker form element that will work in any form using template forms or reactive forms. Have a nice day. Will try option 2 shortly; if that doesn't work I'll head on to 3. Once I have something that works I'll report back with the result, but thanks already for the answers. I tested option 2 now, though I have some issues with it; going to edit my original question with them. Are you trying to select one color at a time? If so, the input type should be radio instead of checkbox. I did try it with radio as well (and yes, it's one at a time), though the data binding still does not seem to work. Can you post what your form object looks like? Actually, I don't even have a "form" tag; that might be the issue... In any case, I will update my answer again with the full template in which I have the color-picker. Nevermind, with some restructuring around the whole html I was actually able to do the value change in the typescript part; adding it as an answer (or trying to). +1 for pointing me towards the right way though.
Unable to open database file error on using subsequent queries I have the following code, the first cursor object works fine, but when i do another query and assign it to the flightCursor, it gives the error. Cursor cursor = database.query( CityAndAirportsTable.notificationsTable, new String[] { CityAndAirportsTable.notifyFlightId }, null, null, null, null, "Id DESC" ); cursor.moveToFirst(); while( !cursor.isAfterLast() ){ String id = String.valueOf( cursor.getInt( 0 ) ); Cursor flightCursor = database.query( CityAndAirportsTable.flightTable, new String[] { CityAndAirportsTable.fromDestinationCode, CityAndAirportsTable.toDestinationCode, CityAndAirportsTable.currentPrice }, CityAndAirportsTable.flightId + "=" + id, null, null, null, null ); } Here in the flightCursor = database.query, i get the error. Logs 03-27 23:49:09.628: E/SQLiteLog(2296): (14) cannot open file at line 30046 of [9491ba7d73] 03-27 23:49:09.628: E/SQLiteLog(2296): (14) os_unix.c:30046: (24) open(/data/data/com.flightapp.myapp/databases/Application_DB-journal) - 03-27 23:49:09.628: E/SQLiteLog(2296): (14) cannot open file at line 30046 of [9491ba7d73] 03-27 23:49:09.628: E/SQLiteLog(2296): (14) os_unix.c:30046: (24) open(/data/data/com.flightapp.myapp/databases/Application_DB-journal) - 03-27 23:49:09.628: E/SQLiteLog(2296): (14) statement aborts at 11: [SELECT From_Destination_Code, To_Destination_Code, Current_Price FROM Flights WHERE Id=2] unable to open database file 03-27 23:49:09.629: E/SQLiteQuery(2296): exception: unable to open database file (code 14); query: SELECT From_Destination_Code, To_Destination_Code, Current_Price FROM Flights WHERE Id=2 There are similar question on stackoverflow, but in my case the first query works but the second one fails. Close your cursor when you're done with it! 
I think you've just got too many cursor objects open:

Cursor cursor = database.query(
        CityAndAirportsTable.notificationsTable,
        new String[] { CityAndAirportsTable.notifyFlightId },
        null, null, null, null, "Id DESC" );

cursor.moveToFirst();
while( !cursor.isAfterLast() ){
    String id = String.valueOf( cursor.getInt( 0 ) );
    Cursor flightCursor = database.query(
            CityAndAirportsTable.flightTable,
            new String[] { CityAndAirportsTable.fromDestinationCode,
                           CityAndAirportsTable.toDestinationCode,
                           CityAndAirportsTable.currentPrice },
            CityAndAirportsTable.flightId + "=" + id,
            null, null, null, null );

    /* Close the cursor here! */
    flightCursor.close();
    /* ---------------------- */

    /* Also advance the outer cursor -- without moveToNext() this loop never ends */
    cursor.moveToNext();
}
/* And close the outer cursor once you're done with it */
cursor.close();

Hopefully this fixes your issue.
Simple sortable list with rails and a :pos column inside a table? Im looking for a simple sortable list solution or method to update the position of max 50 rows. I tried several gems but they all store the id as a big value ( like reorder) or in a seperate table. What would be a solution to update a table with questions where id => x and update the pos column also taking into consideration multiple users at the same time. Or is there a gem for this? I'm not sure about this but would it be possible to have another column on your table and update that column when you want to reorder?
multiple segues to the same view controller I'm really new to coding, so if this is stupid please be gentle :P. What I'm trying to achieve with swift coding is to have 2 view controllers that can pass data based on the sending segue. VC1 is setup with 2 labels and 2 buttons. Label 1 is player 1 name, label 2 is player 2 name. Button 1 is to choose player 1 from a global list of names in VC2 and button 2 is the same for player 2 VC2 has a picker view set up with a list of names and a button to "choose" the name chosen in the pickerView. I know how to set up all the buttons, pickers, and labels and even how to send the data back with a prepareForSegue, however what i don't know how to do is tell the VC2 "choose" button to send it to playerOneSegue or playerTwoSegue based on which button was chosen in VC1. Any help will be appreciated, again I'm sorry if this is stupid but I've been stuck for a while now and haven't been able to find anything online to help. I'm not even sure if it's the way I should be doing this. Half of me wants to just set up an alert for each button to not even jump to the other VC, but the other part wants to figure out how to do this because I'm sure there must be a way lol. I read the question twice, but its not clear what are you trying to achieve, can you please rephrase or elaborate? What are you trying to do with VC2? All VC2 is is a viewController with a PickerView that has a list of names and a button. Hmmmm, not sure how to explain it easily, basically i have 2 players, to choose the name for either of those players i want to be able to go to the same picker VC and when i pick the player and click the button it loads the player name in VC1 label according to if I'm loading player one or player two Then why go to VC2 at all? You can place the pickerView in VC1. 
I think that's probably the right way to do it, but I wanted to have another screen where i could pull from the picker or have an add button to add new names to the picker If you have two segues to same VC, you can give each segue unique identifier, and distinguish between the senders based on identifiers. Set tag for buttons (say 1 for button1 and 2 form button2) override func prepareForSegue(segue: UIStoryboardSegue!, sender: AnyObject!) { if (segue.identifier == "firstIdentifier") { var VC2 : VC2 = segue.destinationViewController as VC2 VC2.buttonTag = sender.tag } if (segue.identifier == "secondIdentifier") { var VC2 : VC2 = segue.destinationViewController as VC2 VC2.buttonTag = sender.tag } } In VC2, declare a variable buttonTag var buttonTag = 0 and wherever you are performing the segue in VC2, just check for the value of buttonTag if buttonTag == 1 { //segue caused by button1 of VC1 } else if buttonTag == 2 { //segue caused by button2 of VC1 } else { //segue caused by something else } I hope that is what you are trying to achieve Thanks for the quick response. This is exactly how I have it setup right now I believe. What I don't know what to do with is the "choose" button in VC2. Currently I have a "performSegueWithIdentifier("playerOneSegue", sender: self)" somehow i need to tell that button to perform a different segue based on which segue led to the VC opening Not quite sure exactly what you are asking. But it sound like you want to send data from vc1 to vc2 which then decided which segue to run in vc2. What I would do: in vc1 prepareForSegue i would set a property in vc2 which said something about what to do with the result in vc2 based on the button in vc1. 
in your prepareForSegue: if segue.identifier == "MyVc2Segue" { let yourNextViewController = (segue.destinationViewController as CommentViewController) // Set variable yourNextViewController.myVar = somethingVar } Then in your vc2 I would use a outlet instead of relying on segues, and in the outlet take a stance/decision on which segue you should perform, based on the variable mentioned above.
Subtraction of trigonometric functions I was working on a problem booklet and came across the following equation. $$\sqrt2\sin(2x)-\cos(2x)=\sqrt3\sin(2x-a)$$ $a \in \mathbb{R}$ is a specific value that I'm supposed to find, but I don't see how to make the first part look like the second part in the first place, so I can't even get there What is $a$? Looks like this equation isn't true generally, but you can probably solve for some value of $a$ so that the equation holds. a is a specific value that I'm supposed to find, but I don't see how to make the first part look like the second part in the first place, so I can't even get there. To find the suitable $a$, you will be probably looking at the right-hand side and using the identity $\sin(s-t)=\sin s\cos t-\cos s\sin t$. Where does the $\sqrt{3}$ come from though? The standard way to do this is to write $$ \sqrt2\sin(2x)-\cos(2x)=A\sin2x\cos a-A\cos2x\sin a $$ with $A>0$, because the addition formula says the expression is $$ A\sin2x\cos a-A\cos2x\sin a=A\sin(2x-a). $$ We can choose $$ A\cos a=\sqrt{2},\quad A\sin a=1 $$ that gives $$ A^2=A^2\cos^2a+A^2\sin^2a=2+1=3. $$ Therefore $A=\sqrt{3}$ and so $$ \cos a=\frac{\sqrt{2}}{\sqrt{3}},\quad \sin a=\frac{1}{\sqrt{3}}. $$ Since both sine and cosine are positive, you know that you can take $0<a<\pi/2$, thus $$ a=\arcsin\frac{1}{\sqrt{3}}. $$
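The derived value can also be checked numerically (a quick sketch): with $a=\arcsin\frac{1}{\sqrt{3}}$, the two sides of the identity agree for every $x$ up to rounding error.

```python
import math

a = math.asin(1 / math.sqrt(3))  # the value derived above, about 0.6155 rad

def lhs(x):
    return math.sqrt(2) * math.sin(2 * x) - math.cos(2 * x)

def rhs(x):
    return math.sqrt(3) * math.sin(2 * x - a)

# Spot-check a handful of x values; the sides match to rounding error.
for x in [0.0, 0.3, 1.0, 2.5, -1.7]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("identity holds")
```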
How to replace a character with another character in a list of strings I have the following list of strings: a = ['1234!zf', '5678!ras', 'abcd!ggt', 'defg!z', 'hijk!', 'lmnk!reom'] I want to replace the 4th character with another character (the character will always be the same in every string, I just don't know it; in this case it's '!'), for example '&'. How can I do that? The resulting list of strings should look like this: a = ['1234&zf', '5678&ras', 'abcd&ggt', 'defg&z', 'hijk&', 'lmnk&reom'] I know that strings in Python are immutable, and I honestly have no idea how to work with them. I tried all sorts of things, from foreach, using for with an iterator, using .join(), using replace(), using list[4:], list[:4], and so many more, and I just can't make this simple task work in Python. Note: The resulting list must be in list 'a'. Is it always a "!" (and there is only one "!") that you want to replace? "I know that strings in python are immutable and I honestly have no idea how to work with them." Did you try reading the documentation, in order to understand what those methods actually do? You can also find out about them at the interactive prompt, using e.g. help(str.replace). "using list[4:]" This is not relevant, because it is slicing the list, not the individual strings in the list. @DanielHao the question is too broad because it involves two separate procedures: doing something with each element of a list, and doing the replacement in a way that actually has an effect (i.e., keeping in mind that .replace will create a new string, doing something useful with that new string). I added a duplicate for each, so that the question can be answered quickly without attracting more attempts that don't help build the site; but questions like this should end up closed and deleted as "needs more focus" instead. Totally. Got the points - after seeing 2nd related links. Appreciate your insight on it. @tdelaney no, it's not always a "!", and there is only one I want to replace.
@KarlKnechtel I'll check the documentation out, thanks! You can use a for loop to iterate through every index in list 'a', then assign a new string at each position such that new_string_i = prev_string[:4] + "&" + prev_string[5:]. Code snippet:

a = ['1234!zf', '5678!ras', 'abcd!ggt', 'defg!z', 'hijk!', 'lmnk!reom']
for i in range(0, len(a)):
    s = a[i]  # renamed from 'str' to avoid shadowing the built-in
    a[i] = s[:4] + "&" + s[5:]
print(a)

There are, of course, a lot of ways to approach this problem, but I find this one the most intuitive and easy to comprehend. There are multiple ways to do this, but string slicing is a good way to go. In this example, each string is sliced with the new character added at offset 4. It's built into a new list, and then a slice encompassing the entire original list is replaced with the new values.

>>> a = ['1234!zf', '5678!ras', 'abcd!ggt', 'defg!z', 'hijk!', 'lmnk!reom']
>>> a[:] = [s[:4] + "&" + s[5:] for s in a]
>>> a
['1234&zf', '5678&ras', 'abcd&ggt', 'defg&z', 'hijk&', 'lmnk&reom']
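If the position ever needs to vary (the question fixes it at index 4), the same slicing trick generalizes into a small helper; a sketch (the function name replace_at is made up for illustration):

```python
def replace_at(s, i, ch):
    """Return s with the character at index i replaced by ch."""
    if not 0 <= i < len(s):
        raise IndexError(f"index {i} out of range for {s!r}")
    return s[:i] + ch + s[i + 1:]

a = ['1234!zf', '5678!ras', 'abcd!ggt', 'defg!z', 'hijk!', 'lmnk!reom']
a[:] = [replace_at(s, 4, "&") for s in a]
print(a)  # ['1234&zf', '5678&ras', 'abcd&ggt', 'defg&z', 'hijk&', 'lmnk&reom']
```

The explicit bounds check turns a silent off-by-one into a clear error, which matters if some strings turn out to be shorter than expected.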
How to display records outside of a form I am new to ASP.NET. Here I have a webpage with 2 sections, left and right. I have to fill in a form on the left side, and on the right side I have to display some records that come from a database. (See the screenshot.) To my knowledge, I need to use a form tag to submit the form. At the same time, I have to display records on the right side using a GridView. A GridView also requires a form tag. How can I use two form tags on an asp.net page, or is there any option to display records outside of a form? I am stuck with this issue. I welcome your help on this problem to learn more in asp.net. Thanks, screenshot Based on your wording, I assume this is ASP.NET WebForms. If this is the case, and I'm remembering correctly, you cannot have more than one server-side <form> tag per page. You don't need to put 2 form tags in your aspx. All you need to do is bind the data every time the form loads and on every event you handle (for example, after submitting your new data). After that, the page will load and show the data that you entered. For example, you can put this code inside the submit button's clicked event: Gridview1.DataSource = Yoursource; GridView1.DataBind(); If you want to load the data every second, that will be different, since aspx C# runs at the server and needs a postback before doing anything. So I'll suggest using something like Ajax or the ASP UpdatePanel to separately load the data every second without posting back. Thank you very much for the effort and help. There are a couple of ways to handle this. Actually, a bunch :) When the left side is submitted, just then bind data to the right-hand side. Both sets of content are under the same form tag - you can include the form runat=server in a master page so everything is contained within. You can use Ajax to dynamically load the right-hand side as well. I'll post a URL here as a ref in one sec. Just create the right-hand side as a user control you add into your project.
If you are using MVC you can render a child action - though you don't specify. So - in short - include all your controls within the <form runat=server> tags. There are other ways (like the old school way of using ajax control toolkit ala http://www.codeproject.com/Articles/691298/Creating-AJAX-enabled-web-forms which isn't the newer recommended way) and better ways like using single page apps and pure ajax calls but since you are starting out here, let's just keep it simple :) Notice here the form tag is in the master page and all content is contained within the form tag. https://msdn.microsoft.com/en-us/library/wtxbf3hh.aspx
Is it possible to make a microwave frequency oscillator with an LC circuit? Can a Colpitts oscillator circuit consisting of pF value capacitors and nF value SMD inductors be used as a microwave frequency (within 500MHz, 5GHz) oscillator? This seems to be another attempt at your last question, which was unanswered and closed for lack of detail. I'm struggling :'((( At this frequency the path (log of the length/width ratio) is an inductance, but the tolerances dictate the accuracy. Also the area/gap ratio adds capacitance. The standing wave ratio or wavelength also alters the impedance/reactance. Is it possible? Probably, but the devil is in the details. Rather than ask if it is possible, ask how to calculate the required values. Considering there are online calculators, you need not ask. Once you have the theoretical values, then you are in a better position to consider if it is a valid solution. Maybe try describing what you want to do, not how you want to do it. It won't work well to make a discrete 500 MHz oscillator without some experience, and there is little reason why you would want such a sensitive XO when its specs are dependent on geometry and not just the discrete values. As such, the phase noise would be exceptionally high and the frequency tolerance and sensitivity too high, so these are never used. Originally the technology increased dimensions to reduce the error on ratios and used low-loss materials. It all depends on the requirements (i.e. specs). When mixing frequencies this high, the differences are more significant. Instead, high-Q resonators are used mostly over 500 MHz, or ICs with VCXOs in a PLL. High-Q resonators include XTAL, MEMS, dielectric, and others. The materials and process controls for high precision also drive the cost up significantly. Thus there are many tradeoffs for spurious-free, low phase noise, low temp. co., and low initial tolerance microwave oscillators that increase the complexity of the design. 
The skill in design is to learn all these variables to find the best performance that meets your specs at the lowest cost. Although not always best, here are some examples in Xtal osc. or XO's, VCXO's with PLL's.
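As a rough sanity check on the numbers in the question, the resonant frequency of an ideal Colpitts tank can be estimated from the standard LC resonance formula, with the two tank capacitors taken in series. The component values below are hypothetical, chosen only to show that pF capacitors with a few-nH inductor do land in the low-GHz range; whether such a circuit is buildable with usable phase noise and tolerance is the separate question discussed above:

```python
import math

def colpitts_frequency(L, C1, C2):
    """Ideal Colpitts tank resonance: f = 1 / (2*pi*sqrt(L * Ceq)),
    where Ceq is the series combination of the two tank capacitors."""
    c_eq = C1 * C2 / (C1 + C2)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * c_eq))

# Hypothetical values: a 10 nH inductor with two 2 pF capacitors
f = colpitts_frequency(10e-9, 2e-12, 2e-12)
print(f"{f / 1e9:.2f} GHz")  # ~1.59 GHz
```

At these frequencies the parasitic inductance and capacitance of the layout are comparable to the component values themselves, which is exactly the geometry/tolerance problem the answer describes.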
calculating end date from start date in jquery I have a requirement to calculate the end date given a start date and duration. The start date is a date and the duration is a number of years. So the end date will be: start date + duration - 1 day. E.g. if the start date is 15/06/2012 and the duration is 12 months then the end date will be 14/06/2013. How can we achieve this? in years...hello...and did I see 12 "MONTHS"??? possible duplicate of How to add number of days to today's date? I always create 7 functions to work with dates in JS: addSeconds, addMinutes, addHours, addDays, addWeeks, addMonths, addYears. You can see an example here: http://jsfiddle.net/tiagoajacobi/YHA8x/ How to use: var now = new Date(); console.log(now.addWeeks(3)); console.log(now.addYears(1)); console.log(now.addDays(-20)); These are the functions: Date.prototype.addSeconds = function(seconds) { this.setSeconds(this.getSeconds() + seconds); return this; }; Date.prototype.addMinutes = function(minutes) { this.setMinutes(this.getMinutes() + minutes); return this; }; Date.prototype.addHours = function(hours) { this.setHours(this.getHours() + hours); return this; }; Date.prototype.addDays = function(days) { this.setDate(this.getDate() + days); return this; }; Date.prototype.addWeeks = function(weeks) { this.addDays(weeks*7); return this; }; Date.prototype.addMonths = function (months) { var dt = this.getDate(); this.setMonth(this.getMonth() + months); var currDt = this.getDate(); if (dt !== currDt) { this.addDays(-currDt); } return this; }; Date.prototype.addYears = function(years) { var dt = this.getDate(); this.setFullYear(this.getFullYear() + years); var currDt = this.getDate(); if (dt !== currDt) { this.addDays(-currDt); } return this; }; You cannot do this in jQuery, you have to do it in Javascript like so: <script type="text/javascript"> // Today's date in milliseconds var today=new Date().getTime(); // Add the milliseconds in a year (minus one day) to today's date var yearLater=new Date(today + ((31557600 - 86400) * 1000)); 
document.write(yearLater); </script> there won't be any time component in the date value.. it is in format dd/mm/yyyy You don't need jQuery to add dates. Javascript is glorious enough to do that. var myDate=new Date(year,month,date,0,0,0).getTime(); var day_milli= 1000*60*60*24; var newDate=new Date(myDate + day_milli * (duration -1)); alert(newDate); there won't be any time component in the date value.. it is in format dd/mm/yyyy. Even the output I need is in dd/mm/yyyy format. That is why the time component is set to zero. And you should refer to javascript date functions also http://blog.stevenlevithan.com/archives/date-time-format. You can always get a date in your desired format from a date object. http://blog.stevenlevithan.com/archives/date-time-format YOU ARE REALLY LAZY I tried the way you said.. I get newDate as Sat Aug 18 2012 00:00:00 GMT+0530 (India Standard Time) .. but I am not able to format it as dd-mmm-yyyy(18-Aug-2012) An often ignored feature of the Javascript Date object is that setting the individual date attributes (date, month, year, etc) to values beyond their normal range will automatically adjust the other date attributes so that the resulting date is valid. For example, if d is a date then you can do d.setDate (d.getDate () + 20) and d will be the date 20 days later having adjusted months and even years appropriately as necessary. 
Using this, the following function takes a start date and a duration object and returns the date after the specified time period (note: the original used the deprecated getYear/setYear, which return/accept the year minus 1900 and would give wrong results; getFullYear/setFullYear are used instead): function dateAfter (date, duration) { if (typeof duration === 'number') // numeric parameter is number of days duration = {date:duration}; duration.year && date.setFullYear (date.getFullYear () + duration.year); duration.month && date.setMonth (date.getMonth () + duration.month); duration.date && date.setDate (date.getDate () + duration.date); return date; } // call as follows : var date = dateAfter (new Date (2012, 5, 15), {year: 1}); // adds one year console.log (date); date = dateAfter (date, {month:5, date: 4}); // adds 5 months and 4 days console.log (date); date = dateAfter (date, 7 * 4); // adds 4 weeks console.log (date); // to return the end date asked by the op, use var date = dateAfter (new Date (2012, 5, 15), {year: 1, date: -1});
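For comparison, the same "start + N years − 1 day" rule is easy to express outside of JavaScript as well. Here is a language-agnostic sketch in Python; the leap-day handling (clamping Feb 29 to Feb 28 before subtracting the day) is one convention among several, so adjust it to your requirements:

```python
from datetime import date, timedelta

def end_date(start, years):
    """End date = start date + `years` years - 1 day."""
    try:
        anniversary = start.replace(year=start.year + years)
    except ValueError:
        # start was Feb 29 and the target year is not a leap year
        anniversary = start.replace(year=start.year + years, day=28)
    return anniversary - timedelta(days=1)

print(end_date(date(2012, 6, 15), 1))  # 2013-06-14, as in the question
```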
How to implement third Nelson's rule with Pandas? I am trying to implement Nelson's rules using Pandas. One of them is giving me grief, specifically number 3: Using some example data: data = pd.DataFrame({"values":[1,2,3,4,5,6,7,5,6,5,3]}) values 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 5 8 6 9 5 10 3 My first approach was to use a rolling window to check if they are in/decreasing with diff()>0 and use this to identify "hits" on the rule: (data.diff()>0).rolling(6).sum()==6 This correctly identifies the end values (1=True, 0=False): values correct /desired 0 0 0 1 0 1 2 0 1 3 0 1 4 0 1 5 0 1 6 1 1 7 0 0 8 0 0 9 0 0 10 0 0 This misses the first points (which are part of the run) because rolling is a look-behind. Given this rule requires 6 points in a row, I essentially need to evaluate for a given point the 6 possible windows it can fall in and then mark it as true if it is part of any window in which the points are consecutively in/decreasing. I can think of how I could do this with some custom Python code with iterrows() or apply. I am, however, keen to keep this performant, so I want to limit myself to the Pandas API. How can this be achieved? With the following toy dataframe (an extended version of yours): import pandas as pd df = pd.DataFrame({"values": [1, 2, 3, 4, 5, 6, 7, 5, 6, 5, 3, 11, 12, 13, 14, 15, 16, 4, 3, 8, 9, 10, 2]}) Here is one way to do it with Pandas rolling and interpolate: # Find consecutive values df["check"] = (df.diff() > 0).rolling(6).sum() df["check"] = df["check"].mask(df["check"] < 6).mask(df["check"] >= 6, 1) # Mark values df = df.interpolate(limit_direction="backward", limit=5).fillna(0) Then: print(df) # Output values check 0 1 0 1 2 1 2 3 1 3 4 1 4 5 1 5 6 1 6 7 1 7 5 0 8 6 0 9 5 0 10 3 0 11 11 1 12 12 1 13 13 1 14 14 1 15 15 1 16 16 1 17 4 0 18 3 0 19 8 0 20 9 0 21 10 0 22 2 0 This solves the problem but still has an explicit for loop, is there any way to remove that? Right. 
I've removed the for-loop and also the call to apply (since it is a for-loop in disguise). Cheers.
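For anyone looking for a fully vectorized version: one way is to keep the rolling(6) detection from the question and then propagate each hit backwards over its window with a reversed rolling max, which stays entirely inside the pandas API (no explicit loop, no apply). A sketch, marking the six points of each run as in the desired output shown in the question:

```python
import pandas as pd

def nelson_rule_3(s, n=6):
    """Mark points belonging to a run of n consecutive increases
    (or decreases). Matching the desired output in the question,
    the point *before* the first increase of a run is not marked."""
    def runs(cond):
        # 1 at the last point of each window of n consecutive hits
        end = (cond.astype(int).rolling(n).sum() == n).astype(int)
        # propagate each hit back over the n-1 earlier points of its window
        return end[::-1].rolling(n, min_periods=1).max()[::-1].astype(bool)
    d = s.diff()
    return runs(d > 0) | runs(d < 0)

data = pd.Series([1, 2, 3, 4, 5, 6, 7, 5, 6, 5, 3])
print(nelson_rule_3(data).astype(int).tolist())
# [0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
```

Reversing the hit series and taking a look-behind rolling max turns it into a look-ahead, so every point within n positions before a run end gets marked, all in vectorized pandas operations.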
Does using a laptop riser help make the workplace more ergonomic? I use a laptop (a 13-inch Macbook Pro) for my daily programming work. I've faced some back problems in the past, which I currently address through stretching exercises (via yoga). Is there a way to modify my environment to improve the ergonomics of using a laptop all day? Have laptop risers been found to help with this as claimed in this article? I understand a caveat that comes with the use of a laptop riser is that you have to use an external mouse. How do the ergonomics of a stand + external mouse compare to those of a plain laptop? It would be nice to hear what health benefits you got by using a laptop with a riser. This will help me decide if it's really worth getting one (although I do understand it might not give me all the benefits that it might give another). A riser may work for one person but not another. You need to find what works for your desk, your computer, and your body. There is no "one size fits all" answer here. boddhisattva welcome to The Workplace. I've made a fairly major edit to your question to focus it on something that we can answer rather than an opinion survey. If I've misunderstood you please feel free to [edit] further. Please check out our short [tour] for more about the kinds of questions we're looking for here. Thanks. Monica, Thanks for having me on this forum. I've edited the question a bit further. Hope it doesn't sound like an opinion survey with my edits. I shall go through the tour, thanks for pointing me to that. @alroc I understand where you're coming from. I've edited the question a bit. I'd like to know the benefits, in case you've experienced any with the riser. It'll help me decide whether it's worth getting one or not. Thanks You are using a 13" computer for programming? I hope you have an external keyboard and monitor plugged in? If not, I'd say that should be your first buy. @Fredrik Valid point. I do have one such setup at the office. 
I guess I might need to get one for home use as well. A riser will help your ergonomics if you can't otherwise position the display and keyboard in such a way that they are comfortable/proper. The presence of the riser will not magically make your setup "more ergonomic." It's quite possible for the wrong riser setup to continue or even cause new/worse problems. The only way to know for certain is to try it in your work space. If you have a plug-in keyboard, you can use a stack of books, reams of paper, or even cardboard boxes to change the positioning of your laptop display. Or get an external display and get that set up at the appropriate height/position. The OP has a laptop, not a workstation, which is the problem; laptops always have much worse ergonomics than a desktop. It's also not economically efficient to pay the premium for laptops if they spend 99% of the time on a desk. Workstation meaning desk/cubicle/office space including chair. @alroc - Thanks for pointing out ways to try this. I see that even Jeanne (the other user who answered this question) uses a similar setup. It would be nice if you'd add more of your experience to your answer (it could probably just give me more pointers) as to what worked for you. I'd accept your answer as you brought up the point about using books first as a way of trying things out to start with. Thanks. Laptops are poor for ergonomics. It's not possible to be at a good position for both keyboard/mouse and screen. This is why people often have to hunch down to use them. They are trying to position properly for the keyboard/mouse. (Which I don't feel like I can achieve on a desk - I use a keyboard tray). But presuming you do achieve a good position for keyboard/mouse, you are now way too low for your back/neck to look at the screen. A laptop riser alone can make the neck angle better. But then the laptop is too high for the keyboard/mouse. You should use an external keyboard and not just an external mouse. 
At home, I have the following setup: Keyboard tray with external keyboard and mouse. This is lower than the desk so it's a comfortable position to type. My Mac laptop on top of a stack of three telephone books. (A telephone book is about the same thickness as a computer book.) An external monitor at the same height as the top of the laptop. That's an interesting point that you brought up. I can try this with a set of books first and see if I find a difference, and then this can help me decide whether getting a riser would be really helpful or not. Yep, we'd need an external keyboard too; it's my bad that I forgot to mention it in the question. Thanks for the wonderful insights as to what your current setup is and what works for you, +1.
Use of the definite article "the" before "church" I was in a teacher selection for a school in my country, and one of the coordinators said that she heard a mistake from another teacher that was unacceptable. I tried to figure out why that was, but I thought it was silly and forgot about it. Then I was asking about the weekend in my classroom and one of my Ss said the same sentence. I corrected him according to the coordinator, as she is way more experienced than me, but I couldn't actually explain why to him. The sentence was: I went to the church. I can't see the mistake in this sentence if the church had been previously mentioned in the context of the conversation. I understand, as a non-native speaker, that if you are talking to a person that doesn't have any idea of where you were and doesn't have any previous information about the specific place, the article 'the' should not be used. Also, I am assuming church is a count noun. So instead, we would say: I went to a church. Is it correct to use the indefinite article since I don't have any idea of which church he is talking about? I did some research on it and found that for places that people use in common (like school, church, hospital, work) but that are not necessarily the same, we would omit the article, so we would use the sentence that the coordinator accepted: I went to church. This sounds strange to me, but since I am not a native speaker, I think that it is OK. I really don't think that it was an unacceptable mistake, since the use of the article will depend on the context. So, if I am talking to my student, asking what he did last weekend and we were not talking about anything before, which one should he use? See also: http://english.stackexchange.com/questions/19604/is-there-a-reason-the-british-omit-the-article-when-they-go-to-hospital I'm a native speaker of Australian English, and I'd accept any of those variants without blinking - the meaning changes, but none of them are ungrammatical. 
I'd use "I went to the church" to mean "I went to meet with the organisation I call 'The church,'" "I went to a church" to mean "I went to visit a building (or possibly an organisation)," and "I went to church" to mean "I went to my local church, most likely in order to attend a service." Related if not duplicate: http://english.stackexchange.com/q/19604/8019. I wrote a fairly detailed answer to a similar question a few years ago: http://english.stackexchange.com/questions/67036/how-do-american-english-and-british-english-use-the-definite-article-differently/67040#67040 @user867 It definitely depends on context. If you were talking about a village or town where there was only one church and you visited it for any purpose other than attending a service you would say "I went to the church" as there can be no confusion. An example would be "While I was in the village where my grandfather was born I went to the church to see if I could find my great-grandparents' grave". What the coordinator was probably correcting was the use of "went to the church" to mean "attended a routine church service", which is normally "went to church". This may not be relevant to your example, but there is another potential meaning for going to the church. If you have some question about doctrine, for example, and are speaking in reference to a specific religion or denomination, "I went to the church" could mean that you consulted with someone of authority representing that religion for a "definitive" answer. I am no expert, but I am a native English speaker (American). I would interpret "I went to church" to mean "I attended a church service". "I went to the/a church" would imply I visited a building. Likewise for went to school but not went to the store. The usage varies from one example to another, sometimes also between dialects (e.g., went to [the] hospital). The interesting case is “hospital”.
Americans say, “I went to the hospital” – the more common third-person form is “he is in the hospital” – whereas other English speakers will omit the article, and say, “he is in hospital.” Similarly we say "I went to work" but "I went to the theatre" or "... the cinema" even when the particular theatre or cinema has not been specified. Understanding here depends less on the meanings of church than on the meanings of go. There are numerous uses of go. Most commonly it refers to moving or traveling somewhere. In this sense, and when by church we mean a building used for Christian worship, we use the article with church according to the usual rules: I'm sure I lost my camera in Montmartre; I went to a church there— but I don't recall which one— and left it in a pew. We stopped for lunch in a small town, and I walked around a bit after we ate. I went to the church, then the square, then got an ice cream soda at the drugstore. Go can also mean to attend or visit a place or type of place for a particular purpose, however. To say you go to church means not only that you physically situate yourself at the building, but that you are engaged in regular worship services there. In this sense, you do not use an article. I went to church in the morning so I could watch the football game later. I went to church growing up, first Blessed Sacrament and then St. Ann's after we moved. But I lapsed when I moved to the city. The same change of meaning applies for a number of other words which can denote both a location and a particular engagement: court, school, market, town, and so on. To go to a jail is to visit a penitentiary facility; to go to jail is to be incarcerated; don't mix up the two in conversation. English being English, unfortunately, this is not a strict rule. Most geologic features, for example, require a definite article when used in either a specific or generic sense: the mountains, the shore, the woods, etc. 
Certain proper nouns always take the definite article as well. Thus, a simple statement can be ambiguous. I went to the Church of St. Luke when I lived in Lexington. could mean that you once visited the building known as St. Luke's, but it could also mean that you were a regular parishioner who attended services every Sunday. I went to the beach last summer. could mean you visited a particular beach once last summer, but it could also mean you went to one or a number of different beaches as a regular activity last summer. As always, context is key. When the word church is used with an article, as in the church or a church, it refers to a building or a religious organization. The word church without an article refers to a worship event or activity. Where I live in the United States, “I went to the church” and “I went to church” have distinct meanings. The first means I went to a building that is a church; the second means I participated in religious activities (probably at a church). You always include the article when talking about a building, and always omit the article when talking about church as an event or activity. Here’s another way to think of it: There is no verb form of church. If there were, we might say “I churched” to mean the same thing as “I went to church”, but we would still say “I went to the church”. [There actually is a verb form of "church", but its meaning doesn't quite fit what I was going for here.] There are other place names that follow a similar pattern in American English. For example, “I went to school” means that I was a student, while “I went to the school” means I went to a place of learning. There are verb forms of both church and school. School, yes. I see a verb form of church listed in the dictionary, but in spite of being a native English speaker I have never heard it used as a verb and would not recognize it as being something other than an error if someone used it that way. 
Regardless, the comparison was meant as a way to help people wrap their heads around the meaning of church without an article. @Darryl - While it's less common than "schooled", I've definitely heard/read "churched" on numerous occasions. And, in fact, "unchurched" is quite common. @HotLicks - Must be a regional thing. I've never heard it used as a verb. I have, however, heard "unchurched" used as an adjective. @Darryl - 1911: "One purpose of this report is to reveal the over-churched communities in Wisconsin." 1905 quoting 1639: “Doth the woman who is to be churched use the antient accustomed habit in such cases" 1982: "and she was very careful not to meet anyone for 3 weeks until she had been 'churched'." 1980: "After a fortunate confinement and after the child had been baptised, every Christian mother performed the devotion of being churched." 1924: "When Miss Susannah was born—that's Miss Honoria's mother—she went to be churched." Now I've heard it used as a verb :-) It's unfortunate that a peripheral detail that in no way changes the main point of the post has taken center stage in the comments. This is by no means an answer from an English language expert, but one from someone with an idea. We have definite and indefinite articles. In this case, we are dropping both of them. We don't use "the" nor do we use "a". We are implying an even closer relationship than definite. We are implying a personal relationship: ours. Our church. Our school. My work. So we drop the article entirely. We don't need it. We know what we're talking about. If you can think of the noun as something that is "yours," something that you are part of, something you belong to in some way, as a member, as a participant, as a student, a patient, a guest, whatever, then you most likely can drop the article. Just another slippery feature of English! If that’s the rule, why do we say, “he is in jail” rather than “he is in the jail”? I don't think this is right. 
I personally go to church (sometimes) when I am away from home, even if I have to ask at hotel reception which is the closest church or where there is a church I can attend. -1 I was a stranger in town with no sense of belonging to anything or anyone. I am an atheist but I went to church out of boredom. When a place is visited regularly as part of a routine, we will not use the when referring to the place. Example: Children go to school every day. However, two exceptions to this rule are 'temple' and 'mosque'. Hence we should use the in front of these words even if referring to a routine. Thanks I have definitely heard Jews in the US talk about "going to temple". And when one uses "go to school" the word "school" is being used as a mass noun, rather than referring to a specific school.
Macros in C++ break with postfix increments I need a working MAX macro (without(!) declaring a main function) which assigns 'r' the maximum of the numbers 'x' and 'y'. This code breaks in compilation. How can it be fixed? #define MAX(x, y, r) ((x) > (y) ? (r = x) : (r = y)) int x = 10; int y = 20; int r; MAX(x, y, r); Thanks for watching! UPD: Some revision to clarify the full task: #include <iostream> #define MAX(x, y, r) ((x) > (y) ? (r = x) : (r = y)) int x = 1; int y = 1; int r = 1; int main() { MAX(x++, y, r); std::cout << r; return 0; } The result of this code is 1, and it needs to be 2. So I need different logic in my macro to account for postfix increments. Why 2? What is the compilation error? Are you doing this outside of a function scope? That won't work, you can't put arbitrary expressions there. Your update is a prime example of why you shouldn't use macros, as any code you pass to them may be executed multiple times. Do you absolutely need a macro for this? Yes, I need a working macro. Okay, and you want x++ to be executed only once, but the value considered in the macro to be that of the variable used in the expression after the expression has been executed? I don't fully understand the question. The point is I only need the 'magic' 2nd line with the macro which accounts for all postfix increments of the numbers 'x' and 'y', including the fact that the number 'r' will also be defined as some number like in the example (UPD version). If you want that, you'd need to pass the name of the variables alongside an expression to evaluate, because the preprocessor can't extract the name of the variable from an expression used on it. Why do you need macros? There are very rarely actual requirements on the code used to do something. In 99.9% of cases, requirements are on the results of a process, not on the code used to do it. Why is e.g. an inline function not usable? IOW, this smells of an XY problem. It's used in an online test to graduate; I know it's weird but I need to do it only that way. 
You need to ensure that the macro arguments are evaluated only once. Hint: use a lambda. kirbrown: Also, please do not change your question to a different question once people have provided answers to the original question. Doing so invalidates the answers, which were contributed in good faith. There is no cost for asking a new question. "The result of this code is 1, and need to be 2." I'm sorry but with x++ it doesn't make sense. Logically it should be 1 with x++ and 2 with ++x. Do you need x++ and ++x to behave in the same way? The story is simple: I'm trying to pass some online course and I have a task about all of that stuff. It's hard to say what the machine that compiles the code and throws some test numbers into it does with them; I hardly understand what it wants from me. The input already has the main() function functionality, so everything started working on my machine after I got the answer given by @SingerOfTheFall. All is good, but the main problem is still mine, because I can't figure out how to describe it. You can't use this macro outside of a function, because it's an arbitrary expression, that's why you're getting an error. Just move the invocation of the macro into function scope and it will work: #define MAX(x, y, r) ((x) > (y) ? (r = x) : (r = y)) int x = 10; int y = 20; int r; int main() { MAX(x, y, r); } Using macros in this case is, however, unnecessary (unless this is just an exercise to learn macro usage); making max a function (or, better yet, using std::max) would be a better and less error-prone way. This technically works, but I don't think advocating macros for this use is a good idea. And using r = std::max(x, y) would be even better than rolling out your own. How does this solve the post-increment problem? It still evaluates twice whichever argument turns out to be the max. 
@rici, the op hadn't updated his question at the time I wrote this. It doesn't work because you can't put arbitrary expressions at file scope. I have a couple of suggestions: Don't use global variables unless you really, really have to. They'll just cause you pain. Don't use macros unless you really, really have to. They'll just cause you pain. Here's what I'd do: int main() { int x = 10; int y = 20; int r = std::max(x, y); //pass x, y and r as arguments to functions rather than using globals } Thanks, I know about the rules, but I have a specific task which needs to be done with macros. @kirbrown Well you could just do #define MAX(x, y) ((x) > (y) ? (x) : (y)) then int r = MAX(x,y), but I really wouldn't advise it.
Can we make a symmetric wavefunction out of two anti-symmetric wavefunctions? And, if so, then can we say that we've made a boson out of two fermions? Mathematically, if f=fermion=f(x,y) then b=boson=[f(x,y)-f(y,x)]/2 Yes, the product of two antisymmetric wavefunctions is symmetric. However, such a system is not a fundamental particle, it is a composite system (of an even number of fermions). Such systems do exist: they are called composite bosons (in order to distinguish them from elementary bosons like the photon or the Higgs boson). Examples include Cooper pairs in superconductors, and superfluid Helium-4, a Bose-Einstein condensate. That's interesting. Because the spins of two fermions add to 1, and two fermions also make a boson. So does this kinda sorta almost prove the spin-statistics theorem?
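A quick check of the symmetry claim in the answer. Note that the combination written in the question, [f(x,y)-f(y,x)]/2, equals f(x,y) when f is antisymmetric, so it is itself still antisymmetric; the symmetric object the answer refers to is the product of two antisymmetric functions:

```latex
% f and g antisymmetric under exchange of x and y:
%   f(y,x) = -f(x,y), \qquad g(y,x) = -g(x,y)
\Psi(x,y) \equiv f(x,y)\, g(x,y)
\quad\Rightarrow\quad
\Psi(y,x) = f(y,x)\, g(y,x)
          = \bigl(-f(x,y)\bigr)\bigl(-g(x,y)\bigr)
          = \Psi(x,y)
```

The two minus signs cancel, so the composite wavefunction is even under exchange, which is the defining property of a bosonic state.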
Detect scroll in 100% height fixed div I have a 100% height x 100% width fixed main container on my website, in which I would like to detect the scroll event. The page structure could be described as a slider with a fixed active slide and non-active slides being positioned absolutely below the main container. I got it working perfectly when I have a pager on the side. The thing is, I would also like the user to be able to switch between slides using only the mouse scroll (go to the next slide on scroll down, and drop the slide down revealing the previous one while scrolling up). Is there any way I can do it, keeping my page structure as is? The fiddle can be found here: http://jsfiddle.net/tts4nhun/1/ (it's just a quick recreation and I think a few things are unnecessary in the css, but they don't really change anything when it comes to the content of this question). HTML: <div class="wrapper"> <div class="slide slide-1"></div> <div class="slide slide-2"></div> </div> CSS: .wrapper { position:absolute; background:red; width:100%; height:100%; top:0; left:0; overflow:hidden; } .slide { width:100%; height:100%; -webkit-transition: all 0.5s ease-in-out; -moz-transition: all 0.5s ease-in-out; -ms-transition: all 0.5s ease-in-out; -o-transition: all 0.5s ease-in-out; transition: all 0.5s ease-in-out; } .slide-1 { position:fixed; top:0; left:0; background:blue; } .slide-2 { position:absolute; top:100%; left:0; background:green; } .slide-2.active { position:fixed; top:0; left:0; } jQuery: $('.slide-1').click(function(){ $('.slide-2').addClass('active'); }); I'm not asking about checking if the user scrolled down or up, but if they did at all. I'll be very grateful for any advice. Thanks, E. e: To clarify, I want the scrolling itself to be disabled (the user shouldn't be able to scroll down the website in a traditional way). The reason I want to detect the scroll event is, I want to swap active classes on a single scroll down or up. 
(That is also the reason why I want to keep the overflow:hidden property - I want the website to resemble a slideshow and use the scroll up/scroll down events just like up/down arrows). Please clarify your question! The only way I know around this is to make your page scrollable but all the elements fixed. So say body { height: 500%; } and #body > container { position: fixed; } then detect the onscroll and trigger at certain points.. My bad. Updated it now. Seems like the plugin below solved all the problems I had with making it work across all the browsers. http://www.ogonek.net/mousewheel/jquery-demo.html For Chrome and Firefox: $(window).on('mousewheel DOMMouseScroll', function(e){ if(e.originalEvent.detail > 0) { //scroll down $('.slide-1').removeClass('active'); $('.slide-2').addClass('active'); } else { //scroll up $('.slide-1').addClass('active'); $('.slide-2').removeClass('active'); } //prevent page from scrolling return false; }); I'm not certain, but Jason is right; the mousewheel event is also something to watch out for. I think you will run into problems with these solutions on IE though. 
If you only have two slides (and the direction of the scroll doesn't matter), you could also consider something like: $(window).on('mousewheel DOMMouseScroll', function(e){ $('.slide').toggleClass('active'); //prevent page from scrolling return false; }); Update: Okay I was bored so now I've made this (maybe I should say that I made my own assumptions about what this effect needed to look like so sliding is not necessarily ideal but I like that it toggles): $(".slide-1").slideDown(); $(window).on('DOMMouseScroll mousewheel', function(event) { $(".slide-1").slideToggle(); $(".slide-2").slideToggle(); }); .wrapper { position: absolute; background: red; width: 100%; height: 100%; top: 0; left: 0; overflow: hidden; } .slide { position: absolute; width: 100%; height: 100%; top: 0; } .slide-1 { background: blue; } .slide-2 { background: green; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="wrapper"> <div class="slide slide-1"></div> <div class="slide slide-2"></div> </div> Sadly no, more than two :) Thanks a lot for help. Hope you won't mind if I mark Jason's comment as the right answer, since it's working fine and includes the fiddle. You could try the following: $(".wrapper").on('mousewheel DOMMouseScroll', function (event) { if (event.originalEvent.wheelDelta > 0 || event.originalEvent.detail < 0) { $('.slide-2').addClass('active'); } else { $('.slide-2').addClass('active'); } }); See fiddle: http://jsfiddle.net/audetwebdesign/682opyoz/ There may be some cross browser issues and you will need to make sure that the user can get back to slide-1. Reference: Get mouse wheel events in jQuery? Note: If you have more than two slides, this approach could get complicated. As far as detecting the mouse wheel event, you might need to write your own listener as discussed at: https://developer.mozilla.org/en-US/docs/Web/Events/wheel You need to make sure that you can detect scroll-up/down in a cross-browser robust way.
You then need to know which slide you are on and show the next one up/down as required. You're right. I started testing and there is no chance to detect the scroll up with the above solution. So, what is needed is to detect scroll up and scroll down and switch back and forth from slide-1 to slide-2? Yeah, actually the example is very simplified and has 2 slides only, in reality there are multiple. I added some comments that might help, but the solution requires more work than I have time for, but an interesting problem. I suspect there are jQuery plug-ins that do this. In fact you're right! I just only found the plugin for it and it seems to have fallback even for ie8, which is my real Nemesis :) http://www.ogonek.net/mousewheel/jquery-demo.html http://jsfiddle.net/tts4nhun/8/ This works in IE9 and Chrome. I don't have firefox. Let me know. $(window).on('mousewheel', function(event) { if (event.originalEvent.wheelDelta >= 0) { $(".slide-2").removeClass("active"); $(".slide-2").animate({top: "100%"},500,"linear"); } else { $(".slide-2").addClass("active"); $(".slide-2").animate({top: "0"},500,"linear"); } }); "As of jQuery 1.7, the .on() method is the preferred method for attaching event handlers to a document" http://api.jquery.com/bind/ Ah thanks for pointing it out. Entalpia please change the bind to on. I was very optimistic accepting the answer, but it does have some flaws, still. It works only in Chrome and when I try to use it cross browser, I only get more and more problems (even after changing the above to $(window).bind('mousewheel DOMMouseScroll MozMousePixelScroll', 'on', function (event) - then it doesn't scroll back up). or you can simply use vanilla JavaScript, like this: window.addEventListener('wheel', function(e) { const direction = e.deltaY; if (direction > 0) { console.log('scrolling down'); } else { console.log('scrolling up'); } })
Teradata Large Table Partitioning or Split into multiple parts We have a large table (4 Terabytes) in Teradata. For compliance reasons, we need to keep 13 months' worth of data. However, for querying, we may only need one month's worth of data (300GB). I have already added partitioning on a Timestamp column for a day interval. But I was wondering if I could split the table into 2 parts like below; would it help in query performance? Table1 -- Contains data for last 30 days Table2 -- Contains data older than a month but newer than 13 months As long as you get partition elimination/pruning (based on your WHERE-condition on the timestamp column) read performance will not improve. But if you got additional indexes on the table you might implement them only on the 30 day table, saving 90% of the index space. It makes sense. Thanks.
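The answerer's point about partition elimination can be sketched as follows; the table and column names here are invented for illustration, not taken from the actual system, and the syntax should be checked against your Teradata version:

```sql
-- Hypothetical Teradata DDL: one table, daily partitions, ~13-month range
CREATE TABLE sales_events (
    event_id BIGINT NOT NULL,
    event_ts TIMESTAMP(0) NOT NULL,
    amount   DECIMAL(18,2)
)
PRIMARY INDEX (event_id)
PARTITION BY RANGE_N (
    CAST(event_ts AS DATE)
    BETWEEN DATE '2023-01-01' AND DATE '2024-12-31'
    EACH INTERVAL '1' DAY
);

-- A range predicate on the partitioning expression lets the optimizer scan
-- only ~30 of the ~400 daily partitions, so no second table is needed.
SELECT event_id, amount
FROM sales_events
WHERE CAST(event_ts AS DATE) >= CURRENT_DATE - 30;
```

With elimination working, the split-table design buys nothing for reads; its only benefit, as the answer notes, would be smaller secondary indexes on the 30-day table.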
Coplanar condition My question is simple: I want to verify two statements. These questions are stated in Hartshorne's book, Proposition (IV.3.8). Let $X$ be a projective variety in $\mathbb{P}^n$. Is the set $\{(p,q)\in X \times X | L_p \cap L_q \neq \emptyset\}$ a closed set, where $L_p$ is the tangent line to $X$ at $p$? Fix a point $p\in \mathbb{P}^n$. Is the set $\{q\in X | p\in L_q\}$ a closed set?
A question about the well-ordering theorem In Munkres' book "Topology", he states that Well-Ordering Theorem: If $A$ is a set, there exists an order relation on $A$ that is a well-ordering. Then, in his discussion of the Maximum Principle, he writes that For a given uncountable set $A$, we know from the well-ordering theorem that there is a bijection between $A$ and some well-ordered set $J$ so that we can index the elements of $A$ with the elements of $J$, i.e. $A = \{a_\alpha : \alpha \in J\}$ My understanding of the Well-ordering theorem is that for a given set $A$, we can put a well-ordering on this set, so is the set $J$ that Munkres refers to actually the set $A$ itself and the bijective map is the identity map $f(\alpha) = \alpha$ for $\alpha \in A$? Can this set $J$ be something different? I think you could assume that, yes. We could assume $J$ is a cardinal in its standard ordinal well-order, if you know some set theory. Then the same index set is "canonical" in a way, and the same one could be used for many sets. But taking the identity and using the promised well-order from the first theorem would work too. I see. But, from the first theorem alone, is there a way to show existence of some bijective map from the set $A$ to some well-ordered set $J$, which is not equal to the set $A$ itself? Munkres didn't write that we can take $A$ as $J$. Why? I think he is a very kind writer. $J$ can be anything that has the same cardinality as $A$. For example, you can index $\{k\in\Bbb N\mid k\text{ is even}\}$ using the natural numbers, $\{2n\mid n\in\Bbb N\}$; or using the odd natural numbers, $\{k-1\mid k\in\Bbb N, k \text{ is odd}\}$; or using any other countable set. How do I know that a set with the same cardinality as $A$ exists? Well, ${A}\times A$ has the same cardinality as $A$. And assuming the usual axioms of set theory, ${A}\times A\cap A=\varnothing$.
Finding the Outlines of a Graph I'm trying to find the equations of the outlines of a graph in python with matplotlib.pyplot (in terms of μ and σ). Using numpy and scipy, I have already outlined the graph, but I can't find a way of getting the equations of both of the lines in terms of the mean and standard deviation. import matplotlib.pyplot as plt import numpy as np from scipy.spatial import ConvexHull x_axis = [] y_axis = [] nth_triple = [] average_triple = [] def return_average(lst): counter = 0 for el in lst: counter += el return counter/len(lst) def pythagoreanTriplets(limits) : counter = 0 c, m = 0, 2 while c < limits : for n in range(1, m) : a = m * m - n * n b = 2 * m * n c = m * m + n * n if c > limits : break triple = [a, b, c] x_axis.append(a) y_axis.append(b) nth_triple.append(counter) average_triple.append(return_average(triple)) counter += 1 m = m + 1 limit = 100000 pythagoreanTriplets(limit) points = np.column_stack((nth_triple, average_triple)) hull = ConvexHull(points) mu_x = np.mean(points[hull.vertices, 0]) mu_y = np.mean(points[hull.vertices, 1]) sigma_x = np.std(points[hull.vertices, 0]) sigma_y = np.std(points[hull.vertices, 1]) plt.scatter(nth_triple, average_triple, s=1, marker=',', color='orange', edgecolors='none') plt.plot(points[hull.vertices, 0], points[hull.vertices, 1], color='black', label='Convex Hull Outline', linestyle='dotted') plt.show() The dashed lines are the equations I want to find: Any guidance would be greatly appreciated. Thanks in advance. The standard deviation has no bearing on the matter; you've plotted the mean so we'll analyse the mean. You failed to mention this, but you're using Euclid's formula. That's the easy part. The trickier part is that you have a non-monotonic n.
You need to do some algebra to figure out a continuous function for m as a function of your counter index; then we have import math import matplotlib import matplotlib.pyplot as plt import statistics import numpy as np ''' a**2 + b**2 == c**2, all integers; a = m*m - n*n b = 2*m*n c = m*m + n*n (a + b + c) = m*m - n*n + 2*m*n + m*m + n*n = 2*m*m + 2*m*n = 2*m*(m + n) (m-1)(m-2)/2 == counter m2 - 3m + 2 - 2counter = 0 m = (3 + sqrt(9 - 4*(3 - 2counter)))/2 m = 0.5*(3 + math.sqrt(9 - 8*(1 - i)) Minimum when n = 1, (a + b + c)/3 = 2*m*(m + 1)/3 Maximum when n = m - 1, (a + b + c)/3 = 2*m*(m + m - 1)/3 = 2*m*(2*m - 1)/3 ''' def pythagorean_triplets(limits: int) -> tuple[ list[int], list[int], ]: nth_triple = [] average_triple = [] counter = 0 c, m = 0, 2 while c < limits: for n in range(1, m): # Euclid's formula a = m*m - n*n b = 2*m*n c = m*m + n*n if c > limits: break triple = (a, b, c) nth_triple.append(counter) mean = statistics.mean(triple) average_triple.append(mean) counter += 1 m += 1 return nth_triple, average_triple def main() -> None: nth_triple, average_triple = pythagorean_triplets(limits=10_000) matplotlib.use('TkAgg') plt.scatter( nth_triple, average_triple, s=3, marker=',', color='orange', edgecolors='none', ) m = 0.5*(3 + np.sqrt(9 - 8*(1 - np.array(nth_triple)))) plt.plot(nth_triple, 2*m*(m + 1)/3) plt.plot(nth_triple, 2*m*((m - 1)*2 - 1)/3) plt.show() if __name__ == '__main__': main()
submit payload nonce to server Braintree SDK Paypal ruby rails I'm using the Braintree PayPal SDK to render a PayPal button in a form with hosted fields. However, I cannot figure out how to submit the nonce to the server. How do I do that in this section? onAuthorize: function (data, actions) { return paypalCheckoutInstance.tokenizePayment(data) .then(function (payload) { // Submit `payload.nonce` to your server //console.log (payload.nonce) }); }, my controller action is def payment Cart.find(session[:cart_id]) result = Braintree::Transaction.sale( amount: current_order.subtotal, payment_method_nonce: params[:payment_method_nonce], :options => { :submit_for_settlement => true}, ) response = {:success => result.success?} if result.success? response[:transaction_id] = result.transaction.id current_order.update(status: "purchased") ReceiptMailer.purchase_order(current_passenger, current_order).deliver_now redirect_to root_path, notice: "Thank you for booking, Please check your email for invoice" session.delete(:cart_id) elsif result.transaction redirect_to cart_path, alert: "something went wrong, your transactions was not successful!" end end You will need to generate a request in JavaScript to pass the payment nonce to your server. Here's a simple example of generating a request using jQuery's ajax method: $.ajax({ method: "POST", url: "/payment", data: { payment_method_nonce: payload.nonce } })
git push NOT current branch to remote Is there a way in a git bare repository to push a branch that is not in HEAD right now? For example I have two branches: $ git branch * master another And I have two remotes set: origin and another. I need to be able to push from another to another/another in just one command without changing HEAD. You might consider not having the exact same name for a remote and a branch. It is confusing. You can use git branch -m another another_branch or git remote rename another another_remote @KlasMellbourn, that is just for the purpose of example. Of course I don't have this weird naming. With git push you can specify the remote and the local git push remotename branchname Does it mean that it will push local another to another/another? I always thought it will push current HEAD to another/another. Yes. (The second) another is a refspec, which (in general) has the form src:dst. This means to push the local branch src to the remote branch dst. If :dst is omitted, the local branch src is pushed to the remote branch src. the current HEAD is just the default, but you can specify any branch (or more generally any refspec) as Lars was pointing out @LarsNoschinski So technically I can even specify to push local another to remote/master by doing $ git push another another:master? (of course that's not what I am going to do, but just want to make sure I understand it right). Yes your understanding is correct. That would push the content of the local another to the remote another/master As a matter of fact git push another another is totally equivalent to git push another another:another. I feel like I cannot stand another another, though. Didn't work for me, but that's because git was written by people who hate documentation. I found this explanation was misleading. As of git 2.14.1, I had to do git push origin mybranch:mybranch. This comment thread has gotten out of hand.
Is the only way to do it now git push origin otherbranch:otherbranch BTW it's 2019 now. All those "another another" in the original question, the answer and lots of comments are so confusing (which is a perfect example of why it is important to name your things right in the first place), I can't help helping (pun not intended) to write yet another answer as below. Q: Is there a way in git (bare) repository to push a branch that is not in HEAD right now? For example i have two branches and two remotes. I need to be able push from feature to upstream/feature just in one command without changing HEAD. $ git branch * master feature $ git remote origin upstream A: Do git push remote_name branch_name. In the case above, it looks like this. $ git push upstream feature Q: Does it mean that it will push local feature to upstream/feature? I always thought it will push current HEAD to upstream/feature. A: Yes. The feature part is a refspec, which has the form src:dst. This means to push the local branch src to the remote branch dst. If :dst is omitted, the local branch src is pushed to the remote branch src. You can specify a different name as remote branch too. Just do: $ git push upstream feature:cool_new_feature (Thanks @gabriele-petronella and @alexkey for providing materials for this answer.) Note that "upstream" is just an alias for a particular remote. Odds are, if you've been following the most typical convention online (ie, you copy paste a lot), it's actually gonna be "origin". See: https://stackoverflow.com/questions/9529497/what-is-origin-in-git @Kat: while both upstream and origin are "just" an alias for a particular remote, they have different meanings, by convention of github fork workflow. Do not assume you can always replace upstream by origin in any git command, unless you know what you are doing. 
When I tried to push a new branch to the remote using --set-upstream (-u), git push MyLocalBranch -u origin MyLocalBranch, and I did not have the branch checked out, I received the error fatal: 'MyLocalBranch' does not appear to be a git repository fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. Thanks to the comment git push NOT current branch to remote Yes. (The second) another is a refspec, which (in general) has the form src:dst. This means to push the local branch src to the remote branch dst. If :dst is omitted, the local branch src is pushed to the remote branch src. from Lars Noschinski I figured that the solution was to call git push -u origin MyLocalBranch:MyLocalBranch Irrelevant to the question, and not needed as the question is fully and correctly answered. This is more of a diary entry or autobiography of a personal success. You are right matt. But the intention was to help out if anyone else stumbled into this answer like I did. Should have created a new question and answered it.
FileSystemException: Cannot open file, path I'm trying to read an Excel file that will be provided by the user via file_picker: ^2.0.13 using spreadsheet_decoder: ^1.2.0. Here's the method that reads the file: void _excelReader({String filepath}) async { var file = filepath; var bytes = File(file).readAsBytesSync(); var decoder = SpreadsheetDecoder.decodeBytes(bytes, update: true); for (var table in decoder.tables.keys) { print(table); print(decoder.tables[table].maxCols); print(decoder.tables[table].maxRows); for (var row in decoder.tables[table].rows) { print('$row'); } } } I get this exception when trying to run: FileSystemException: Cannot open file, path = '(/data/user/0/com.example.sms_sender_app/cache/file_picker/Classeur.xlsx)' (OS Error: No such file or directory, errno = 2) I also tried to read the file via excel: ^1.1.5 and the same exception was caught. Did you add necessary permissions in the manifest file ? This is what I added : Are you sure that the file exists there when you are reading the content ? Can you try opening some other files from a different location ? Actually even when I put something like C:/Users/xxx/Files.xlsx in the path, I mean from the local computer storage, I get the same exception caught. Use the device file explorer and check whether your file exists. Also copy the excel file to the Downloads or Documents folder of your Android device and try to read. One more thing, if you are hard-coding the file path, that can be an issue. I'm confused because when reading the example written under the excel 1.1.5 plugin on pub.dev, it clearly shows that the path could be from the local computer storage var file = "/Users/kawal/Desktop/form.xlsx". Also, is the path_provider plugin necessary in my case ? Probably the example is for Flutter web, I'm currently checking the Android one. Will update Your code works perfectly with the correct path.
I've used path_provider to get the correct path and passed that to your function, and it printed the correct output. Seems like the issue is in your path. Can you please show me how you did it? So if I got this right, the process should be file_picker -> path_provider -> excel_plugin ? If you are using file picker I don't think you need to use path_provider. You can get the file like this `FilePickerResult result = await FilePicker.platform.pickFiles(); if(result != null) { File file = File(result.files.single.path); } else { // User canceled the picker }` Really sorry for the delayed answer, thank you so much for your help, that worked, you are a life saver :) No worries. Happy Coding !!!
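Putting the accepted advice together, a sketch of the two plugins combined. The names follow file_picker 2.x and spreadsheet_decoder 1.x as used in the question; treat the exact API as an assumption to verify against the current package docs:

```dart
import 'dart:io';
import 'package:file_picker/file_picker.dart';
import 'package:spreadsheet_decoder/spreadsheet_decoder.dart';

Future<void> pickAndReadExcel() async {
  // Let file_picker supply the path instead of hard-coding one.
  final result = await FilePicker.platform.pickFiles(
    type: FileType.custom,
    allowedExtensions: ['xlsx'],
  );
  if (result == null) return; // user cancelled the picker

  final bytes = File(result.files.single.path).readAsBytesSync();
  final decoder = SpreadsheetDecoder.decodeBytes(bytes);
  for (final table in decoder.tables.keys) {
    for (final row in decoder.tables[table].rows) {
      print(row);
    }
  }
}
```

Reading the path the picker itself returns avoids the "No such file or directory" error, since the picker copies the selection into the app's own cache directory.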
Updating background colour using batchUpdate in Google Apps Script API I am trying to use Google Apps Script's batchUpdate to update the styling of a range of cells. I have put together the very simple example below to hopefully get started; however, I am getting the following error message back from it. GoogleJsonResponseException: API call to sheets.spreadsheets.values.batchUpdate failed with error: Invalid JSON payload received. Unknown name "requests": Cannot find field. at updateGoogleSheet(fullSheet/fullSheet:316) My code to try and do the update is as follows var data = { requests: [{ updateCell: { range: 'Sheet3!A3', cell: { userEnteredFormat: { backgroundColor: { red: 1 } } }, fields: 'userEnteredFormat(backgroundColor)' } }] }; Sheets.Spreadsheets.Values.batchUpdate(data, spreadsheetId); It would be a lot cleaner to get and set formatting for ranges rather than batch updating cells, why are you choosing to do it this way? It's because I need to set a lot of different formats at once over a variety of ranges, so I thought it would be better if I could do it in a batch so I don't hit concurrency limits, which I was before when doing individual ranges. Unless you know of a different way which won't hit the limits of Google? I think you should try it with RangeLists, concurrency won't apply because you aren't going through the advanced API. It sounds like what you really want is to declare a rangelist and apply formatting to those in bulk. var sheet = SpreadsheetApp.getActiveSheet(); var rangeList = sheet.getRangeList(['A:A', 'C:C','D4']); rangeList.setBackground('red'); sheet.getRangeList(['B3','F6']).setFontFamily("Roboto"); Will this handle hundreds of different ranges though, all with different formats? Potentially I'll have thousands that need processing, all different, so it's not like I can just set the format for a large range; they all may be different per cell? Will this run into max concurrent thresholds doing it this way?
max concurrent is only going to be api related. You might run out of processing time, but I'd test 100 and see how long it takes. I’ve seen an exception of too many simultaneous events being fired before on the GAS side, should this be ok with this? I’ll give it a try tomorrow and get back to you! another option is creating arrays of colors, weights, etc and then applying them as a set Actually the one I’m interested in particularly is the pattern, could this be done as a set? Do you have any snippets you can share? they won't be simultaneous, they will apply one range at a time as it goes through the function Look at this article : https://developers.google.com/apps-script/guides/support/best-practices it explains how to preload formatting cell by cell in an array and then mass apply: sheet.getRange(1, 1, 100, 100).setBackgroundColors(colors); That’s great thanks so much for the info I’ll give these a go tomorrow
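For completeness on the original error: "Unknown name requests" comes from sending a formatting request to the values endpoint. Sheets.Spreadsheets.Values.batchUpdate expects value data, while a requests array belongs to Sheets.Spreadsheets.batchUpdate. A sketch of the corrected call (Apps Script only; the sheetId of 0 and the grid indices standing in for A3 are assumptions to adapt):

```javascript
// Apps Script advanced Sheets service -- runs only inside Apps Script.
var data = {
  requests: [{
    repeatCell: {
      // GridRange, not an A1 string: indices are 0-based, end-exclusive.
      range: { sheetId: 0, startRowIndex: 2, endRowIndex: 3,
               startColumnIndex: 0, endColumnIndex: 1 }, // = A3
      cell: { userEnteredFormat: { backgroundColor: { red: 1 } } },
      fields: 'userEnteredFormat.backgroundColor'
    }
  }]
};
Sheets.Spreadsheets.batchUpdate(data, spreadsheetId); // note: no ".Values"
```

That said, the RangeList and setBackgrounds approaches discussed above stay inside the plain SpreadsheetApp service and sidestep the advanced-service quota question entirely.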
Unbound breakpoints in VSCode | React I am trying to debug my React application's auth.js file; however, the breakpoints are unbound. My project structure is as follows: The file which I am trying to debug (auth.js) is under this folder. However, all the frontend code is in a different folder named "Frontend". Therefore, to start the React project, it is required to go inside the path Frontend/sprint and then enter npm start. However, when I run my project in debug mode in order to debug auth.js, the breakpoints are unbound. My launch.json file is attached here below. Further, the debug console is showing the following error: Is there any possible way to fix this debugging issue?
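Since the actual launch.json did not survive in the thread, here is a hedged sketch for this layout. The key detail is pointing webRoot at the nested CRA folder (Frontend/sprint) rather than the workspace root; the port and folder names are assumptions taken from the question:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch Chrome against CRA dev server",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:3000",
      // breakpoints bind only if webRoot matches where "npm start" runs
      "webRoot": "${workspaceFolder}/Frontend/sprint/src",
      "sourceMaps": true
    }
  ]
}
```

When webRoot points at the wrong directory, VS Code cannot map the dev server's source maps back to files on disk, which is exactly the "unbound breakpoint" symptom.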
EFProf .NET Framework 4.5 I recently started using EF5 with .NET Framework 4.5. I wish to use Entity Framework Profiler, but when I try to initialize it, I get an exception System.InvalidOperationException: System.InvalidOperationException: The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?) Note that I have referenced the System.Data.Entity assembly. Maybe I have to set up some assembly binding redirect in app.config? Any suggestion? Thanks in advance This solution works for me, in the same situation: Uninstall NuGet package: Entity Framework Profiler (I had tried installing EFProf from NuGet) Uninstall all instances of Entity Framework Profiler in Control Panel > Add or Remove Programs Download EFProf from the HibernatingRhinos webpage. Unzip in any folder Add a reference in the Visual Studio project to HibernatingRhinos.Profiler.Appender.dll in that folder Start EFProf.exe and execute the Visual Studio solution That's all
SQL query takes more than an hour to execute for 200k rows I have two tables, each with around 200,000 rows. I have run the query below and it still hasn't completed after running for more than an hour. What could be the explanation for this? SELECT dbo.[new].[colom1], dbo.[new].[colom2], dbo.[new].[colom3], dbo.[new].[colom4], dbo.[new].[Value] as 'nieuwe Value', dbo.[old].[Value] as 'oude Value' FROM dbo.[new] JOIN dbo.[old] ON dbo.[new].[colom1] = dbo.[old].[colom1] and dbo.[new].[colom2] = dbo.[old].[colom2] and dbo.[new].[colom3] = dbo.[old].[colom3] and dbo.[new].[colom4] = dbo.[old].[colom4] where dbo.[new].[Value] <> dbo.[old].[Value] from comment; You are probably getting locked. Try to use with (nolock) to verify. Just to be sure, please also add the execution plan. 200,000 rows at a time is too much and performance should be slow. Try using pagination and show 10 - 20 rows per page. This might help a little. I can't generate an execution plan, because the query won't finish executing, so I will delete some rows first; bear with me Let us continue this discussion in chat. @Fredou, why did you suspect the NULLs? @DuduMarkovitz the screenshot is showing a nullable field being used in the join / where clause @Fredou - yes it is, but why did you think it might be an issue? The problem was that my table was full of NULL values.... -.- @DuduMarkovitz when playing with null in a join/where clause you have to be careful. @Fredou - were you familiar with the behaviour I am describing in my answer? @DuduMarkovitz not exactly, but I was going to suggest handling null in the join clause; doing it in the where does work too @Fredou - I would love to meet the guy that was responsible for this "feature" :-) @wouterdejong, be explicit too with the join: put INNER JOIN, not JOIN. The default is INNER, but it's better when it is explicit @Fredou better how?
The only way I can think of is from a readability perspective for people who don't know that JOIN and INNER JOIN are functionally identical... It seems that for an equality join on a single column, the rows with NULL value in the join key are being filtered out, but this is not the case for joins on multiple columns. As a result, the hash join complexity is changed from O(N) to O(N^2). ====================================================================== In that context I would like to recommend a great article written by Paul White on similar issues - Hash Joins on Nullable Columns ====================================================================== I have generated a small simulation of this use-case and I encourage you to test your solutions. create table mytab1 (c1 int null,c2 int null) create table mytab2 (c1 int null,c2 int null) ;with t(n) as (select 1 union all select n+1 from t where n < 10) insert into mytab1 select null,null from t t0,t t1,t t2,t t3,t t4 insert into mytab2 select null,null from mytab1 insert into mytab1 values (111,222); insert into mytab2 values (111,222); select * from mytab1 t1 join mytab2 t2 on t1.c1 = t2.c1 and t1.c2 = t2.c2 For the OP query we should remove rows with NULL values in any of the join key columns. 
SELECT dbo.[new].[colom1], dbo.[new].[colom2], dbo.[new].[colom3], dbo.[new].[colom4], dbo.[new].[Value] as 'nieuwe Value', dbo.[old].[Value] as 'oude Value' FROM dbo.[new] JOIN dbo.[old] ON dbo.[new].[colom1] = dbo.[old].[colom1] and dbo.[new].[colom2] = dbo.[old].[colom2] and dbo.[new].[colom3] = dbo.[old].[colom3] and dbo.[new].[colom4] = dbo.[old].[colom4] where dbo.[new].[Value] <> dbo.[old].[Value] and dbo.[new].[colom1] is not null and dbo.[new].[colom2] is not null and dbo.[new].[colom3] is not null and dbo.[new].[colom4] is not null and dbo.[old].[colom1] is not null and dbo.[old].[colom2] is not null and dbo.[old].[colom3] is not null and dbo.[old].[colom4] is not null Using EXCEPT join, you only have to make the larger HASH join on those values that have changed, so much faster: /* create table [new] ( colom1 int, colom2 int, colom3 int, colom4 int, [value] int) create table [old] ( colom1 int, colom2 int, colom3 int, colom4 int, [value] int) insert old values (1,2,3,4,10) insert old values (1,2,3,5,10) insert old values (1,2,3,6,10) insert old values (1,2,3,7,10) insert old values (1,2,3,8,10) insert old values (1,2,3,9,10) insert new values (1,2,3,4,11) insert new values (1,2,3,5,10) insert new values (1,2,3,6,11) insert new values (1,2,3,7,10) insert new values (1,2,3,8,10) insert new values (1,2,3,9,11) */ select n.colom1, n.colom2 , n.colom3, n.colom4, n.[value] as newvalue, o.value as oldvalue from new n inner join [old] o on n.colom1=o.colom1 and n.colom2=o.colom2 and n.colom3=o.colom3 and n.colom4=o.colom4 inner join ( select colom1, colom2 , colom3, colom4, [value] from new except select colom1, colom2 , colom3, colom4, [value] from old ) i on n.colom1=i.colom1 and n.colom2=i.colom2 and n.colom3=i.colom3 and n.colom4=i.colom4
Rename macOS subfolders in my home directory to lowercase I'm looking to change the casing of macOS's default folders (the ones that sit in my home folder) to lowercase. ~/myname/Desktop -> ~/myname/desktop ~/myname/Downloads -> ~/myname/downloads ~/myname/Music -> ~/myname/music If possible, it'd be great to avoid moving the home folder itself, and play with symlinks. Is it possible to do this? sudo mv ~/Downloads ~/downloads from the terminal worked for me. I'm running Lion with default HFS+ settings (case-preserving, not case-sensitive) - if you are on a later version or a different filesystem and it doesn't work, edit the results of diskutil info / into your question. @lx07 This seemed to work! Do you think it'll be dangerous to rename ~/Library, too? No, it will be fine - as long as you don't have a case-sensitive file system. I tried renaming it to ~/library and it caused no issue. Thank you! It worked well, no problems so far :)
plot solution that becomes multivalued Consider Burgers' equation $u_t + u u_x =0 $ with initial data $u(x,0) = e^{-x^2}$. The characteristics are given by $$ dx/ds = u, \quad dt/ds = 1, \quad du/ds = 0 $$ with initial curve $\Gamma = (r,0,e^{-r^2})$. So we have $$ t = s, \quad u = k $$ where $k$ is constant, and $$ x = us + r $$ since $x(0)=r$. Now, since $u(x,t) = k = u(r,0) = e^{-r^2} = e^{-(x-ut)^2}$, this is the solution. Notice that when $t=0$ we obviously get our initial data $u(x,0) = e^{-x^2}$ and the characteristic line is $x= r $. Say now we increase to $t=1$; the characteristic line is $x=u+r$ and our solution becomes $$ u(x,1) = e^{-(x-u(x,1))^2} $$ How can we plot this? I agree with your result : $$u(x,t) = e^{-(x-ut)^2}$$ This is an implicit equation. If you want $u(x,t)$ explicitly, then a special function is required, namely the Lambert W function. If you only want to plot $u(x,t)$, using the Lambert W function would be complicated, but possible. There is a much simpler way. Plot $x(u,t)$, that is, plot $x$ as a function of $u$ for various $t$. This is an explicit function : $$x(u,t)=t\,u\pm\sqrt{-\ln|u|}$$ Plot the two branches corresponding to the signs $+$ and $-$ successively. With rotation/symmetry you get the wanted orientation of the figure. Or alternatively, directly plot $x$ as a function of $u$ on the inverted system of axes. Note: The function $u(x,t)$ becomes multivalued at $t=\sqrt{\frac{e}{2}}$ (curve in red). $\frac{du}{dx}=\infty$ at $\left(u=e^{-1/2} \:;\: x=\sqrt{2}\right)$.
Performance Issue with Spring BOOT 1.5.6 We are using a Spring Boot Java-based REST API application where we have the below Spring MVC async parameters. Under heavy load, when the endpoint is tested, it returns an API response averaging 30-50 seconds. This happens when we have a sudden burst lasting 10 minutes. Our ideal time for the API response at the 75th percentile is between 1-2 seconds. Below is the configuration; we are using 6 C5x large instances having 4 cores per instance. spring.mvc.async.properties.web.executor.minPoolSize=50 spring.mvc.async.properties.web.executor.maxPoolSize=100 spring.mvc.async.properties.web.executor.maxQueueSize=50 #Hikari Data source properties. spring.datasource.hikari.minimumIdle=25 spring.datasource.hikari.maximumPoolSize=90 spring.datasource.hikari.idleTimeout=600000 Any scalability suggestions are appreciated. We also identified in a few calls that DB calls are taking time, and we are trying to find out if anything needs to be fine-tuned in the query, but I think the threads are waiting on the DB response. Also, with the async thread executor using discard policy, is there any chance of rejecting any task submitted? I am expecting the tasks to be queued instead of rejected under load. We moved away from callerRuns policy to discard policy. Any thoughts on that, or anything else required from the Spring Boot side or from the thread pool execution side? Thanks What is your regular load? And the load during the spike? How long does the spike stay? We have an initial burst of 10 seconds and request load peaks up to 10k/minute. The spike stays for almost 10-15 minutes, and about 5% of the calls fail as we have a 10-second SLA limit. I think as a first resort you should try to identify the bottleneck of this flow. The key tool for this is metrics. I see that you use Hikari here, and it exposes metrics automatically on its own.
Maybe the database works hard and becomes a bottleneck; in this case, it will take a relatively long time to acquire a DB connection from the pool. Another possible issue can be if the actual requests to the service carry a lot of content (maybe it's a "big file upload" operation, I don't know whether that's the case, but it is still worth checking). So I suggest using metrics (built-in or custom). Spring Boot has excellent integration with metering systems (Micrometer for Spring Boot 2 and Dropwizard Metrics for Spring Boot 1.x). Thanks Mark for your reply. That's correct, I am looking at the metrics exposed by Micrometer with /metrics. Also, we just modified it to see how it works going from a bounded queue to an unbounded queue, by only specifying the core pool size of 100 and removing all the above-mentioned custom tuning of the min pool size and max pool size. The DB connections we have right now are 90 * 6 instances. Under load it's using only up to 300+ connections. The service returns a payload between 1.8 KB and a max of 2 MB.
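The rejection question above can be answered directly: with a bounded queue, `DiscardPolicy` silently drops a task once both the pool and the queue are full, so under a burst some requests can vanish with no error or log line. A minimal sketch using plain `java.util.concurrent` (not Spring's `ThreadPoolTaskExecutor`, though that wrapper delegates to the same `ThreadPoolExecutor` semantics) — the pool and queue sizes here are deliberately tiny for illustration, not the values from the question:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DiscardDemo {
    // With corePoolSize=1, maxPoolSize=1 and a queue of capacity 1, the
    // third submitted task finds the worker busy and the queue full, and
    // DiscardPolicy drops it silently -- no exception, nothing queued.
    static int runDemo() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.DiscardPolicy());
        CountDownLatch gate = new CountDownLatch(1);
        AtomicInteger ran = new AtomicInteger();
        Runnable task = () -> {
            try {
                gate.await();          // hold the worker until all submits happen
            } catch (InterruptedException ignored) {
            }
            ran.incrementAndGet();
        };
        pool.execute(task);            // taken directly by the single worker
        pool.execute(task);            // parked in the queue
        pool.execute(task);            // queue full -> silently discarded
        gate.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return ran.get();              // 2, not 3
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks executed: " + runDemo());
    }
}
```

So "expecting the tasks to be queued instead of rejected" only holds while the queue has room; once `maxQueueSize` is exceeded during a burst, discard means lost work, which is one plausible contributor to the 5% SLA failures.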
Namespace cannot directly contain members such as fields or methods I get this error when I just add a class in Visual Studio Community Edition 2017. using System; using System.Collections.Generic; using System.Text; namespace CSU-app1 { class Class1 { } } Invalid identifiers A hyphen is not a valid character to use as part of a namespace name. Consider the problem the compiler would have when trying to distinguish between a hyphen as part of a name like CSU-app.Class1.PropertyName and a minus sign. Tips: pay attention to the line number where an error comes from - it may help you to diagnose the problem. And when asking a question about an error, always indicate which line the error occurred on - it may help others to diagnose the problem. Thanks, that works. Funny it worked when it was a single file. A namespace is a logical space, i.e. it behaves like folders on local disk C or D: it divides the space in the drives but has no existence of its own other than its name, so the curly brackets will show an error in your case.
Heroku CSS file not updating Regarding a Facebook app I am currently building: I changed some of the code in the base.css file in Heroku, and when I use the commands: git commit -am git push heroku The page still looks the same after reloading, and I get the message "Everything up-to-date". What am I missing/doing wrong? The location of the file is: stylesheets/base.css I also tried: git push heroku stylesheets but that didn't help could you post the output of git status? @MaxGherkins I did a git status and am seeing nothing added to commit but untracked files present <use "git add" to track> maybe try those: git add stylesheets/base.css, git commit -m 'stylesheet_or_whatever_message', git push origin master ... @MaxGherkins I also did a git add base.css, then status showed it had changed, yet when I did git push heroku it did not change. Should I be using the command git push heroku stylesheets or git push heroku base.css? it's add, commit, push. If you've done the first two, git status should tell you that you're n commits ahead. I think if you cloned a Heroku default app, the push syntax should be git push origin master @MaxGherkins I did all that, still not working. I see the words stylesheets/base.css in RED @MaxGherkins Something really bizarre is going on and I'll need to research this. In my index.php file is an HTTP call to stylesheets/screen.css and in that file is @import url("base.css"); I suspect it being the problem. Instead, I just added another call to another stylesheet in my index.php file and the content is OK. Scratching my head as to what the issue is with the @import url. I think you might have a conflict in your base.css file. This would result in conflict markers breaking the syntax: http://www.kernel.org/pub/software/scm/git/docs/v1.7.3/user-manual.html#resolving-a-merge. @MaxGherkins Looking into it now, thanks for the link and the help, cheers. I was having the same problem with Django on Heroku.
After following the standard git add, commit, push loop, the server's CSS was not updating. Everything, including static files, had been pushed to the server. Restarting the Heroku server worked though: heroku restart Give that a go! For me it was Cloudflare caching my styles. For me too. The solution is to either go into Development Mode or purge the cache. Both can be done through the dashboard in Cloudflare. I had this problem recently and in my case it was a case-sensitivity issue. The page referred to the file as theFile.css but the file was actually thefile.css. On OS X with a case-insensitive file system it was working, but when put on Heroku it broke. Just in case this helps anyone. You should add each changed path before you commit: Example: git add css Then commit and push, and your local CSS file will work fine. But if you use Bootstrap or Material-UI in your project then make sure you used CDN links in the index.html file. After that it will work fine. Thank you I know this is an old thread, but I wanted to suggest two other options if none of the above answers are helping someone get changes to their CSS reflected on Heroku: If you are working in Django and your CSS files are stored in your static folder, depending on your configuration you may need to run python manage.py collectstatic on your local machine before git adding (git add .), git committing (git commit -m "collecting static files") and git pushing to heroku (git push heroku main or git push heroku master) Be sure if you are pushing to Heroku using git that you are pushing only from your main or master branch; I don't believe Heroku will accept other branch names.
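The "nothing added to commit but untracked files present" message in the comments is the crux: `git commit -am` only stages files git already tracks, so a brand-new stylesheet never reaches Heroku until it is explicitly added. A throwaway-repo sketch (paths are illustrative):

```shell
# Fresh repo standing in for the app; a new stylesheets/base.css is
# untracked, so "git commit -a" has nothing to commit until "git add".
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir stylesheets
echo 'body { color: red; }' > stylesheets/base.css
git commit -qam "try to commit" 2>/dev/null || echo "nothing staged: file is untracked"
git add stylesheets/base.css
git commit -qm "track base.css"
echo 'a { color: blue; }' >> stylesheets/base.css
git commit -qam "update base.css"   # -a works now that the file is tracked
git rev-list --count HEAD           # prints 2
```

Only after the commit exists does `git push heroku master` (or `main`) have anything new to deploy; if the page is still stale after a successful push, the cache-related answers above (heroku restart, Cloudflare) are the next thing to try.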
Step-by-step instructions to install HDF5 on Windows I need spreadsheet support in Tkinter. To install a module, tkintertable, available on GitHub, it requires a library called tables, which in turn requires HDF5. For installing HDF5, I downloaded its source below https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.1/src/CMake-hdf5-1.10.1.zip for building it through CMake. After putting it in a folder I ran one of the batch files, but it shows the following error: Size of output: 0K [ERROR_MESSAGE] Error(s) when building project [HANDLER_OUTPUT] 1 Compiler errors 1 Compiler warnings Test project …../GUI/CMake-hdf5-1.10.1/build [ERROR_MESSAGE] No tests were found!!! Would someone mention the step-by-step process to install HDF5 on Windows? I would appreciate it a lot.
Why did someone downvote my question without reading it completely? I had asked this question today. Within 20 seconds someone had downvoted the question. Here's the proof: It's not possible for a person to read the whole question in 20 seconds. So why did they downvote? Is it OK to downvote like this? That question can be read in 20 seconds. (I was not the downvoter. But the downvoter did nothing wrong.) All disciplines have their specialised language, i.e. terms that have precise meanings. Some of these terms may be unique to the discipline while others will be terms used in everyday life but repurposed to have a specific meaning that differs from the everyday meaning. This applies whether we are talking about physics or silkworm farming. To people skilled in the discipline it is quickly and easily apparent when someone is using terms they don't understand. Whether "quickly" means "twenty seconds" is debatable, but it doesn't take very long for a physicist to spot that in your question you are using physical terms in an inappropriate way. I'm not criticising you for this because no-one was born knowing physics, but it does mean that your question is meaningless at first glance and that is probably why it attracted the quick downvote. It requires some effort to read through what you have written and try to work out what you are actually asking. So the quick downvote doesn't indicate malpractice - just impatience. I think sometimes people without the "close vote" privilege downvote instead. I don't think that's a good idea, as downvotes are a good deal worse than close votes. @StephenG I disagree; most people care more about getting an answer than rep, and closing a question directly prevents you from getting an answer. @EkadhSingh Downvotes are worse for the person being downvoted. Apart from the morale hit they take, they can, AFAIK, be banned if they get enough downvotes on their questions.
I reserve downvotes for really bad questions or where the OP has clearly declined to make needed changes. Closed questions can be reopened (typically after edits are applied and enough people vote to reopen).
Resuming a for loop if a JSONException occurs I'm parsing the JSON response from Foursquare. There are some JSON objects that are missing, so I catch the exception. But how do you continue a for loop if an exception occurs? if (length > 0) { for (int i = 0; i < length; i++) { JSONObject venueObject = venues.getJSONObject(i); String id = venueObject.getString("id"); String name = venueObject.getString("name"); JSONObject location = venueObject.getJSONObject("location"); String lat = String.valueOf(location.getDouble("lat")); String lng = String.valueOf(location.getDouble("lng")); HashMap<String, String> venue = new HashMap<>(); venue.put("id", id); venue.put("name", name); venue.put("lat", lat); venue.put("lng", lng); String address = ""; try { JSONArray addressArray = location.getJSONArray("formattedAddress"); address = addressArray.getString(i); } catch (JSONException j) { address = location.getString("country"); } finally { venue.put("address", address); venueList.add(venue); //continue for loop somehow } } } When you say some objects are missing, would it be better to configure your mappings to ignore missing properties? Once you've mapped everything as Java objects, you can then validate as required. I can't ignore the address since it's required for my app. Since the location always has a country JSON object, I'm getting that as an alternative if the formattedAddress is missing. You don't need to do anything - if a JSONException is thrown, the catch and finally blocks execute and the code continues executing as before, so the for loop continues. One thing though - if you expect some JSON objects to be absent, it would be better to check before trying to access them than to access them and catch the exception.
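To make the answer's point concrete without pulling in a JSON library — an exception caught inside the loop body does not stop the loop; after the catch (and finally) run, the next iteration starts normally. The `getField` helper below is a hypothetical stand-in for `JSONObject.getString()`, and the fallback mirrors reading `"country"` instead; in real org.json code, `has()` or `optString()` give the check-before-access style the answer recommends:

```java
import java.util.ArrayList;
import java.util.List;

public class LoopContinueDemo {
    // Stand-in for getString(): throws when the field is absent.
    static String getField(String raw) {
        if (raw == null) throw new IllegalStateException("field missing");
        return raw;
    }

    // One entry is "missing" (null), yet all three iterations complete:
    // the catch supplies a fallback and the loop simply continues.
    static List<String> collect() {
        String[] formattedAddresses = {"12 Main St", null, "99 High St"};
        List<String> addresses = new ArrayList<>();
        for (String raw : formattedAddresses) {
            String address;
            try {
                address = getField(raw);     // may throw, like getJSONArray/getString
            } catch (IllegalStateException e) {
                address = "SomeCountry";     // like falling back to "country"
            }
            addresses.add(address);          // runs every iteration
        }
        return addresses;                    // 3 entries: nothing was skipped
    }

    public static void main(String[] args) {
        System.out.println(collect());
    }
}
```

No explicit `continue` statement is needed; `continue` is only for skipping the rest of an iteration early.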
X labels not right! pgfplots bar chart I am pretty new to pgfplots, and I am attempting to make a bar chart with it, but the xtick labels are not appearing with respect to the bars. Code: \documentclass{amsart} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{float} \usepackage{caption} \usepackage{pgfplots} \pgfplotsset{width=12cm,compat=1.13} \begin{document} \begin{figure} \captionsetup{labelformat=empty} \caption{What Do You Do When You Are Bored?} \begin{tikzpicture} \begin{axis}[ ybar, grid style=dashed, axis lines=left, enlargelimits=0.5, ymin=0, ytick={0,.1,.2,.3,.4}, bar width=1cm, xtick={18 \& under,19-64 ,65+,all ages} axis lines=left, legend style={at={(0.5,-0.2)}, anchor=north,legend columns=-1}, ylabel={Popularity}, symbolic x coords={18 \& under,19-64,65+,all ages}, xtick={18 \& under,19-64,65+,all ages}, nodes near coords, nodes near coords align={vertical}, x tick label style={rotate=45,anchor=east}, ] \addplot coordinates {(18 \& under,.314)}; \addplot coordinates {(19-64,0.2058333333333)}; \addplot coordinates {(65+,.18)}; \addplot coordinates {(all ages,.2255)}; \legend{Texting, Watching TV, Reading, Talking with friends} \end{axis} \end{tikzpicture} \end{figure} \end{document} This produces: However, as you can see, the x labels are not anywhere near the bars. Any help would be great. As a side note, how would I decrease the amount of space between the x axis and the beginning of the bars? The problem with the xticklabels is essentially the same as in http://tex.stackexchange.com/questions/335126/how-to-center-labels-on-x-axis/335130#335130: use a single \addplot. For the spacing, instead of enlargelimits=0.5, use enlarge x limits=0.5, and optionally add enlarge y limits=<some value> if you need more vertical space. This is my solution: I added enlarge y limits and enlarge x limits and ybar=-1cm in the axis options. The first two commands set the x and y limits of the plot and the last one sets the space between the bars.
I also included title=... inside the plot instead of using the figure environment and adding a caption. I hope it's what you were looking for. \documentclass{amsart} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{float} \usepackage{caption} \usepackage{pgfplots} \pgfplotsset{width=12cm,compat=newest} \begin{document} \begin{tikzpicture} \begin{axis}[ title=What Do You Do When You Are Bored?, ybar=-1cm, % because of bar width=1cm axis lines=left, enlarge y limits=0.1, enlarge x limits=0.2, ymin=0, ytick={0,.1,.2,.3,.4}, bar width=1cm, legend style={at={(0.5,-0.2)}, anchor=north,legend columns=-1}, ylabel={Popularity}, symbolic x coords={18 \& under,19-64,65+,all ages}, xtick={18 \& under,19-64,65+,all ages}, nodes near coords, nodes near coords align={vertical}, x tick label style={rotate=45,anchor=east}, ] \addplot coordinates {(18 \& under,.314)}; \addplot coordinates {(19-64,0.2058333333333)}; \addplot coordinates {(65+,.18)}; \addplot coordinates {(all ages,.2255)}; \legend{Texting, Watching TV, Reading, Talking with friends} \end{axis} \end{tikzpicture} \end{document}
QueryProperty on ViewModel gets assigned to the variable too late in .NET MAUI? I am calling a page from my main page and I am using QueryProperty to pass a string. I want to evaluate this string and create a different list of objects depending on which string was passed, so that I can reuse this same page for different things. The problem is that when the EnterValueViewModel constructor runs and I want to check my passed string, the value is still null. How can I run the if statement after the value has been assigned? [QueryProperty("TypeOfCaller", "TypeOfCaller")] public partial class EnterValueViewModel : ObservableObject { [ObservableProperty] string typeOfCaller; [ObservableProperty] ObservableCollection<string> listOfObjects; public EnterValueViewModel() { if (typeOfCaller == "PermitHolder") { listOfObjects = new ObservableCollection<string>(PermitHolderRepository.GetPermitHolders().Select(p => p.Name)); } else if (typeOfCaller == "Company") { listOfObjects = new ObservableCollection<string>(CompanyRepository.GetCompanies().Select(p => p.Name)); } else if (typeOfCaller == "Facility") { listOfObjects = new ObservableCollection<string>(FacilityRepository.GetFacilities().Select(p => p.Name)); } } I tried putting the if statement inside a method, but I don't know when to call it. I need an event that executes after the page is loaded but before the user can interact with it. the constructor executes before any properties are set. Since TypeOfCaller is observable, you can just attach a handler to its PropertyChanged event and execute your code when that fires To execute code when an observable property changes, use one of the auto-generated methods for the property. https://learn.microsoft.com/en-us/dotnet/communitytoolkit/mvvm/generators/observableproperty Also, why are you using an ObservableCollection if you're going to recreate it every time? Just create the collection once and update it as required.
public partial class EnterValueViewModel : ObservableObject { [ObservableProperty] string typeOfCaller; public ObservableCollection<string> ListOfObjects {get;} = new ObservableCollection<string>(); partial void OnTypeOfCallerChanged(string newValue) { ListOfObjects.Clear(); List<string> newObjects; if (newValue == "PermitHolder") { newObjects = PermitHolderRepository.GetPermitHolders().Select(p => p.Name).ToList(); } else if (newValue == "Company") { newObjects = CompanyRepository.GetCompanies().Select(p => p.Name).ToList(); } else if (newValue == "Facility") { newObjects = FacilityRepository.GetFacilities().Select(p => p.Name).ToList(); } else { return; } foreach(string newObject in newObjects) { ListOfObjects.Add(newObject); } } } Thank you, I'm still wrapping my head around MVVM, but it is working now. @Piet Do you mean you can get the typeOfCaller value and run the if statement successfully now?
Wordpress child theme for second installation We have WordPress installed for our main website with its own theme. I now want to install another WordPress in a sub-folder, where I want to use the main website's theme. So I've made a new theme which includes 3 files: index.php style.css functions.php I'm referring to the main website's theme files (header.php for example) in the new installation's index.php file: <?php //Checking if file exists if ( file_exists( get_stylesheet_directory() . '/../../../../wp-content/themes/First/header.php') ) { //Require file if it exists require_once( get_stylesheet_directory() . '/../../../../wp-content/themes/First/header.php' ); } else { /* Echo something if the file doesn't exist; if the message wasn't displayed and you still get a 500 error then there's something wrong in the PHP file above*/ _e('File not found'); } ?> However, this returns a 500 error. Anyway, can I do something like this? That is, use another WordPress installation's theme on a new installation of WordPress in a folder? Please use the register_theme_directory() function and use absolute paths. Please also see this: https://wordpress.stackexchange.com/questions/83102/how-do-you-change-the-theme-location Should I use this function in my functions.php file?
How do I react to a QML button click in C++ I am trying to launch a different QML page from my C++ code by hooking into the clicked() signal of a button in my QML, but it's not working. Button { objectName: btnLogin text: qsTr("Login") id: btnLogin } And the C++ QObject *newButton = root->findChild<QObject*>("btnLogin"); QObject::connect(newButton, SIGNAL(clicked()), this, SLOT(loginClick())); The slots in my hpp file: public slots: void loginClick(); And my clicked method: void GConnectBB::loginClick() { int i = 0; Button *newButton = root->findChild<Button*>("btnLogin"); if (newButton) newButton->setProperty("text", "New button text"); } QObject *newButton = root->findChild<QObject*>("btnLogin"); is null when I check through the debugger. I am extremely rusty with C++ and completely new to Qt, please be gentle :) What could I be doing wrong? Isn't this considered bad practice (accessing UI elements from C++)? I've been trying to learn how to connect my C++ object's method to a QML object's signal... @JoaoMilasch yes it is. This was just an example. The primary goal was to get the C++ code to react to the button click. You should surround the object name with quotation marks: Button { objectName: "btnLogin" ... ... } I guess this mistake comes from the fact that the id property doesn't have quotation marks.
Ember model.get is not a function Why does model.get('property') sometimes work in setupController on my product route, while other times I have to retrieve properties with model.property? It throws an error: model.get is not a function... Why is this happening? Any clues? Details: Product Route - model: function(params) { return this.store.find('product', params.product_id); }, setupController: function(controller, model){ this._super(controller, model); var type = model.get('publisher').get('entity_type'); } Index Route - model: function(params){ return Ember.RSVP.Promise.all([ this.store.find('groupedItem', { group_code: 'HOME', position : 1 }), this.store.find('groupedItem', { group_code: 'HOME', position : 2 }), ]) } What's your model hook look like? Can you also post the part where you're calling model.get()? You posted the setupController for a different route. The problem seems to be with POJOs; I don't know exactly when I get a POJO from the Ember Data local store, so I ended up retrieving values from the model with Ember.get(model, 'attribute') I already fixed my issue temporarily, but I would like to know why this behaviour happens... You're calling an asynchronous method in your setupController hook which Ember isn't expecting. You're probably trying to call model.get() before you've actually placed the model on the controller. That kind of asynchronous operation should be happening in the model() hook, not in the setupController() hook. model: function() { return Ember.RSVP.Promise.all([ this.store.find('groupedItem', { group_code: 'HOME', position : 1 }), this.store.find('groupedItem', { group_code: 'HOME', position : 2 }) ]); }, // This is actually the default behavior of this method // So you don't have to override it if you don't want to setupController: function(controller, model) { // The `model` is your `values` controller.set('model', model); }
Ember Data would always return an Ember object unless you've overridden something. Can you show an example of code that doesn't return one? RSVP.Promise returns POJOs for things like RSVP.hash, etc. That's not really Ember Data. RSVP is just an ES6 Promise polyfill. EDIT: I thought you were OP. Obviously you know that. :p
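A plain-JS sketch of the POJO point (no Ember required): Ember Data records expose a `.get()` method, but `RSVP.Promise.all` resolves to a plain JavaScript Array, which has no such method — which is why `model.get('...')` blows up on the index route's model while `Ember.get(obj, 'key')`, which works on both kinds of object, does not. The `emberLikeRecord` and `emberGet` below are crude illustrative stand-ins, not Ember's actual implementation:

```javascript
// A record-like object with Ember-object-style .get() access.
const emberLikeRecord = {
  attrs: { entity_type: 'publisher' },
  get(key) { return this.attrs[key]; },
};

// What Promise.all hands back: a plain Array (a POJO), no .get().
const modelFromPromiseAll = [emberLikeRecord];

// Rough imitation of Ember.get: prefer .get() when the object has one,
// fall back to plain property access otherwise.
function emberGet(obj, key) {
  return typeof obj.get === 'function' ? obj.get(key) : obj[key];
}

console.log(typeof modelFromPromiseAll.get);             // 'undefined'
console.log(emberGet(emberLikeRecord, 'entity_type'));   // 'publisher'
```

So the "sometimes .get works, sometimes not" symptom tracks exactly which route's model hook produced the object, not anything random in Ember Data.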
How to get values from an object without the object name in Javascript The question sounds weird, I know. But it's a weird question. Let me clarify. I'm using the Facebook Graph API. The feeds for my page are returned in JSON and I have this bit of JSON here: "message": "A3Media Uk Website is fully up and running! Tell your friends, We can't make Beautiful Websites without clients!\n - Alex Morley-Finch", "message_tags": { "116": [ { "id": "514033508", "name": "Alex Morley-Finch", "offset": 116, "length": 17 } ] } So from my knowledge, there is an object called message_tags which contains an array of objects called 116, and index 0 of this array contains an object with the variables id, name, offset and length. Now what I want to do is, obviously, replace the text "Alex Morley-Finch" within the message variable with the tag "name": "Alex Morley-Finch". Then using the id, length and offset I can replace the text with an HTML link to that profile using the ID! This all seems pretty simple; however, I obviously want my code to be dynamic so the code will work for ANY tag at ANY position. The name of the object array "116" always matches the offset contained inside it. The actual question: How can I dynamically get the name of the object array (in this case '116')? Because my code would be something like (pseudo code): if message has tag get name of tag if message contains name of tag replace message name using offset and length with html link tag with href = facebook url / id end if end if This would leave me with my HTML representation of the "message". The thing is, I can't get the name of the message_tag because I'd have to do something like: // data[index] represents the current message var json = JSON.parse(XmlHttpResponse.responseText); json.data[index].message_tags.116[0].name; as we can see this is not dynamic. This code will ONLY work for this tag. So how do I get the name without referencing 116? Is it even possible?
I was thinking about iterating through 1, 2, 3, 4, 5 --> 116, but that would be very costly in performance AND bad coding AND I'm not even sure if you can reference arrays through variable names... ... ... I'm really stumped. Please help! Alex You can simply iterate over the properties of an object with a for...in loop. Side note: Your problem is actually not related to JSON but to how to access the properties of a JavaScript object. Follow the link (for...in loop is a link). But surely I'd still need to use the "116"? for (var s in 116){} No, you iterate over the properties of the object in message_tags. This will loop through all of the tags. You might want to add code to check it's the one you want to change in case there are multiple tags. for (var tag in message_tags) { message_tags[tag].name = "..."; } For..in solution Use the for..in JavaScript statement to traverse your object's properties and then act upon those results.
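Putting the pieces above together — `Object.keys()` (the modern equivalent of the for...in loop in the answer) yields the property names of `message_tags` without hardcoding "116", and each key maps to an array of tag objects carrying id/name/offset/length. The field names match the Graph API excerpt in the question; the `https://www.facebook.com/<id>` link shape is an assumption for illustration:

```javascript
const post = {
  message: "A3Media Uk Website is fully up and running! Tell your friends, We can't make Beautiful Websites without clients!\n - Alex Morley-Finch",
  message_tags: {
    "116": [{ id: "514033508", name: "Alex Morley-Finch", offset: 116, length: 17 }],
  },
};

function linkify(post) {
  let html = post.message;
  // Collect every tag from every dynamically-discovered key, then
  // replace right-to-left so earlier offsets stay valid.
  const tags = Object.keys(post.message_tags)
    .flatMap(key => post.message_tags[key])
    .sort((a, b) => b.offset - a.offset);
  for (const t of tags) {
    const link = '<a href="https://www.facebook.com/' + t.id + '">' + t.name + '</a>';
    html = html.slice(0, t.offset) + link + html.slice(t.offset + t.length);
  }
  return html;
}
```

Calling `linkify(post)` leaves the message text intact except for the tagged span, which becomes the anchor element.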
Not able to start MongoDB instance - Starting database mongod [fail] I downloaded MongoDB on my WSL on Win 11, following the step-by-step guide from the Microsoft website. But when I run the command: sudo service mongodb start It returns this: Starting database mongod [fail] I don't know what to do; I've tried several things. I already uninstalled and installed it again. I expected Mongo to boot normally, but it doesn't. I can't even get into the Mongo shell. When I run mongo, it returns this: mongo Command 'mongo' not found, did you mean: command 'mono' from deb mono-runtime (<IP_ADDRESS>+dfsg-3.2) Try: sudo apt install <deb name> So if I look at my packages, I have it installed: dpkg --list | grep mongo ii mongodb-database-tools 100.7.0 amd64 mongodb-database-tools package provides tools for working with the MongoDB server: ii mongodb-mongosh 1.8.0 amd64 MongoDB Shell CLI REPL Package ii mongodb-org 6.0.5 amd64 MongoDB open source document-oriented database system (metapackage) ii mongodb-org-database 6.0.5 amd64 MongoDB open source document-oriented database system (metapackage) ii mongodb-org-database-tools-extra 6.0.5 amd64 Extra MongoDB database tools ii mongodb-org-mongos 6.0.5 amd64 MongoDB sharded cluster query router ii mongodb-org-server 6.0.5 amd64 MongoDB database server ii mongodb-org-shell 6.0.5 amd64 MongoDB shell client ii mongodb-org-tools 6.0.5 amd64 MongoDB tools I do not know what to do, can someone help me? I tried these: https://stackoverflow.com/questions/9884233/mongodb-service-is-not-starting-up https://stackoverflow.com/questions/18524925/mongodb-service-wont-start https://stackoverflow.com/questions/60309575/mongodb-service-failed-with-result-exit-code Regarding mongo not found: https://stackoverflow.com/questions/73081708/mongo-exe-not-installed-in-version-6-0-0/73084403#73084403. Starting database mongod [fail] - you should provide logs
Regex whitespace (\s) handling all spacing types I'm very new to these, but I have a regular expression/replace function: string.replace(/\s{10,}/gi, ' '); because I have a text string that is out of control with a combination of whitespace, tabs, and line breaks. The problem I am facing is that the above expression handles too much. I have tried dialing it back to \s{1,} to alleviate this, but it reduces even line breaks and tabs down to a single space. I would like to handle those separately with different rules. It seems like this rule is overriding anything I try in order to handle the other spacing types. What exactly do you want your regex to accomplish? Do you want to replace just simple blank spaces? Please specifically tell us what you want your regex to do. Give examples. If you want a certain group of characters matched, use the [] syntax; put each character you want in the brackets. string.replace(/ +/g, ' '); Specify to us which set of chars you want to replace. Give us some examples of the original string and the expected resulting string. The { } notation says how many repetitions to match. When you use the form {n,} that means "match at least n repetitions, up to any number". The "g" modifier on your regular expression says to carry out the replacement for every match. How about just formatting the string properly to begin with? That is the whole point of \s: to handle any whitespace. See Shorthand Character Classes for more information. You can use an actual space character to match spaces only: var spaces = / /g; // a valid regex For other types, consider a character class: /[\t\r\n]/ // other whitespace characters
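Combining the suggestions above into one sketch — literal spaces, tabs, and newlines each get their own rule instead of the catch-all `\s`, so runs of blanks collapse to one space while blank lines collapse to a single newline (the exact rules here are illustrative; also note `{1,}` is just `+`, and the `i` flag does nothing for whitespace):

```javascript
// Handle each whitespace type separately rather than with one \s rule.
function tidy(text) {
  return text
    .replace(/[ ]{2,}/g, ' ')        // runs of plain spaces -> one space
    .replace(/\t+/g, ' ')            // runs of tabs -> one space (own rule)
    .replace(/(\r?\n){2,}/g, '\n');  // blank lines -> a single newline
}

const messy = 'hello     world\t\tfoo\n\n\nbar';
console.log(JSON.stringify(tidy(messy))); // "hello world foo\nbar"
```

Because each replace targets only its own character(s), a lone newline survives untouched — the behavior the question says `\s{1,}` destroys.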
Way for my texture to emit light/glow in the Eevee engine So, I'm working right now on a scene which ends with this red ruby, and I'm trying to find a way for it to glow while keeping all its details. I already went with the alpha-map trick but it doesn't give the finished version I want. Is there a step I'm missing, or is there another way? In Eevee you have a very fast bloom function: However, that makes every pixel bloom as long as it's brighter than the threshold. If you want to make single objects bloom you need to mask them some way and then use the mask for a glow effect in the compositor (post render). One way is to use the object pass index.
Is std::format going to work with things like ICU UnicodeString? Rather than a long preamble, here is my core question, up front. The paragraphs below explain in more detail. Is there a template parameter in std::format (or fmt) that will allow me to format into ICU UnicodeStrings, or perhaps into something like char16_t[] or std::basic_string<char16_t>, while using a Unicode library to deal with things like encoding and grapheme clusters? More Explanation, Background I see the C++20 standard has this std::format library component for formatting strings. (It's late in 2022 and I still can't use it in my compiler (clang from Xcode 14), and I'm curious about the cause of the delay, but that's another question.) I've been using this fmt library, which looks like a simpler preview of the official one. int x = 10; fmt::print("x is {}", x); I've also been using ICU's UnicodeString class. It lets me correctly handle all languages and character types, from ASCII to Chinese characters to emojis. I don't expect the fmt library to be aware of Unicode out of the box. That would require that it build and link with ICU, or something like it. Here's an example of how it's not: void testFormatUnicodeWidth() { // Two ways to write the Spanish word "está". char *s1 = "est\u00E1"; // U+00E1 : Latin small letter A with acute char *s2 = "esta\u0301"; // U+0301 : Combining acute accent fmt::print("s1 = {}, length = {}\n", s1, strlen(s1)); fmt::print("s2 = {}, length = {}\n", s2, strlen(s2)); fmt::print("|{:8}|\n", s1); fmt::print("|{:8}|\n", s2); } That prints: s1 = está, length = 5 s2 = está, length = 6 |está | |está | To make that width specifier work the way I want, to look nice on the screen, I could use ICU's classes, which can iterate over the visible characters ("grapheme clusters") of a string. I don't expect std::format to require Unicode either. From what I can tell, the C++ standard people create things that can run on small embedded devices. That's cool.
But I'm asking if there will also be a way for me to integrate the two, so that I don't have a split world between C++'s strings and format on one side, and ICU strings on the other when I want things to look right on screen. std::format() only supports char (std::string) and wchar_t (std::wstring) data. Interesting. Some of the other types in the library, like basic_format_context, have a template argument for the character type. Yes, but that does not mean it is specialized for char16_t/char32_t. In fact, it is not; it is specialized for char/wchar_t only. Most things in the standard library that deal with text are like that. There is actually very little support for char16_t/char32_t in the standard library. I just found this: https://lists.isocpp.org/mailman/listinfo.cgi/sg16 It looks like people are actively working on it, which I'm happy to see. There's a link to GitHub there with more info and papers. {fmt} doesn't support ICU UnicodeString directly but you can easily write your own formatting function that does. For example: #include <fmt/xchar.h> #include <unistr.h> template <typename... T> auto format(fmt::wformat_string<T...> fmt, T&&... args) -> UnicodeString { auto s = fmt::format(fmt, std::forward<T>(args)...); return {s.data(), s.size()}; } int main() { UnicodeString s = format(L"The answer is {}.", 42); } Note that {fmt} supports Unicode but width estimation works on code points (like Python's str.format) instead of grapheme clusters at the moment. It will be addressed in one of the future releases. That doesn't compile for me. I think because wchar_t is 32 bits on my platform. Is it 16 on yours? This is just an example based on https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/classicu_1_1UnicodeString.html#a16a50d4b0452adbbf960d63059362f07. You could use char16_t instead by replacing fmt::wformat_string<T...> with fmt::basic_string_view<char16_t> and L"..." with u"...". Cool, that works. One more followup question...
is there a way to define a formatter specialization to allow std::string or std::u8string as args, without getting the "Mixing character types is disallowed." error? E.g., format(u"{}", utf8Str). I tried defining a specialization like template <> struct fmt::formatter<std::u8string, char16_t>, but it seems to have no effect.

{fmt} disallows implicitly mixing code unit types, because mixing them involves transcoding. You can do it by providing a formatter for a wrapper type that makes the transcoding explicit.
On exit event for Windows Phone 8

How can I register an OnExit event for my Windows Phone 8 app, to do some tasks before the user exits the application? I tried using:

    Application.Current.Exit

But Visual Studio says it won't be called for Windows Phone. Any help? Thanks

You have a PhoneApplicationService.Closing event. In the autogenerated App.xaml.cs you should already have a hooked-up Application_Closing handler. However, you should not use it, as:

If your app is terminated by the operating system while it is not in the foreground, the Closing event will not be raised. Your app can rely on receiving the Deactivated event, but not Closing. For this reason, you should perform all essential cleanup and state maintenance tasks in the Deactivated event handler.

Put your tasks in the auto-generated Application_Deactivated method.

Hi. I don't have any Application_Closing handler by default. So I tried registering one with PhoneApplicationService, but there was no Closing event there. So I added one with PhoneApplicationService.Current. But now it's not getting called, neither the PhoneApplicationService.Current.Closing event nor the PhoneApplicationService.Current.Launching event. Help?

Your App.xaml should have:

    <Application.ApplicationLifetimeObjects>
        <shell:PhoneApplicationService
            Launching="Application_Launching" Closing="Application_Closing"
            Activated="Application_Activated" Deactivated="Application_Deactivated"/>
    </Application.ApplicationLifetimeObjects>

Will it be called even if I call Application.Current.Terminate()?

@PratPor I never used Terminate, so you will have to check that yourself. But I would advise you not to exit the application programmatically; rely on the standard Windows Phone navigation model instead.

I'm assuming this is a Windows Phone 8.0 Silverlight application, since you didn't state otherwise. In your App.xaml.cs there should be an Application_Closing() event. You'd want to run these tasks there.
Just beware that you only have a few seconds to run code before the OS kills your process.