Installing and using Ubuntu 13.10 has some pitfalls. This is a diary of my installation attempt. Although I consider my installation reasonably stable, and I have already put all the necessary files and configs in place, I am not sure whether this installation is final.
Identified and Solved Problems with Ubuntu 13.10
– Display only works when the following kernel parameter is set: acpi_backlight=vendor
– The kernel module ideapad_laptop is loaded automatically, which stops the WLAN and display brightness keys from working. Blacklisting or rmmod-ing the kernel module fixes this.
Resizing Windows Partition
As described, using Windows 8.1 is not my cup of tea. Resizing the Windows partition was quite an adventure, because I could not find the Computer Manager I was used to from Windows XP and 7. However, after switching to the Classic Desktop and right-clicking the Windows (former Start) symbol in the lower left corner, I found the Computer Manager. In the disk management view, the Windows partition could be resized down to half of the 256 GB SSD.
Plug in the Ubuntu USB stick before you proceed.
To open the BIOS/EFI settings, you need to power down the system and start it up with the separate, very tiny key to the right of the power key.
In BIOS settings, disable FastBoot and change the boot priority to start up from USB first.
Booting Ubuntu Installer from USB
While the Grub menu from the USB stick is on display, press e to edit the Grub options. On the kernel load line, add 'acpi_backlight=vendor' before the 'quiet' option. Press F10 to boot with these settings – don't forget to hold the Fn key to get F10.
Before Installing: WLAN and Display Resolution
Use the Try Ubuntu mode to change some settings before you install Ubuntu to disk.
Ubuntu uses the maximum resolution by default. However, 3200×1800 on 13.3″ is too high a DPI to really use. Use the Display Settings to switch back to Full HD 1920×1080.
To enable Wifi, unload the ideapad_laptop kernel module by typing:
sudo rmmod ideapad_laptop
Give it some time, then you can select your Wifi in network manager as usual.
Use gparted to reduce the Windows partition to the desired value.
Then, install Ubuntu as usual.
Starting Up the First Time
While rebooting into your newly installed Ubuntu, re-add the kernel option 'acpi_backlight=vendor'. Since Grub is using the full resolution and does a full refresh on every keystroke, Grub is practically unusable. Have patience.
Grub Resolution and Kernel Tweaking
On your newly booted Ubuntu installation, edit /etc/default/grub to contain the following:
GRUB_CMDLINE_LINUX_DEFAULT="acpi_backlight=vendor quiet splash"
Run sudo update-grub to make those settings persistent.
To get the Wi-Fi and screen brightness buttons working, blacklist the 'ideapad_laptop' kernel module. In /etc/modprobe.d/ I created a file "blacklist-ideapad_laptop.conf", and in it, I put "blacklist ideapad_laptop".
Like before, a display resolution of 1920×1080 seems very comfortable and cozy on this ultrabook. Using the full resolution with increased font sizes does not work as expected. On Gnome 3 Flashback, the top panel is empty afterwards.
Unresolved Problems so Far
– Touchpad is way too sensitive; it is hard to press a button without moving the cursor
– Using full resolution
– Switching Function Keys / Multimedia Keys
– Rotating the display does not rotate the touch screen, so tent mode is not really usable.
I based my installation attempt on the following sources:
Thank you so much for your helpful descriptions!
As you are probably aware by now, a number of critical CPU vulnerabilities, dubbed Meltdown and Spectre, have recently been discovered. If you haven't already done so, you may want to visit https://meltdownattack.com/ for a high-level overview of these issues. Long story short: these vulnerabilities can be used by attackers to read memory to which they would otherwise not have access (even that of another VPS running on the same host system). Sensitive information such as private keys and passwords is therefore at risk.
Especially critical is the Meltdown variant, to which all CPUs made by Intel are vulnerable. A mitigation has been made available which works around this issue by splitting up user and kernel page tables using a feature called page table isolation (PTI). We have deployed kernel updates containing this feature to all Tilaa infrastructure and made changes to our platform to expose the PCID CPU feature to each VPS, which limits the performance hit caused by PTI; despite that, some reduced performance is to be expected.
The two Spectre variants seem to be more difficult to successfully exploit but are unfortunately also more difficult to fix. The full impact of the Spectre vulnerabilities is still not completely known and work to mitigate Spectre attack vectors is still in progress by developers world-wide, including Intel and the Linux community. The fixes that have been made available for known attacks have been tested and deployed, but we expect more updates to become available over the next couple of months and will evaluate and deploy them as soon as we can.
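For readers who want to check which mitigations their own Linux VPS kernel reports, sufficiently recent kernels (4.15 and later) expose this under /sys/devices/system/cpu/vulnerabilities. A small illustrative sketch; the path is parameterized, and on older kernels the directory simply does not exist:

```python
import os

def read_mitigations(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return a dict mapping vulnerability name -> status string reported by the kernel.

    Kernels that predate these sysfs files do not provide the directory,
    in which case an empty dict is returned.
    """
    status = {}
    if not os.path.isdir(base):
        return status
    for name in sorted(os.listdir(base)):
        with open(os.path.join(base, name)) as f:
            status[name] = f.read().strip()
    return status

if __name__ == "__main__":
    for name, state in read_mitigations().items():
        print("%s: %s" % (name, state))
```

On a patched host, the output typically includes a line such as `meltdown: Mitigation: PTI`.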
It's difficult to determine the performance hit caused by the mitigation patches, since it depends on the OS and the specific workload running on the VPS, but in the worst case you can expect to see between 10% and 30% performance loss.
Unfortunately there isn't much we can do about that except hope that future patches will restore some of the lost performance. You might have to (temporarily) upgrade the CPUs of your VPS to restore its performance to previous levels.
Due to the high risk of exploitation of these security bugs, we have to patch these issues as soon as possible. Unfortunately this will cause downtime, because it requires reboots of our host servers (to load a PTI-enabled kernel and new CPU microcode) as well as of the VPSs (to enable PCID). You can expect about one hour of downtime for each VPS.
The reboots will take place over the course of next week (January 15th - January 19th) during working hours starting Monday morning (9:00 to 18:00 in the Europe/Amsterdam timezone). Our whole team will be standby throughout the week to help out with any issues that might surface.
Due to the complexity of this large operation, it's unfortunately impossible for us to determine exactly when each host server will be rebooted. We will send out a maintenance notification before each host server reboot so that you are informed when the maintenance for each specific VPS is starting. We will do our best to prevent nodes of high-availability clusters from being rebooted simultaneously.
We understand that rebooting your VPS during working hours is inconvenient and we are doing our utmost to ensure you experience as little inconvenience as possible during this process.
We trust we have informed you sufficiently. If you have questions, please contact our support department. You can reach us by phone, email and through our social media channels. Thank you for your patience and understanding.
All hosts have been rebooted and the maintenance has been completed. Fortunately the performance hit seems to be barely noticeable and the updates seem to be stable.
The default text editor on the Mac is good, but after some time everybody gets bored of using the same old text editor for programming. There are many text editors available for the Mac, and we have added some of them to our list of the best text editors for Mac. But what if one wants to use Notepad++?
As we already know, Notepad++ is one of the best text editors, but it is only available for Windows users. There are some guides with which you can run Notepad++ on your MacBook. Apart from this, you can have a look at the alternatives to Notepad++ for Mac which we have added here.
Before coming to the main part, let's first read about what Notepad++ is.
What is Notepad++? And Why Do We Need an Alternative for Mac?
Notepad++ is a text and code editor which currently runs as a Win32/Win64 program. It is not available for Mac users, nor is there any news about when it will come to the Mac. As for features, Notepad++ comes with syntax highlighting, search, code autocompletion, find and replace, and PCRE (Perl Compatible Regular Expressions) support.
It's a free and open-source program for Windows programmers, first released in 2003 by Don Ho, and it is written in C++. Unfortunately, if you are a Mac user, you won't be able to easily run Notepad++ on your MacBook.
Wait, don't be worried: there are some editors which work perfectly well as alternatives to Notepad++ for Mac users. Let's check them out.
5 Best Notepad++ Alternatives For Mac:
Notepad++ is no doubt one of the best text editors available on the internet, but if you are on macOS then, unfortunately, it is not available for you. So let's have a look at the alternatives to Notepad++.
Brackets is a text editor developed by Adobe. It's one of the best free text editors available and a competitor to Notepad++. As for features, Brackets supports syntax highlighting, instant search, and find and replace. Moreover, it also supports free plugins and extensions that enhance what Brackets can do.
Most text editors cost money, but this one, developed by a top company like Adobe, is still free for programmers to use. It supports more than one language, and if you are a beginner I would especially recommend giving Brackets a try. Brackets is a perfect alternative to Notepad++.
You might have seen the paid Komodo IDE (Integrated Development Environment), which is aimed at larger programming projects and used by many professional programmers. Komodo Edit is its free, lighter version, available to all users on the internet.
One major drawback of Komodo Edit is that it lacks many of the more advanced features people look for. Still, it has all the basic features, which makes it a perfectly good text editor for your programming.
In Komodo Edit you get a number of plugins, themes and GitHub repositories to customize your text editor according to your own preference. It's available to download for free, and there is nothing wrong with giving it a try.
jEdit is an open-source text editor for programmers, maintained by professional programmers around the world. It's free software, but trust me, you are going to get awesome features if you choose jEdit over others. One of the best features of jEdit is its unlimited clipboard, meaning there are no limits on copying text, and you can quickly return to a marked position.
At first glance jEdit may look like a plain text editor, but it is maintained by professionals, and if you join their community you can learn even more about programming. jEdit is available to download for free from the download link given below.
Don't be confused by its name: it is not a detergent bar. Vim is a free text editor. If you are someone who is looking for just a basic text editor, Vim is for you. It has almost all the basic features a beginner needs. You could say it's a clone of the Unix vi editor.
Still, Vim can be considered one of the best alternatives to Notepad++. As for features, it comes with error detection, syntax highlighting and deep customizability. One of the best features of Vim is that you can even use it on a mobile device, as versions with a mobile UI exist.
Lastly, what more do you need in a free text editor? You can download Vim from the download link given below.
We already mentioned this text editor in our previous post about text editors. The reason we are mentioning it here is that Sublime Text is the one editor that can give real competition to Notepad++. It's a paid text editor costing around $70. It comes with syntax highlighting for many languages, smart editing, a search bar, and many other options. Furthermore, you can customize it according to your own preference with the available themes and extensions.
One of the best parts of Sublime Text is that it's available for every platform. Even if you are not a Mac user, you can still use it on other platforms like Windows and Linux. Sublime Text costs $70, but you can take a trial before purchasing it.
There are only a few alternatives to Notepad++ which can give it real competition. The last text editor we mentioned is the perfect alternative, but only if it fits your budget.
Furthermore, there is an ongoing discussion about launching Notepad++ for macOS. It would be a great success for Notepad++ if they did so, because it has almost all the good features we look for in premium software. If you are left with any doubts, you can ask us in the comments section.
You can find the Save As function on the File menu as well as the iManage tab. Save As enables you to save your existing document as a new document to iManage Work.
Using the iManage tab
On the iManage tab, select Save As. The iManage Save As dialog box appears.
You can perform the following tasks using the Save As dialog box:
Searching for folders
The Save As dialog box displays a list of recent matters by default. You can change this to a list of Recent Clients or Recent Folders by selecting from the panel on the left.
Use the navigation buttons to go back, forward, or up one container level. Use the search box to search for a specific location where you want to save your document (for example, a workspace or a folder).
For more information about using the search box, see Using the search box.
Creating a new folder
To create a new folder for saving the document:
Select Recent Matters and then double-click the desired matter. Recent Matters displays a list of all matters that contain a document or email you have created, opened, viewed, or modified in the last 30 days. The list of folders in the matter is displayed.
Select the New Folder button located above the list of folders.
In the Folder Name field, enter the name and select Create. The folder is added to the list of folders.
Filtering search results
The Filters function enables you to sort search results. The filters vary depending on the search criteria.
To apply filters to the search results:
Select Filters.
Select the category of filter that you would like to apply to expand the list of options. For example, Date.
Select one or more filter criteria from the options provided.
For more information on filter criteria, see Filtering content.
Displaying the documents in a folder
By default, the Save As dialog box displays only containers. To view the documents within a container, use the display toggle provided in the dialog box.
Saving a Document
After navigating to the desired container, save your document by entering or editing the name of the document in the Description field in the Properties panel displayed on the right.
The Properties panel also enables you to change other metadata and security. The Show More button enables you to view and modify additional document properties, such as the Client and Matter number. You can also modify the Security (see Modifying security).
Using the Backstage view
The iManage tab in the Microsoft Office Backstage view enables you to view, open, and manage all your iManage documents, and to save new documents to the desired iManage location. The iManage containers are sorted into different groups based on your recent activity, and you can select any item in the lists for quick access to the desired document. For example, the Offline option under Documents and Matters enables you to view and open the documents and matters that you recently accessed in offline mode.
Select the File tab to display the Microsoft Office Backstage view.
Select Save As, and select iManage. By default, the Folders tab is selected and lists the recent folders that you accessed, and the matter in which they reside. Alternatively, select the following tabs:
Matters: Lists the recent matters that you accessed, with the Recent Matters list displayed by default. Recent Matters displays a list of all matters that contain a document or email you have created, opened, viewed, or modified in the last 30 days. Select My Matters or Offline to view the other matters lists.
Browse: Displays the iManage Open dialog box with the Recent Matters tab selected by default.
Select an item in these lists. The iManage Save As dialog box displays the contents of the selected container. Navigate to the desired folder and select Save.
Alternatively, you can use the Save option in the main File menu in Office. This also displays the iManage Save As dialog box when saving a new document for the first time.
If you edit an existing document within iManage Work, selecting the Office File > Save option saves your changes locally. After closing the document, you are prompted whether you want to save the document or discard your changes. If you select Yes, changes are saved to iManage Work and the existing version is replaced.
We seem to have a number of problems.
1) Simply, we're out of RAM. We have been for a while, but now Linux is trying to do lots of silly things like sending a full gig to swap and then panicking, while every site I run basically dies as it tries to swap what it needs back in but can't, because Linux really likes to keep about 10% free just in case.
I -think- I've temporarily solved that issue by restarting most of our server's services (and then, because of #3, rebooting the server anyway : /). But this is just going to get worse, and more frequent, until we get more RAM. I'll be posting an announcement about this here and on Blue Moon soon.
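For anyone who wants to watch this kind of memory pressure, the numbers come straight from /proc/meminfo. A rough Python sketch; the field names are the ones /proc/meminfo uses, and the values are in kB:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into {field: value in kB}."""
    info = {}
    for line in text.splitlines():
        key, sep, rest = line.partition(":")
        if sep and rest.split():
            info[key.strip()] = int(rest.split()[0])
    return info

def swap_used_kb(info):
    """How much swap is currently in use, in kB."""
    return info["SwapTotal"] - info["SwapFree"]

# Synthetic example; on a live server you would parse open("/proc/meminfo").read()
sample = "MemTotal: 2048000 kB\nMemFree: 51200 kB\nSwapTotal: 1048576 kB\nSwapFree: 24576 kB\n"
assert swap_used_kb(parse_meminfo(sample)) == 1024000  # a full gig in swap
```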
2) Our I/O situation could be better on its own.
- I should have run the mailserver off the slave server from the beginning.
- I should never have used a RAID. At all.
Regrets : /
I could set up another server 'properly' fairly cheaply (a few hundred), though Elliquiy would go down a couple of times. An SSD would be a far more dramatic improvement, but they are expensive, and I don't want to go that route until Wheezy (the next version of the Linux distro I use) is stable.
3) Something else is causing random load spikes and I honestly have no idea what. Some of it, I know, is measures I took to help correct the first issue, but that isn't all of it. There are about a thousand connections to E and other sites I run open at any one time, but the server can and has handled a hundred times that in load testing. File descriptors are a similar story. It's also possible that after 200 days other things happened causing Linux to slowly degrade. 200 days is a long time to be serving millions of pageviews and handling tens of millions of database calls per day.
I just rebooted the server. If that doesn't solve #3 here (besides what I know I caused to mitigate the above) we have Problems : /
- So far, so good, though. A few seemingly random mysterious issues no longer exist. Which is good, when it comes to computers >_> Rebooting still bad : / E-penis size is measured in server uptime.
4) We do occasionally top out CPU usage, but this is incidental and extremely momentary. With the AJAX chat off of E, we tend to peak at about 50% usage.
- Getting another CPU might allow us to turn the AJAX chat back on again, but with our RAM situation so fragile I'd like to get that taken care of, first - or alongside it.
Python Storlet Writing and Deployment Guideline
This is the Python specific storlet writing and deploying guide. This guide complements the more general guide for writing and deploying storlets which should be read first.
A Python module implementing a storlet looks like this:
class <Class name>(object):
    def __init__(self, logger):
        self.logger = logger

    def __call__(self, in_files, out_files, params):
        """The function called for storlet invocation.

        :param in_files: a list of StorletInputFile
        :param out_files: a list of StorletOutputFile
        :param params: a dict of request parameters
        """
Below is a class diagram illustrating the classes behind the in_files, out_files, and logger. The diagram lists only the methods that the storlet writer is expected to work with.
The StorletInputFile is used to stream the object's data into the storlet. StorletInputFile has the same read methods as a Python file object. Trying to write to a StorletInputFile raises a NotImplementedError. Whenever a storlet is invoked, an instance of this class is provided. To consume the metadata, call the StorletInputFile.get_metadata method.
The StorletOutputFile is used for writing the storlet output. StorletOutputFile has the same write methods as a Python file object. Trying to read from a StorletOutputFile raises a NotImplementedError. Whenever a storlet is invoked, an instance of this class is provided. Use the StorletOutputFile.set_metadata method to set the object's metadata. Note that the storlet must call set_metadata, and that set_metadata must be called before writing the data.
StorletLogger. The StorletLogger class implements the same log methods as the Python logger.
When invoked via the Swift REST API, the __call__ method is called as follows:
The in_files list includes one or more elements of type StorletInputFile, representing the object appearing in the request's URI (and possibly extra resources).
The out_files list includes a single element of type StorletOutputFile, representing the response returned to the user.
The params argument is a dictionary with the execution parameters sent. These parameters can be specified in the storlet execution request.
The logger passed to __init__ is a StorletLogger instance.
Deploying a Python Storlet
Below are specific guidelines for deploying a Python storlet:
The object name of the python module containing the storlet class implementation must end with .py
Any Python modules that the class implementation depends on should be uploaded as separate .py files.
The ‘X-Object-Meta-Storlet-Main’ metadata key should be of the form: <module_name>.<class_name>. For example, if the storlet name is SimpleStorlet and it resides in simple_storlet.py, then the ‘X-Object-Meta-Storlet-Main’ metadata key should be “simple_storlet.SimpleStorlet”
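As a concrete (hypothetical) example matching the naming rule above, a simple_storlet.py could look like this; it copies the input object's metadata and upper-cases its data. The sketch relies only on the read/write/metadata methods described earlier, since the file objects are whatever the storlet runtime passes in:

```python
class SimpleStorlet(object):
    """Illustrative storlet: copy the metadata and upper-case the data."""

    def __init__(self, logger):
        self.logger = logger

    def __call__(self, in_files, out_files, params):
        self.logger.debug("SimpleStorlet invoked with params: %s" % params)
        # set_metadata must be called before any data is written
        out_files[0].set_metadata(in_files[0].get_metadata())
        out_files[0].write(in_files[0].read().upper())
        in_files[0].close()
        out_files[0].close()
```

This module would be uploaded as simple_storlet.py with the 'X-Object-Meta-Storlet-Main' metadata key set to "simple_storlet.SimpleStorlet".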
Deploying a Python Dependency
Currently, there is no limitation as to what is being uploaded as a dependency.
This is part III of Big Data Overview Blogs for developers:
In part I, I covered the basics of big data and introduced Ambari, and in part II, I talked about Hadoop technologies used for ingestion, like Sqoop, Flume and Atlas. If you have not already read those articles, it would be helpful to read them before this one to understand the flow.
In this article I will talk about all technologies in Hadoop ecosystem that can be used to access, transform, or analyze data. In short, technologies encompassing the ‘data’ in big data.
Hadoop can store data from multiple sources, in both structured and unstructured form. Hive is used to query this data using SQL queries. Hive creates tables, similar to RDBMS tables, over the data in HDFS, and users or analysts can query these tables to understand and explore the data. Metadata for these Hive tables is stored in the Hive metastore. Hive tables are stored as corresponding HDFS directories within one database directory. Each of these table directories contains files containing the data. If the data is partitioned, there are subdirectories within the table directory, and each partition directory has its own files. Data within partitions can further be divided into buckets.
HBase is a NoSQL column-oriented distributed database which runs on top of HDFS. It is modelled after Google's BigTable. It provides real-time read/write access to large datasets stored in Hadoop. HBase is well suited for multi-structured or sparse datasets and can scale linearly to handle tables with billions of rows. HBase data is stored as tables of rows and columns. Each table must have a row key defined, which is used for all access calls made to the table to retrieve data.
Pig is a platform to analyze large datasets in Hadoop. It consists of two components: a programming language called Pig Latin, and the runtime environment where Pig Latin scripts are executed. Pig excels at describing data analysis problems as data flows. Pig can ingest data from files, streams or other sources using User Defined Functions (UDFs). Once it has the data, it can perform selection, iteration, and other transforms over the data. Again, the UDF feature allows passing the data to more complex algorithms for a transform. Finally, Pig can store the results into the Hadoop Distributed File System. Pig translates scripts written in Pig Latin into a series of MapReduce jobs that are run on the Apache Hadoop cluster.
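The map/shuffle/reduce pipeline that such scripts are compiled down to can be illustrated in plain Python; this is not Hadoop code, just the shape of the data flow for a word count:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values (here, by summing)."""
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big deal"])))
assert counts == {"big": 2, "data": 1, "deal": 1}
```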
Kafka is a fast, reliable, fault-tolerant publish-subscribe messaging system. The main components of Kafka are topics, producers, consumers and brokers. Topics are the categories to which messages are published. Producers publish messages to one or more topics. Consumers subscribe to one or more topics and consume messages in sequential order from within a partition. Topics can contain one or more partitions, and writes to a partition are sequential. Brokers are servers which track messages and manage the persistence and replication of messages. A Kafka consumer can also consume messages from an earlier point in time, since Kafka retains messages on disk for a configurable amount of time.
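The core abstraction, an append-only ordered log per partition with key-based partitioning, can be sketched in a few lines of Python (purely illustrative; this is not the Kafka client API):

```python
class Topic:
    """Toy topic: a fixed set of append-only partitions."""

    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, message):
        """Append to the partition picked by hashing the key; return (partition, offset)."""
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(message)
        return p, len(self.partitions[p]) - 1

    def consume(self, partition, offset):
        """Read every message in a partition from the given offset onward, in order."""
        return self.partitions[partition][offset:]

topic = Topic(4)
p, _ = topic.produce("user-1", "login")
topic.produce("user-1", "logout")   # same key: same partition, next offset
assert topic.consume(p, 0) == ["login", "logout"]
```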
Storm is a framework that provides real-time processing of streaming data. Storm is extremely fast (it can process millions of records per second per node in a cluster of moderate size) and is scalable, fault-tolerant, reliable and easy to operate. In Storm, data is passed as streams of tuples originating from spouts, hopping through multiple bolts and producing output streams. This entire network of spouts and bolts in a Storm system is called a topology. Storm users define topologies, and data is processed through spouts and bolts based on the defined topology. Example use cases of Storm are preventing credit card fraud in real time and sending real-time offers to customers based on their location or usage.
Spark is an in-memory data processing engine which can run inside Hadoop (on YARN), on Mesos, standalone, or in the cloud. It can access data from multiple sources like HDFS, Cassandra, HBase, and S3. It runs faster than MapReduce because its DAG (Directed Acyclic Graph) execution engine supports acyclic data flows and in-memory computing. It offers connectors to write applications in languages like Java, Scala, Python and R. Spark provides libraries for handling streaming (Spark Streaming), machine learning (MLlib), SQL capabilities (Spark SQL), and graph processing (GraphX).
Tez is designed for building application frameworks which allow for processing complex DAGs of tasks in a short time. It is built on top of YARN. It maintains the scalability of MapReduce while improving the speed dramatically. This is the reason other projects like Hive and Pig use Tez as an execution engine. Data processing in Tez is modeled as a data flow graph, with vertices representing the tasks and edges representing the flow of data. Each vertex running data processing logic is composed of Inputs, Processors and Outputs.
[RFC 00/10] implement alternative and much simpler id allocator
mawilcox at microsoft.com
Fri Dec 16 19:14:11 UTC 2016
From: Andrew Morton [mailto:akpm at linux-foundation.org]
> On Thu, 8 Dec 2016 02:22:55 +0100 Rasmus Villemoes
> <linux at rasmusvillemoes.dk> wrote:
> > TL;DR: these patches save 250 KB of memory, with more low-hanging
> > fruit ready to pick.
> > While browsing through the lib/idr.c code, I noticed that the code at
> > the end of ida_get_new_above() probably doesn't work as intended: Most
> > users of ida use it via ida_simple_get(), and that starts by
> > unconditionally calling ida_pre_get(), ensuring that ida->idr has
> > 8==MAX_IDR_FREE idr_layers in its free list id_free. In the common
> > case, none (or at most one) of these get used during
> > ida_get_new_above(), and we only free one, leaving at least 6 (usually
> > 7) idr_layers in the free list.
> I expect we'll be merging patches 1-32 of that series into 4.10-rc1 and
> the above patch (#33) into 4.11-rc1.
Thanks for your work on this; you've really put some effort into proving your work has value. My motivation was purely aesthetic, but you've got some genuine savings here (admittedly it's about a quarter of a cent's worth of memory with DRAM selling for $10/GB). Nevertheless, that adds up over a billion devices, and there are still people trying to fit Linux into 4MB embedded devices.
I think my reimplementation of the IDA on top of the radix tree is close enough to your tIDA in memory consumption that it doesn't warrant a new data structure.
On a 64-bit machine, your tIDA root is 24 bytes; my new IDA root is 16 bytes. If you allocate only one entry, you'll allocate 8 bytes. Thanks to the slab allocator, that gets rounded up to 32 bytes. I allocate the full 128 byte leaf, but I store the pointer to it in the root (unlike the IDR, the radix tree doesn't need to allocate a layer for a single entry). So tIDA wins on memory consumption between 1 and 511 IDs, and newIDA is slightly ahead between 512 and 1023 IDs. Above 1024 IDs, I allocate a layer (576 bytes), and a second leaf (832 bytes total), while you just double to 256 bytes. I think tIDA's memory consumption then stays ahead of new IDA. But performance of 'allocate new ID' should be better for newIDA than tIDA as newIDA can skip over all the cachelines of full bitmaps.
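For readers following along without the kernel source: the heart of either allocator, tIDA or the radix-tree IDA, is "find the lowest clear bit in a bitmap, set it, return its index". A toy Python sketch of just that leaf-level operation (the real structures add a tree on top so the search can skip full leaves):

```python
class ToyIda:
    """Toy lowest-free-ID allocator over a growable byte bitmap."""

    def __init__(self):
        self.bitmap = bytearray(1)   # each byte tracks 8 IDs

    def alloc(self):
        """Return the lowest free ID and mark it allocated."""
        for i, byte in enumerate(self.bitmap):
            if byte != 0xFF:
                # isolate the index of the lowest clear bit of this byte
                bit = (~byte & (byte + 1)).bit_length() - 1
                self.bitmap[i] |= 1 << bit
                return i * 8 + bit
        self.bitmap.append(1)        # every byte full: grow, take bit 0
        return (len(self.bitmap) - 1) * 8

    def free(self, ident):
        """Mark an ID as free again."""
        self.bitmap[ident // 8] &= ~(1 << (ident % 8))
```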
Yesterday, I found a new problem with the IDA allocator that you hadn't mentioned: about half of the users of the IDA data structure never call ida_destroy(). Which means that they're leaking the preloaded bitmap. I have a patch which moves the preloaded IDA bitmap from being stored in the IDA to being stored in a percpu variable. You can find it here: http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/idr-2016-12-16 I'd welcome more testing and code review.
The editor always starts in Build Mode. You can switch to Simulate Mode by clicking the "Simulate" button along the bottom toolbar.
Click a component in the Build Box to select it, and then click somewhere on the grid to insert it into your circuit.
Double-click any component in your circuit to bring up the corresponding parameter editor.
Holding down the Ctrl key while clicking and dragging on the grid will allow you to pan the viewport. Using the mousewheel will zoom.
(Note: please avoid holding Ctrl while actuating the mousewheel, as this generally causes the browser to attempt to zoom on its own. CircuitLab is not compatible with browser zoom, and currently has no way to detect this condition.)
Every voltage in CircuitLab is calculated relative to the ground (GND) node, which is by definition at 0 volts. This means that every circuit has to have at least one GND element, or the circuit will not simulate.
The concept of a ground in a circuit simulator is similar but not identical to the concept of an electrical ground in the physical world. In real life, ungrounded battery-operated circuits work just fine, because to the circuit, only relative voltages matter. However, inside a circuit simulator (or even when solving a circuit on paper!), we have to pick one node to be our reference in order to calculate voltages at other nodes.
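Picking a reference node is exactly what nodal analysis does on paper. As a sketch, here is KCL solved at the middle node of a two-resistor voltage divider (source through R1 into node A, R2 from A to ground, with ground defined as 0 V); the function name and component values are made up for illustration:

```python
def divider_node_voltage(v_src, r1, r2):
    """Solve KCL at node A: (v_src - vA) / r1 = (vA - 0) / r2, with GND at 0 V."""
    # Rearranged: vA * (1/r1 + 1/r2) = v_src / r1
    return (v_src / r1) / (1.0 / r1 + 1.0 / r2)

# 10 V source, R1 = 1 kOhm, R2 = 3 kOhm: node A sits at 10 * 3000 / 4000 = 7.5 V
v_a = divider_node_voltage(10.0, 1000.0, 3000.0)
assert abs(v_a - 7.5) < 1e-9
```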
A node in an electrical circuit is a place where two or more circuit elements meet. A node in CircuitLab is the same thing: a point where two or more element endpoints are connected by a wire. By definition, all element endpoints connected by a wire are at the same voltage.
It is perfectly valid (and often more compact) to connect two or more endpoints of circuit elements together without explicitly drawing a wire between them.
It is often very useful (and good practice) to name certain nodes in your circuit. You can do this by using the Name Node circuit element. A Name Node can be dropped onto a wire or directly onto any circuit element's endpoint.
You can "connect" two nodes in your circuit by naming them the same thing. Giving two nodes the same name is equivalent to drawing a wire between two nodes.
It is valid to have more than one name on one node. If that is the case, the node can be referenced by any of its explicitly created names.
If a node is not given a name using a Name Node element, then it is assigned a name by CircuitLab. Unnamed nodes will have the prefix un. These automatically-assigned names should not be depended on to stay consistent as you continue working on your circuit, so it's good practice to name any nodes whose voltages you'll want to measure or plot.
The Voltmeter and Ammeter elements can be used to display the voltage across or the current through the element on the schematic. You can double click a Voltmeter or Ammeter element to bring up the Parameters Box where you can select "Show Voltage" or "Show Current". This will cause the DC voltage across the Voltmeter or the DC current through an ammeter to be displayed next to the element. These values will be updated whenever you run a DC Simulation, and will be rendered on the exports of your schematic.
Note: The values displayed are only updated when you run a DC Simulation, so a new DC Simulation needs to be run whenever the circuit changes in order for the displayed values to be accurate.
CircuitLab allows you to use human-friendly metric prefixes for all of the numerical input boxes. For example, you can type "1k" for a resistance instead of typing "1000", or "22p" instead of "22e-12" (which works too!).
- k or K: kilo, ×10^3
- M: mega, ×10^6
- m: milli, ×10^-3
- p: pico, ×10^-12
(Note: if you're used to SPICE, where inputs are case insensitive and both "m" and "M" mean "milli", note that CircuitLab is different and follows the standard SI prefixes. Upper-case "M" refers to Mega, or 10+6, while lower-case "m" refers to milli, or 10-3.)
Every simulation type has a separate Outputs box where you can choose what you want to plot. When a simulation type is active (accordion box expanded) you can click on any wire or Name Node to plot the voltage at that node. Clicking on a circuit element's terminal will cause the current into the terminal to be plotted, as well as the voltage at the terminal. Clicking a point where multiple circuit elements meet will cause all the currents going into the elements to be plotted, as well as the voltage at the node where they meet.
In some cases, as you click around your circuit to select outputs, you may capture more outputs than you intended to. In this case, it's good practice to simply remove the expressions you aren't interested in to keep your plots clean.
You can also plot custom expressions.
Click and drag within a plot to zoom in to a plot region. Double-click on the plot to restore original zoom.
Drag vertical and horizontal cursors onto the plot to calculate math functions like averages and integrals.
A DC simulation attempts to find a stable DC solution of your circuit. When time-varying components are present, their long-term behavior is approximated -- for example, capacitors become open circuits, and inductors become short circuits. After running a DC Solve, you can mouse over parts of your circuits to see currents and voltages in the lower right hand corner of the screen.
A DC simulation is analogous to probing around a circuit with a multimeter.
A DC Sweep will plot the DC solution of your circuit across different values of a parameter of a circuit element. You can sweep any numerical parameter of any circuit element in your circuit.
The parameter to be swept is specified in the form NAME.PARAM, where NAME is the name of the circuit element, and PARAM is the name of the parameter. For example, sweeping over V1.V would sweep over the parameter V of the circuit element named V1.
A DC sweep is analogous to making measurements while using an adjustable power supply or adjusting a potentiometer. (Of course, in the CircuitLab environment, a much wider range of parameters can be experimented with!)
A Time-Domain Simulation does a transient analysis of your circuit over a certain period of time.
CircuitLab uses the dynamic model of the elements in your circuit to work out the voltages and currents in your circuit at every time step. This means that it is very important to choose an appropriate time step for your transient simulation. If your time step is too large, the dynamic model will be inaccurate and the simulation could potentially look nothing like the real-life circuit would. If your time step is too small, your circuit may take too long to simulate.
A good rule of thumb when running a transient analysis is to pick your time step to be 10 times faster than the fastest signal in your simulation. For example, if the fastest source in your simulation is a 1 kHz sine wave, a good starting point would be to set the time step to 0.1m (0.1 milliseconds).
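The rule of thumb above amounts to one tenth of the fastest signal's period; a hypothetical helper makes the arithmetic explicit:

```python
def suggested_time_step(fastest_hz):
    """Rule of thumb: sample 10x per period of the fastest signal,
    i.e. time step = 1 / (10 * f)."""
    return 1.0 / (10.0 * fastest_hz)

# A 1 kHz fastest source suggests a 0.1 ms (1e-4 s) time step.
```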
A transient simulation is analogous to using an oscilloscope to make observations about a circuit -- observing the full, non-linear behavior over a wide range of time scales.
Frequency Simulation does a small signal analysis of your circuit. The input can be any voltage source or current source. (Note: the input must be an element name, like "V1", and not a node name.) CircuitLab makes this chosen input a sine wave of magnitude 1 (by default), and will sweep the frequency from the chosen start frequency to the end frequency in Hertz.
A linearized, small-signal model of your circuit is generated from the DC operating point. Depending on your circuit, this model may only be accurate for very small signals, so frequency-domain analysis is usually complemented by time-domain analysis to reveal nonlinear effects.
Output voltages and currents reported are the magnitude of the voltage or current relative to the input, which by default is of magnitude 1. If your input is a voltage source, and you measure a different node's voltage in frequency-domain mode, the magnitude is a unitless gain (volts/volt), and any current you measure is a transconductance (amps/volt). Similarly, if your input is a current source, then any current you measure is a unitless gain (amps/amp), and any voltage you measure is a transimpedance (volts/amp).
There are three allowed forms of input source specifications:
The V(...), I(...), and P(...) outputs of frequency-domain simulation are all complex numbers -- they have a real and imaginary component, or a magnitude and phase. These complex quantities can be manipulated using various expressions such as REAL(x), IMAG(x), MAG(x), PHDEG(x), and more.
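For reference, the standard complex-number definitions behind those expressions can be sketched in Python. CircuitLab evaluates its own expression language internally; this is only an illustration of what each function computes:

```python
import cmath
import math

def real(x):
    # real component of a complex value
    return x.real

def imag(x):
    # imaginary component of a complex value
    return x.imag

def mag(x):
    # magnitude: sqrt(real^2 + imag^2)
    return abs(x)

def phdeg(x):
    # phase angle, expressed in degrees
    return math.degrees(cmath.phase(x))
```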
You may position up to two vertical and two horizontal cursor lines on each plot. These cursor lines can be used to measure the absolute difference between any two points on the plot. The positions of the two cursors are also used to calculate a variety of math functions, including average, root-mean-squared (RMS), and integral values.
To position a new cursor line, hover your mouse over the edges of the grid area, where two green lines should appear. Simply click and drag a cursor line to position it on the plot. Re-position a cursor line by clicking on the rectangular handle along the middle of the line. To remove a cursor line, simply drag it off the grid area, and it will disappear.
To apply one of the built-in math functions to a particular trace, right click that trace in the legend and select the calculation you want from the context menu. The result of the calculation will be displayed in the info box in the lower-left corner of the plot. To remove an applied function, click the 'X' that appears next to it in the lower-left info box.
CircuitLab is an in-browser schematic capture and circuit simulation software tool to help you rapidly design and analyze analog and digital electronics systems.
|
OPCFW_CODE
|
return a const and non const wrapper object
If I want a custom container class to give access to its data through an iterator-like object (actually acting as a wrapper for some data in the container), and I want to be able to get both a const and a non-const iterator-like object, one allowing only reading and one allowing reading and writing, do I have to implement two different iterator-like objects, or can I wrap this functionality in one single object?
The issue is that I have to return this object by value, and a const-qualified return value cannot prevent the caller from copying it into a non-const variable, e.g.
const accessor container::getConstAccessor(){/**/}
being misused like
accessor a = myContainer.getConstAccessor(); // effectively giving me a non-const accessor
The only solution I can see is to have two accessor classes/structs: one which acts const and one which acts read-write, regardless of whether they are stored in a const or non-const variable.
This emulates perhaps a constIterator and iterator, but is this truly needed? Can you not make one accessor and return either a const or non const version from the container?
I tried rephrasing this question a few times to make it as general as possible, but I am not entirely certain it makes sense. I hope it does.
If you make it non-copyable, one cannot get a non-const instance from a const one (except via a nasty const_cast).
Think about why the STL containers (e.g. std::vector) implement const_iterator and iterator as two distinct classes.
Can you not make one accessor and return either a const or non const version from the container?
Not really. Firstly, you need two accessors as you have to detect whether or not *this is const-qualified:
/* ??? */ my_container::getAccessor();
/* ??? */ my_container::getAccessor() const;
Then, if you're returning by value, you have no way of forcing the caller of getAccessor to store the return value in a const variable. That's why you need two different types if you want to enforce immutability in the const-qualified accessor:
accessor my_container::getAccessor();
const_accessor my_container::getAccessor() const;
Code repetition can very likely be avoided by implementing both accessor and const_accessor in terms of some template accessor_impl<T> class that can be instantiated with T/const T.
You might want to add that it should be possible to implement them both using a template instantiated with either T or const T (for whatever T makes sense internally).
Thanks to all who responded. It was perhaps too flimsy a question, but in the end I did as suggested and made two distinct accessors. I will mark this as the answer.
|
STACK_EXCHANGE
|
package burrow
import (
"net/http"
"github.com/gorilla/mux"
)
type Route struct {
Pattern string
Handler HTTPHandler
}
type RouteHandler struct {
Prefix string
Routes []Route
}
func (h *RouteHandler) Subrouter(r *mux.Router) (subrouter *mux.Router) {
subrouter = r.PathPrefix(h.Prefix).Subrouter()
for _, route := range h.Routes {
subrouter.Handle(route.Pattern, route.Handler)
}
return
}
func (h *RouteHandler) MountMethods(methods []Method) {
for _, m := range methods {
if m.Route != "" {
method := m
h.Routes = append(h.Routes, Route{
Pattern: method.Route,
Handler: func(w http.ResponseWriter, r *http.Request) *ResponseMessage {
req := &RequestMessage{
Params: VarsInterface(mux.Vars(r)), //TODO get url params and merge w/ ivars
Method: &method.Name,
}
ctx := &RequestContext{
HTTPWriter: w,
HTTPReader: r,
Request: req,
}
return method.Execute(ctx)
},
})
}
}
}
func (h *RouteHandler) MountHelp(methods map[string]Method) {
//TODO include route help w/ order
helpRoute := Route{
Pattern: "/help",
Handler: func(w http.ResponseWriter, r *http.Request) *ResponseMessage {
helper := make(map[string][]string)
for _, method := range methods {
helper["methods"] = append(helper["methods"], method.Name)
if method.Route == "" {
helper["io_methods"] = append(helper["io_methods"], method.Name)
}
}
helper["info"] = append(helper["info"], "Use help/{method} for method help")
helper["info"] = append(helper["info"], "io_methods are only usable through websockets")
return SuccessMsg(helper)
},
}
subhelpRoute := Route{
Pattern: "/help/{method}",
Handler: func(w http.ResponseWriter, r *http.Request) *ResponseMessage {
name := mux.Vars(r)["method"]
if method, ok := methods[name]; !ok {
return cerrorf(RpcMethodNotFound, "The limit does not exist! %s", name).ResponseMessage()
} else {
return SuccessMsg(method)
}
},
}
h.Routes = append(h.Routes, []Route{subhelpRoute, helpRoute}...)
}
type HTTPHandler func(w http.ResponseWriter, r *http.Request) (msg *ResponseMessage)
func (handle HTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// info("Request - %v", r)
response := handle(w, r)
if response != nil {
content, err := response.Marshal()
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
} else if response.Error != nil {
http.Error(w, string(content), response.Error.Code)
} else {
w.Header().Set("Content-Length", sprintSizeOf(content))
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Content-Type", "application/json")
w.Write(content)
}
}
}
// From http://www.jsonrpc.org/specification
// Content-Type: MUST be application/json.
// Content-Length: MUST contain the correct length according to the HTTP-specification.
// Accept: MUST be application/json.
func (h *RouteHandler) MountRpc(ms map[string]Method) {
methods := ms
route := Route{
Pattern: "/rpc",
Handler: func(w http.ResponseWriter, r *http.Request) (msg *ResponseMessage) {
switch {
case r.Method != "POST":
msg = cerrorf(http.StatusMethodNotAllowed, "Requires method: POST").ResponseMessage()
case r.Header.Get("Content-Type") != "application/json":
msg = cerrorf(http.StatusUnsupportedMediaType, "Requires Content-Type: application/json").ResponseMessage()
case r.Header.Get("Accept") != "application/json":
msg = cerrorf(http.StatusNotAcceptable, "Requires Accept: application/json").ResponseMessage()
case r.Header.Get("Content-Length") == "":
//TODO is it necessary to assert the length is correct?
msg = cerrorf(http.StatusLengthRequired, "Requires valid Content-Length").ResponseMessage()
default:
if req, rerr := ReadRequestMessage(r.Body); rerr != nil {
msg = cerrorf(rerr.Code, rerr.Message).ResponseMessage()
} else {
ctx := &RequestContext{
HTTPWriter: w,
HTTPReader: r,
Request: req,
}
if method, ok := methods[ctx.Request.MethodName()]; !ok {
msg = cerrorf(RpcMethodNotFound, "The method does not exist! %s", ctx.Request.MethodName()).ResponseMessage()
} else {
msg = method.Execute(ctx)
}
}
}
return
},
}
h.Routes = append(h.Routes, route)
}
func (h *RouteHandler) MountIo(ms map[string]Method) {
methods := ms
route := Route{
Pattern: "/io",
Handler: func(w http.ResponseWriter, r *http.Request) (msg *ResponseMessage) {
if r.Method != "GET" {
msg = cerrorf(http.StatusMethodNotAllowed, "Only GET can be upgraded").ResponseMessage()
return
}
ws, err := upgrader.Upgrade(w, r, nil)
if err != nil {
msg = cerrorf(http.StatusBadRequest, "Request can't be upgraded").ResponseMessage()
return
}
c := NewConnection(ws)
if cerr := c.listen(methods); cerr != nil {
msg = cerr.ResponseMessage()
} else {
msg = SuccessMsg("WS connection closed successfully")
}
//--> SUBSCRIBE on tile load -> {tileset, tileXYZ}
//--> UNSUBSCRIBE on tile unload -> {tileset, tileXYZ}
//--> LIST_SUBSCRIPTIONS
//<-- {tileset, tile, data, type}
msg = nil // Can't return a body...
return
},
}
h.Routes = append(h.Routes, route)
}
type RequestContext struct {
Request *RequestMessage
Connection *Connection
HTTPWriter http.ResponseWriter
HTTPReader *http.Request
Params MethodParams
}
func (ctx *RequestContext) Render(template string) (string, error) {
params := ctx.HTTPReader.URL.Query()
vars := make(map[string]string, len(ctx.Params)+len(params))
for k, v := range ctx.Params {
vars[k] = v.GetString()
}
for k, v := range params {
vars[k] = v[0] //only take first
}
return RenderTemplate(template, vars)
}
func (r *RouteHandler) MountRoutes(methods []Method) {
dict := make(map[string]Method)
for _, method := range methods {
dict[method.Name] = method
}
r.MountIo(dict)
r.MountRpc(dict)
r.MountMethods(methods) //preserve order
r.MountHelp(dict)
}
func VarsInterface(vars map[string]string) map[string]interface{} {
ivars := make(map[string]interface{})
for k, v := range vars {
// cast nums to float64s
if fv, err := atof(v); err == nil {
ivars[k] = fv
} else {
ivars[k] = v
}
}
return ivars
}
|
STACK_EDU
|
package net.hudup.core.logistic.ui;
import java.awt.GridLayout;
import java.util.List;
import javax.swing.BorderFactory;
import javax.swing.ButtonGroup;
import javax.swing.JPanel;
import javax.swing.JRadioButton;
import net.hudup.core.Util;
import net.hudup.core.parser.TextParserUtil;
/**
* This class creates a graphic user interface (GUI) component as a list of radio buttons.
* <br>
* Modified by Loc Nguyen 2011.
*
* @author Someone on internet.
*
* @param <E> type of elements attached with radio buttons.
* @version 10.0
*/
public class JRadioList<E> extends JPanel {
/**
* Serial version UID for serializable class.
*/
private static final long serialVersionUID = 1L;
/**
List of radio entries. Each entry has a radio button {@link JRadioButton} and an attached object (attached element).
*/
protected List<Object[]> radioList = Util.newList();
/**
Constructor with a specified list of attached objects. Each object is attached to a radio button {@link JRadioButton}.
@param listData specified list of attached objects.
* @param listName name of this {@link JRadioList}.
*/
public JRadioList(List<E> listData, String listName) {
super();
setLayout(new GridLayout(0, 1));
if (listName == null || listName.isEmpty()) {
setBorder(BorderFactory.createEtchedBorder());
}
else {
setBorder(BorderFactory.createTitledBorder(
BorderFactory.createEtchedBorder(), listName) );
}
ButtonGroup bg = new ButtonGroup();
for (E e : listData) {
String text = e.toString();
JRadioButton rb = new JRadioButton(TextParserUtil.split(text, TextParserUtil.LINK_SEP, null).get(0));
bg.add(rb);
add(rb);
radioList.add(new Object[] { rb, e});
}
}
/**
* Getting the object attached with the selected radio button (selected item).
* @return object attached with the selected radio button (selected item).
*/
@SuppressWarnings("unchecked")
public E getSelectedItem() {
for (Object[] pair : radioList) {
JRadioButton rb = (JRadioButton)pair[0];
if (rb.isSelected())
return (E) pair[1];
}
return null;
}
}
|
STACK_EDU
|
The following links introduce terms used throughout this section:
Key Stratoss LM concepts introduce the Stratoss™ Lifecycle Manager (LM) programming model used to model VNFs and Network Services.
Cloud DevOps best practices and principles are at the heart of the Stratoss LM solution. To scale any Cloud based networking program, a unified operations and engineering model is combined with a set of automation tools that can simplify and automate the complexities of an end-to-end VNF or Network Service lifecycle.
The Stratoss LM CI/CD tools and processes are designed to simplify and automate the following DevOps tasks:
- Onboard VNFs and design Network Services: Quickly integrate and package the lifecycle actions required to operate a VNF or any virtual or physical network appliance.
- Behaviour Testing in pre-production: Deploy VNFs to pre-production environments and easily script complex operational behaviour tests to ensure the onboarded VNF behaves as expected in all “day 1” or “day 2” lifecycle tasks.
- Deploy to production: Once fully tested, auto deploy to production environments.
- Monitor and Change: Monitor and sense environmental or VNF state and auto scale, heal or move components of the network service.
- Report and Resolve issues: Errors in lifecycle actions or VNF software found in production are reported and trigger an upgrade process that rebuilds a new version of the VNF.
The CI/CD Hub wraps the core Stratoss LM automation capabilities with tools that support the “day 0” VNF onboarding and testing processes and also the “day 2” VNF change management tasks.
The CI/CD Hub provides a set of tools that manage VNF and Network Services artifacts across the following NFV orchestration systems:
- LM NFVO: Packages of assembly descriptors and behaviour tests for network service versions are packaged and deployed to Stratoss LM instances.
- LM Generic VNFM: Software Images, resource descriptors, lifecycle scripts and behaviour tests that wrap a VNF or PNF are packaged and deployed to Stratoss LM and its resource managers.
- 3rd Party VNFMs: External VNF artifacts are packaged and deployed to 3rd party VNFMs.
- Virtual Infrastructure Managers: VNF component software image versions are deployed to VIMs.
The picture above shows a complete Stratoss LM CI/CD Environment. Stratoss LM design tools create VNF/Network Service descriptors and behaviour test scripts, using a Git repository as their source version control. Supplemental artifacts such as VIM software images are stored in a general repository such as Nexus.
Versions of VNF and Network Service packages are taken from these repositories and built and tested in development and pre-production environments. A CI server, such as Jenkins, pulls Stratoss LM artifacts from Git and deploys them to LM build slaves; the CI server also pushes software images to the appropriate development or pre-production VIMs attached to the LM build slaves. Any behaviour tests included in the VNF or Network Service project are run by the CI server to validate the version's expected behaviour.
On successful completion of VNF or Network Service behaviour tests, the CI server uses Stratoss LM tools to package a version of a binary VNF and Network Service package and stores it in the general repository.
The CI/CD Hub provides a set of open-source software components that play the Git source control, general repository, and CI server roles defined above. The CI/CD Hub is a reference implementation intended to demonstrate how to implement a Stratoss LM CI/CD pipeline; it is not a supported product. It can be used to run a pipeline as-is, or you can swap components out and use the tools of your choice, but the project itself is intended for demonstration purposes only.
The CI/CD Hub reference implementation provides installer scripts to stand up and attach the following software tools to your Stratoss LM design and build slaves.
- Git: A lightweight Gogs Git repository is installed as the descriptor and behaviour script source control server
- Nexus: Nexus general repository is installed to provide a general image and package repository.
- Jenkins: Jenkins CI Server is installed to automate the package build and release processes.
A basic getting started guide and instructions how to run a “hello world” demo is also installed to the Gogs server. You can learn more about the CI/CD Hub software here.
As stated above, development and build-slave Stratoss LM instances need to be in place and attached to the CI/CD Hub as appropriate. Please follow the LM installation guide and the additional configuration detailed in the CI/CD Hub guide to “connect” LM instances:
Stratoss LM Design Tools: Stratoss LM instances for designing descriptors combined with the LMCTL command line tool push/pull VNF or Network Service projects to the CI/CD Hub Git repository.
Stratoss LM Build Slaves: Stratoss LM instances can be configured to use the CI/CD Hub shared services, e.g. OpenLDAP and managed by the Jenkins CI Server to auto deploy, test and package VNFs or Network Services.
VNF and Network Service Packages
The CI/CD Hub extends “standard” Cloud software toolchains with the lmctl tool that manages Stratoss LM packages.
The following packages need to be managed by the LM CI/CD process and tools:
- Network Service Package: Network service descriptors that organise VNFs are combined with behaviour tests.
- Native VNF Packages: VNF artifacts designed and built to be run in the Stratoss LM VNFM.
- Foreign VNF Packages: VNF artifacts designed and built to be run on a 3rd party VNFM.
VNF and Network Service packages can be versioned and distributed to Stratoss LM environments. Stratoss LM command line tools aid in the creation and management of these binary NFV packages.
The sections below give an overview of the types of packages included in the CI/CD process.
A VNF package can contain the following artifacts:
- VNF Descriptor: This assembly descriptor declares properties and values, organises any children VNFCs and defines operations and policies.
- VNFC resource descriptors: This resource descriptor declares properties, supported lifecycle actions and any metrics produced by the VNFC
- VNFC Lifecycle scripts: Depending on the resource manager used to execute lifecycle actions, appropriate scripts or software is provided that “run” each supported lifecycle action.
- VNF Behaviour tests: Stratoss LM behaviour tests are included that run the VNF and its VNFCs through a set of functional tests.
Network Service Package
A typical Network Service package will contain the following artifacts:
- Network Service Descriptor: This assembly descriptor declares properties and values, organises any children VNFs and defines operations and policies
- Network Service Behaviour tests: Stratoss LM behaviour tests are included that run the Network Service and its VNFs through a set of performance and operational interoperability tests.
Network Service and VNF Packages have a simple state model.
- Development: VNF or Network Service engineers are in the early stages of package development and perform their own local testing.
- Pre-Production/Test: Packages are ready for exhaustive testing triggering a “build”.
- Production: Packages have passed all exhaustive testing and have been deemed ready for production.
The CI/CD methodology and process can handle packages in these various states appropriately. As seen above, Network Service package development is dependent on its VNFs being in the pre-production state or higher.
The picture above shows the types of environments and supplemental packages in a typical package workflow.
Onboarding/development tasks typically use shared local Stratoss LM and virtual VIM environments to perform the initial creation of VNF or Network Service packages. In addition, the engineer will typically create their own unit-test-style Test VNF packages, used to verify that the target VNF or Network Service package behaves correctly.
In the pre-production/test stage, packages are moved to an environment representative of the production environment. Stratoss LM and VIM environments are typically dedicated to running performance and interoperability tests; these environments are owned by the automated pipeline.
Once fully tested, packages are available to be deployed to the Production environment.
Behaviour testing a VNF or Network Service requires a set of Test VNFs to be developed that run functional tests or generate and monitor traffic. Test VNF package lifecycles are identical to those of the VNFs under test: binary packages with versions are run through the same CI/CD process.
In addition to the Test VNF package that performs the actual test, a set of behaviour scripts is included with the test package. These scripts run a series of tests and evaluate whether the behaviour reported by one or more Test VNFs is as expected.
See behaviour testing for more details
VNF CICD Process
This section lays out the VNF package lifecycle workflow.
- Load Images: Upload one or more VNFC software appliance images to the general repository.
- Create VNFC Lifecycle: For each VNFC in the VNF, create a resource lifecycle and include the scripts or software that implement the standard lifecycle actions.
- Create/Load VNF Package: For native packages, create a new package version. For foreign packages load a version.
- Create Test Packages: Create the test packages that will test VNF behaviour.
- Create or clean environment: Create or clean a development or pre-production environment.
- Load VNF package version: Load the VNF under test into the target environment.
- Load versions of test packages: Load dependent test packages
- Run test packages and store: Run behaviour test and store the results.
- Progress package to next state/stage: On success create a binary package with a date and a version.
Network Service CICD Process
This section lays out the Network Service package workflow.
- Design network service: Design the network service, including one or more VNF descriptors.
- Create package version: Create a network service package version.
- Create Test packages: Include test packages and behaviour tests that will evaluate the network service behaviour.
- Create or clean environment: Clean or create a development or pre-production environment.
- Load dependent VNF packages: Load the network service and all of its dependent VNF packages to the target environment.
- Load version of test packages: Load the test packages required to run behaviour scripts to the target environment.
- Run Tests and store package on success: Run network service behaviour tests and store the results.
- Move package to next state/stage: On success, create a dated version of the network service and store in the general repository.
Next Steps and further reading
To get started with a new project see getting started guide.
To learn more about the CI/CD Hub software, read the software overview section
|
OPCFW_CODE
|
I am testing the fixed Model to Model distance extension. It can now calculate the point-to-point correspondence on Windows, but I am having a hard time making sense of the resultant heatmap. As a reminder, this is the same 3D model with a geometric transformation applied, resulting in a slightly different shape. This is the output I get.
Let's focus on the incisors, as they display the problem clearly. How can the front of the incisor be colored about -1.5mm while the back of it is +1.5mm? The scalar plotted is the signed point-to-point distance.
If you measure approximately the corresponding distances on the source and target models, the magnitude of the difference at both front and back is right, but I don’t understand the sign difference in the model 2 model output.
any input on this?
@jcfr @pieper @lassoan
Did you put the data someplace where people can easily replicate the issue?
You can try with any of these
I usually set the mean as target and pc1 as source.
When you get the signed distance, it is as defined here, and that probably works well for simple shapes but not so well for these complex surfaces. The x-p vectors for the front and back of the tooth would be of similar length, but the surface normals are opposite, so that would be why the sign flips. The AbsolutePointToPointDistance (first image below) makes sense to me, but the signed distance (second image) doesn't have much meaning for this data.
Not quite sure why the surface normal would matter for a point to point distance calculation. For every vertex on source model, there is corresponding vertex on the target. It should simply calculate the distance between these vertices (in a paired sense), and the sign of the distance should come from this calculation (positive for distance larger in the target, and negative for the opposite).
Whatever the justification might be for it, it doesn’t make much sense to me.
I think the current behavior does make sense and the root of the issues is the lack of a consistent interpretation for “sign” in general for a surface like this one. You are correct that the point-to-point distance is very clear, and that is a positive number, the square root of the sum of the squares, defined only by the two points involved. But the “sign” would need to be with respect to some outside reference. In a simple case, say scaling a sphere about the center, the signed distance as calculated by the filter makes sense, because the surface normal of the sphere is a good approximation of the overall shape so that when the scale makes the sphere smaller the sign is negative and when you scale it larger the sign is positive. When the surface normals are not representative of the overall shape you get situations like the back of the tooth.
Perhaps what you really want is to calculate the principal moments or axes of the object and then take the dot product of the distance vector with those axes to define the sign of the distance (effectively approximating the object with an ellipsoid). I don't see that the ModelToModel module implements this. It would be pretty simple to script up in Python in Slicer using the SegmentStatistics module and some numpy manipulations.
In general defining the sign is going to be application-dependent, so picking the reference for the sign should be a part of the analysis design. For example I can imagine two skulls like yours where everything would be the same but the teeth would be larger in one than the other. In that case you’d want to define the sign of the distance in terms of the local axes of the tooth, not the full model of the head. Again I think the building blocks are all there in Slicer and just some code is needed.
It may also help if you remove the bulk linear transformation component and only visualize the remaining local deformation component.
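The axis-based sign suggested above can be sketched with numpy. This is an illustrative sketch, not code from the ModelToModel module: `signed_paired_distances` is a hypothetical helper, and the reference direction is simply taken as the first principal axis of the source point cloud.

```python
import numpy as np

def signed_paired_distances(source, target):
    """Signed point-to-point distances for paired vertices.

    The sign comes from projecting each displacement vector onto the
    first principal axis of the source point cloud, i.e. the shape is
    approximated by an ellipsoid-like reference frame instead of using
    per-vertex surface normals.
    """
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    # First principal axis of the source shape (PCA via SVD).
    centered = source - source.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    disp = target - source
    dist = np.linalg.norm(disp, axis=1)
    sign = np.sign(disp @ axis)
    sign[sign == 0] = 1.0  # treat a zero projection as positive
    return sign * dist

# Toy example: the target is the source shifted along +x, so every paired
# distance has magnitude 1 and all signs agree (the overall sign depends
# on the arbitrary orientation of the PCA axis).
src = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
tgt = src + np.array([1.0, 0, 0])
print(signed_paired_distances(src, tgt))
```

For a real analysis you would pick the reference axis deliberately (for example, the local axes of the tooth rather than of the whole skull), as discussed above.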
|
OPCFW_CODE
|
Find and remove over SSH?
My web server got hacked (Despite the security team telling me nothing was compromised) and an uncountable number of files have an extra line of PHP code generating a link to some Vietnamese website.
Given there are tens of thousands of files across my server, is there a way I can go in with SSH and remove that line of code from every file it's found in?
Please be specific in your answer, I have only used SSH a few times for some very basic tasks and don't want to end up deleting a bunch of my files!
Yes, a few lines of shell script would do it. I hesitate to give it to you, though, as if something goes wrong I'll get blamed for messing up your web server. That said, the solution could be as simple as this:
for i in $(find /where/ever -name '*.php'); do
    mv "$i" "$i.bak"
    grep -v "http://vietnamese.web.site" "$i.bak" > "$i"
done
This finds all the *.php files under /where/ever and removes any lines that contain http://vietnamese.web.site. It makes a *.bak copy of every file. After you run this and all seems good, you can delete the backups with
find . -name '*.php.bak' -exec rm \{\} \;
Your next task would be to find a new provider, as not only did they get hacked, but they apparently don't keep backups. Good luck.
Ernest,
There are php and HTML files that are affected, if I just use * would it check every file?
Of course, the actual code is a piece of PHP so it's not an actual link until it's parsed. I can put the <?php code> in place of the URL?
I will ask my host if they can do this for me to ensure it's done properly. If not, I will give it a go, I'm not going to try and do it manually.
All of these commands are very flexible. If you want the find to find all *.htm as well as all *.php files, then you could use predicates like find /where/ever -name '*.php' -or -name '*.htm'. The URL is just an example, use whatever unique string you can come up with to identify the lines you need to remove.
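To make the combined find + sed approach concrete, here is a self-contained dry run in a throwaway directory. `BADCODE_MARKER` is a stand-in for whatever unique string identifies the injected line on your server; rehearse on copies like this before touching the live webroot.

```shell
# Work in a throwaway directory so nothing real is touched.
cd "$(mktemp -d)"
printf 'good line\nBADCODE_MARKER evil\nanother good line\n' > index.php
printf '<p>ok</p>\nBADCODE_MARKER evil\n' > page.html

# Find affected .php and .htm(l) files, then delete the matching lines,
# keeping a .bak backup of every file sed modifies.
find . \( -name '*.php' -o -name '*.htm' -o -name '*.html' \) \
     -exec grep -l 'BADCODE_MARKER' {} \; |
while read -r f; do
    sed -i.bak '/BADCODE_MARKER/d' "$f"
done

cat index.php  # the marker line is gone; index.php.bak keeps the original
```

The `grep -l` step limits sed to files that actually contain the marker, so untouched files get no .bak copy.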
First create a regex, that matches the bad code (and only the bad code), then run
find /path/to/webroot -name \*.php -exec echo sed -i -e 's/your-regex-here//' {} \;
If everything looks right, remove the echo
I do it the following way, e.g. to delete files matching a particular name or extension:
rm -rf *cron.php*
rm -rf *match_string*
where match_string is any string. Make sure there is no space between * and the string name.
rm -f cron.php.*
This deletes all files in this folder named cron.php.[whateveryouwant]
|
STACK_EXCHANGE
|
oh fuck oh fuck it just keeps shrinking
(this is actually great for browsing, since the url bar is at the bottom, on the addon bar, i can hide it when i'm not using it!)
That's a lot of feeds
No idea, but you might want to browse content without all the speedtest crap.
God dammit it's Speedtest all over again
but with browser shots!
Mac versus PC rants
and Flame Wars
By recycling content, you are doing your part. Together, we can save a small portion that would eventually be filled.
This has been a CIPWTTKT Content Recycling PSA,
Thank you for your time
EDIT: @Baklr 2.0
1. Get a CIPWTTKT thread
2. Press Ctrl+F
3. Search Content
4. Switch page when finished searching through
So today I decided to try Minecraft on the upgraded (C2Ds with 2 GB of RAM woo) computers in the lab.
I download the legit version from minecraft.net and what do I see? It's not working, it won't connect and if it does, it gives out a graphic driver error.
I see some random dude downloaded a pirated copy and put it in "My Documents". I try it out, guess what?
It's working perfectly :psyduck:
I don't know why but these things run Minecraft way worse than my Laptop. I guess it's because I use a superior OS :smugdog:
oh look, a Facebook notification.
I have a different Firefox persona under Windows, and I can't remember for the life of me what I did with Chrome.
whoa dear how'd that tooltip get in the screenshot
My laptop just booted up and started ChkDsk. Is this bad? I know what ChkDsk does and it doesn't seem to be reporting any problems, except it found a corrupted file of some sort.
I'm pretty sure this is why I am not a mod... Because I would have already banned everyone for posting their browser toolbars... and gone back and banned everyone for posting their stupid ass speed tests...
Stop the browser images or :frog:
quick, someone post a screenshot of a speedtest, showing off your browser bar, with speccy on and a list of your hard drive names.
question to the mods: how hard would someone get banned for this?
I'll do it then
Ba! Ba! Ba ba ba! Space! Ba! Ba! Ba ba ba!
Oh thats where my laptop HDD went
And so was I:
It's become a fad in itself! :psyboom:
My heart just stopped
I can see the light
grandma, is that you...?
|
OPCFW_CODE
|
EVA Architecture
EVA's architecture in a nutshell
This page provides an overview of EVA's architecture: the tech structure, how it functions, events and messaging, the open source frameworks used, and more.
Microservices and their bounded context
EVA is not a microservice platform. The core of the EVA platform offers all available functionalities, but it can be surrounded by separate (micro)services that provide additional functionalities or enhance existing ones. The EVA platform is designed to effortlessly adopt the latest backend applications, frontend frameworks, and innovative solutions.
APIs with descriptive endpoints
You can explore the EVA API using the Service Explorer tool known as DORA.
A list of all services: API reference
Technical tenant structure for multi-tenant setups
To support multi-tenant set-ups, EVA follows these principles:
- Every EVA tenant is hosted separately, meaning no resources or data structures are shared between our customers. Environments (development, test and staging) are kept completely separated. However, some processing resources may be shared within your organization environments (for example, between your development and test environments).
- Environments within EVA are scripted for swift and easy deployments.
- Hierarchical Organization structure, such as region, country, and store, can be designed within EVA environments. This hierarchy facilitates permission controls and role management.
- Authentication can be based on the default EVA authentication mechanism, but can also be linked to external parties (OAuth/OpenID).
- EVA environments can be hosted in a multi-region setup, where one region serves as the primary source of truth for your organization setup and settings. Other regions, known as Slave regions, can then function independently and periodically report transactions to the Master region.
- Users can be allowed to act globally and log in to every region (Single Sign-On). This flexibility extends to other data aspects, all tailored to support your specific business requirements.
Events and messaging
EVA is an event- and message-driven platform. Nearly every API request generates one or more (event) messages distributed through the message broker (RabbitMQ). Consumers in the core application or add-ons (plugins) can act on these messages, enhancing core functionality.
Orchestration patterns used for microservices
EVA is a “contextually” driven service platform. Services are orchestrated based on context within the named processes. For example, “add a product to the shopping basket” triggers new processes to validate order and customer requirements. As a result, microservices in EVA aren't orchestrated like those in a typical microservices platform.
Open source frameworks and rationale
The EVA platform is built with the best of technology available, which is why the core uses a variety of open source frameworks and libraries.
- Elasticsearch: The standard for searching documents, supporting product catalog (PIM) searches, order processing, user databases, and more. Every change in any of those is mirrored to Elastic for fast and convenient searching.
- Redis: A fast key-value store used for user session data and short-lived caching.
- RabbitMQ: The foundation of EVA's eventing and messaging architecture.
Over time, EVA has grown into an extensive set of applications and modules. Although not all facets adhere to the same release schedule, they are released consistently and in a timely manner.
Releases are published centrally for all customers, allowing them to review the release notes for the latest versions, updates, and deprecations. The following four frontend functionalities are released uniformly:
More on release cycles can be found in the Introduction to releases page.
Customers have a full release cycle (four weeks) to update their versions and adopt the applicable changes introduced, and are allowed to be one release behind our latest-and-greatest.
|
OPCFW_CODE
|
Collect Earth is a user-friendly, Java-based tool that draws upon a selection of other software to facilitate data collection. The following training materials include guidance on the use of Collect Earth and most of its supporting software. This information is also available online and in video format at www.openforis.org. Documentation on the more technical components of the Collect Earth system (including SQLite and PostgreSQL) is available on the Collect Earth Github page. Collect Earth runs on Windows, Mac and Linux operating systems.
Collect Earth uses a Google Earth interface in conjunction with an HTML-based data entry form. Forms can be customized to suit country-specific classification schemes in a manner consistent with guidelines of the Intergovernmental Panel on Climate Change (IPCC), the European Commission (EC), the Food and Agriculture Organization of the UN and other international entities. The default Collect Earth form contains IPCC-consistent land use categories and sub-categories with land use sub-divisions from the European Commission’s Land Use/Cover Area frame Survey (LUCAS). For guidance on creating new customizations of the Collect Earth data entry form, visit the Collect Earth GitHub page. Chapter 3 explains the process of reviewing satellite imagery, assessing land use and land use change, and assigning attributes to sampling points through the Collect Earth data form.
Satellite imagery in Google Earth, Bing Maps and Google Earth Engine
Collect Earth facilitates the interpretation of high and medium spatial resolution imagery in Google Earth, Bing Maps and Google Earth Engine. Google Earth’s virtual globe is largely comprised of 15 meter resolution Landsat imagery, 2.5m SPOT imagery and high resolution imagery from several other providers (CNES, DigitalGlobe, EarthSat, First Base Solutions, GeoEye-1, GlobeXplorer, IKONOS, Pictometry International, Spot Image, Aerometrex and Sinclair Knight Merz). Microsoft’s Bing Maps presents imagery provided by DigitalGlobe ranging from 3m to 30cm resolution. Google Earth Engine’s web-based platform facilitates access to United States Geological Survey 30m resolution Landsat imagery. Collect Earth synchronizes the view of each sampling point across all three platforms.
The imagery used within Google Earth, Bing Maps and Google Earth Engine differs not only in its spatial resolution, but also in its temporal resolution. Collect Earth enables users to enter data regarding current land use and historical land use changes. Users can determine the reference period most appropriate for their land use monitoring objectives. The IPCC recommends a reference period of at least 20 years based on the amount of time needed for dead organic matter and soil carbon stocks to reach equilibrium following land-use conversion. Most of the imagery available in Bing Maps and Google Earth has been acquired at very irregular intervals over the past 10 years. In contrast, Earth Engine contains over 40 years of imagery that has been acquired every 16 days. The description of how to use Collect Earth in Chapter 3 includes guidance on navigating the strengths and weaknesses of these three imagery repositories to develop a more complete understanding of land use, land use change and forestry in a given site.
Sampling design in QGIS
QGIS is a free and open-source geographic information system that can be used to process data supporting the land use classification process. Where existing land use or land cover data is available in a spatial format, users can convert vector (points, lines, polygons) and raster (images) data into KML files that can be viewed in Google Earth during a land use classification with Collect Earth. KML files are also compatible with Google Fusion Tables and can be imported into Google Earth Engine.
Chapter 5 provides instructions on converting spatial data and also creating a sampling grid. A default, coarse (5km x 5km) grid of sampling points is available for download on the Collect Earth website. However, a medium or a fine scale grid comprised of more points is recommended for a full and robust LULUCF assessment for a country or sub-national project site. Chapter 5 explains the process of generating a sampling grid and populating its attributes table to ensure compatibility with Collect Earth.
Database options: SQLite and PostgreSQL
The data entered in Collect Earth is automatically saved to a database. Collect Earth can be configured for a single-user environment with a SQLite database. This arrangement is best for individual users or for geographically dispersed teams. A PostgreSQL database is recommended for multi-user environments, particularly where users will work from a shared network. The PostgreSQL configuration of Collect Earth facilitates collaborative work by allowing users to see in real time when new data has been entered. It also makes it easier for an administrator to review the work of others for quality control purposes.
Data analysis with Saiku Server
Both types of databases automatically populate Saiku Server, an open-source web-based software produced by Meteorite Consulting. A version of this open-source software has been customized for visualizing and analyzing Collect Earth data. Countries using Collect Earth for a national land use assessment may generate data in Collect Earth for tens of thousands of points. Saiku organizes this wealth of information and enables users to run queries on the data and immediately view the results in tabular format or as graphs. Chapter 4 explains how Saiku users can quickly identify trends and prepare inputs for LULUCF reporting to the UNFCCC and other entities involved in the sector.
Image analysis with Google Earth Engine
Collect Earth facilitates land use assessment through a sampling approach rather than wall-to-wall mapping. However, land use data (point vector files) generated with Collect Earth can be used as training sites for wall-to-wall image classifications. Chapter 6 reviews the procedure for using Collect Earth data to conduct a supervised (wall-to-wall) classification in Google Earth Engine.
|
OPCFW_CODE
|
printf tilde operator in c
I know that the ~ operator is NOT, so it inverts the bits in a binary number
unsigned int a = ~0, b = ~7;
printf("%d\n",a);
printf("%d\n",b);
printf("%u\n",a);
printf("%u\n",b);
I guessed 0 will be 1 and 7 (0111) will be 8 (1000) but the output was
-1
-8
4294967295
4294967288
how did ~0 and ~7 become -1, and -8? also why is %u printing that long number?
This program exhibits undefined behavior, by using format specifier that doesn't match the type of the argument. %d expects an argument of type int, but you are passing unsigned int. The "long number" is an unsigned int value with all bits set to 1. The value 0 consists of 32 zero bits, so ~0 is 32 one bits.
@IgorTandetnik tried int and unsigned int but it prints same output.
a and b may be unsigned int, but 0 and 7 are int, so the code is negating signed integers before assigning the results to unsigned variables. But undefined behavior is still undefined behavior.
The ~ operator may set the most significant bit to a 1. In 2's complement for signed integers, the most significant bit is used as the sign bit. When the sign bit is a 1, the value is negative.
7 is not 0111, it's 00000000000000000000000000000111 (assuming a platform where int is 32 bits). That's why the inverse is a very large number - it has lots of one bits.
@ThomasMatthews but I set it as unsigned int. can it still be negative?
@RemyLebeau ohhh thank you I now get it
Use x --> printf("%x\n",a); for more insight.
The ~ operator simply inverts all bits in a number.
On most modern compilers, int is 32 bits in size, and a signed int uses 2's complement representation. Which means, among other things, that the high bit is reserved for the sign, and if that bit is 1 then the number is negative.
0 and 7 are int literals. Assuming the above, we get these results:
0 is bits 00000000000000000000000000000000b
= 0 when interpreted as either signed int or unsigned int
~0 is bits 11111111111111111111111111111111b
= -1 when interpreted as signed int
= 4294967295 when interpreted as unsigned int
7 is bits 00000000000000000000000000000111b
= 7 when interpreted as either signed int or unsigned int
~7 is bits 11111111111111111111111111111000b
= -8 when interpreted as signed int
= 4294967288 when interpreted as unsigned int
In your printf() statements, %d interprets its input as a signed int, and %u interprets as an unsigned int. This is why you are seeing the results you get.
The ~ operator inverts all bits of the integer operand. So for example where int is 32-bit, 1 is 0x00000001 in hex and its one's complement is 0xFFFFFFFE. When interpreted as unsigned, that is 4 294 967 294, and as two's complement signed, -2.
|
STACK_EXCHANGE
|
The Mech Touch – Chapter 2936: Doctor Avalon Perris
Though Ves only had a limited comprehension of biomechs, he could already tell that this was not a simple specialization to focus on. Although all biomechs possessed some self-regeneration abilities, their healing process was very slow without outside support. Much like our own bodies, it could take weeks or months to heal moderate wounds!
Once he and his excellent staff had made their preparations, they all entered the cargo compartment while fully kitted out for a hazardous mission.
“What the heck is your specialty?”
“My plan is to specialize in rapid self-regeneration. I have studied this subject extensively in my free time and I have already developed some methods that might boost the regeneration of soft organic tissue under specific conditions.”
That was sufficient for Ves. It was pointless for him to issue any additional cautions. He believed the gem was smart enough to make the sensible choice to cooperate and play along with Ves. As long as their goals didn’t conflict with each other, they could both get what they wanted!
Though this was a risky decision, he had already come to trust the gem to this extent. He decisively activated an external comm interface, allowing the gem to enter specific commands by directing Ves to press specific buttons.
Everyone was aware of this possibility, so no one moved impulsively. Even Lucky was content to rest on Ves’ shoulders, his tail flicking with unease.
“You want access to a comm interface?”
Although Doctor Perris also had her faults, Ves was certain that he could deal with them as long as he nurtured her properly.
There were three reasons why Ves paid attention to Avalon Perris.
“I worked as an assistant mech designer for one of the biomech corporations based on the planet. I did not design any biomechs by myself, but I assisted in the development of a dozen different models.”
Fortunately, his gem did not leave him hanging. It vibrated a bit and tugged in the direction of his arm. Ves interpreted the gem’s actions as best as he could.
Of the eight people who were prepared to leap into the portal, one of them clearly stood out. A woman dressed in a sleeker suit of lightweight combat armor awkwardly moved her limbs as though she was still trying to get used to wearing something entirely different from a hazard suit.
Ves lifted the gem in front of his faceplate and shook it a bit. “Okay, we’re here. Can you tell the base systems to refrain from treating me and my men as intruders? I don’t want to get crushed by ten thousand gravities.”
The exploration team was about to venture through one of the most important research facilities of a second-rate state. Safe passage was never guaranteed, especially as they weren’t originally authorized to enter the lab in the first place.
The honor guard stepped in first. Once they confirmed that the internal base defenses did not react to their intrusion, the others followed suit.
Ves was certain that there were multiple layers of access authorization. The emergency code that he had just transmitted with the help of his sentient gem should have granted him and his team surface-level access.
The immediate area around the portal on the other side had become quite crowded due to the abundance of armored personnel. They carefully stayed within a marked radius of three meters for fear of triggering the automatic base defenses.
She was the only non-combatant in the group. Since they were about to explore a pinnacle research laboratory, how could they not bring a biotech expert along?
And this was only one of the potential risks that intruders of a pinnacle lab had to confront. Ves did not want to take any step forward until his gem managed to convince him that the way ahead was safe.
Even if she had recently renounced her identity as a Lifer, it was hard to shake off a lifetime of indoctrination and hero worship! The Supreme Sage was a famous figure within the Life Research Association, and practically anyone who joined the biotech industry wanted to walk in his footsteps!
|
OPCFW_CODE
|
Characteristics of an effective test suite
A useful suite of tests must reduce the overall effort in verifying that a product exhibits high structural and functional quality. A primary driver for any development team is that the test suite verifies high quality for each successive build. The test suites must be readily maintainable to prevent future regression failures.
Consider a common approach to testing. During feature development, a team runs a particular test case only a few times. After the feature is integrated into a build, the test case is run after each code change to verify functional integrity and compliance with a business rule or use case. Whether the suite is maintained by the build team or handed off externally in the future, the feature set will expand gradually, with the regression suite expanding accordingly. Maintaining the regression test suite benefits the team, boosting confidence in the build’s short-term and long-term quality.
A test suite defines the behavior expectations of a system as it is put to use in various test cases.
System-level test cases define user stories.
Unit tests define details on business rules.
Integration tests define contracts and integration flows that indicate all major dependencies.
While conventional requirements specification often becomes dated quickly, test cases are more dynamic and correspond more closely with all the aspects of the software design. As the team maintains the tests, they remain current with each build. For this reason alone, a good test suite should readily provide accurate specifications for a software system.
Building faster by testing early
System changes may include functionality that supports new use cases, additional steps in the workflow, and integrations with third-party software. The effort to manually verify software changes depends on scope and complexity. The time spent varies greatly, taking anywhere from a few minutes to hours. It's best to find bugs or problems soon after completing development changes since less effort will be needed to find a remedy earlier in the build.
Commonly, testing is deferred: instead of the developer, a tester eventually identifies, reproduces, and logs defects. By comparison, the manual effort to identify, isolate, and fix the bug often at least doubles, for both testers and developers.
However, what if it were feasible to get feedback while still working on the code? If a developer could get immediate feedback, the adjustment could be part of the initial coding change effort, with no defect ever arising. The preliminary testing would alleviate much of the context-switching from development to QA. The feedback would be applicable to functional, structural, and business use-case testing.
In addition to the QA effort, the development effort to refactor will significantly increase if quality feedback is deferred to future code review. Accordingly, the test suite needs to offer the ability to provide immediate feedback, leading to more efficient use of resources.
Testing for efficiency and resiliency
Many tests are code or pseudo-code, for which there may be a high ongoing cost to maintain. With many aspects to test maintenance, the coupling of tests to system internals tends to be the most problematic. Tightly coupling a test suite to the internals of the system causes much fragility. Each change to the code base necessitates a change to one or more corresponding tests, even with no externally detectable behavior changes. Enforcing a high degree of test coverage becomes a big ask. Not surprisingly, most teams tend to avoid a high number of tedious test changes.
As complexity increases in a tightly-coupled test environment, many teams reach a point where they consider deleting extraneous test cases — say, when module boundaries were identified incorrectly, or when the modules are merely leaky abstractions that expose all of their internals through their APIs.
The thinking is this: it takes more time to fix the tests than to implement the code changes. It is, of course, impossible to completely decouple all tests from the software system. However, relying only on a suite of end-to-end tests would not give enough confidence.
Ideally, it is best to pursue a loosely-coupled test suite that exhibits the minimum amount of coupling in critical areas.
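As a tiny illustration of the loose coupling argued for above (the rule and names here are hypothetical, not from any particular system), the tests below assert only on the inputs and outputs of a business rule, so any internal refactoring that preserves the behavior leaves them green:

```python
def final_price(price, quantity):
    """Business rule: 10% off the total once you buy 10 or more items."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return round(total, 2)

# Behavior-level tests: they exercise the public contract only, never
# how final_price computes the total internally.
def test_no_discount_below_threshold():
    assert final_price(5.0, 9) == 45.0

def test_discount_applies_at_threshold():
    assert final_price(5.0, 10) == 45.0  # 50.0 minus the 10% discount

test_no_discount_below_threshold()
test_discount_applies_at_threshold()
```

If `final_price` were rewritten to use a lookup table or a different rounding path with the same results, neither test would need to change — the hallmark of a loosely-coupled suite.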
The characteristics of an ideal test suite include:
all functionality works according to expectations,
existing functionality works (even when features are extended or new features are added),
a sustainable system architecture,
an accurate specification of the system,
a quick feedback loop,
and minimal maintenance effort.
|
OPCFW_CODE
|
Manual ACK for AggregatingMessageHandler
I'm trying to build integration scenario like this Rabbit -> AmqpInboundChannelAdapter(AcknowledgeMode.MANUAL) -> DirectChannel -> AggregatingMessageHandler -> DirectChannel -> AmqpOutboundEndpoint.
I want to aggregate messages in-memory and release it if I aggregate 10 messages, or if timeout of 10 seconds is reached. I suppose this config is OK:
@Bean
@ServiceActivator(inputChannel = "amqpInputChannel")
public MessageHandler aggregator(){
AggregatingMessageHandler aggregatingMessageHandler = new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(), new SimpleMessageStore(10));
aggregatingMessageHandler.setCorrelationStrategy(new HeaderAttributeCorrelationStrategy(AmqpHeaders.CORRELATION_ID));
//default false
aggregatingMessageHandler.setExpireGroupsUponCompletion(true); //when grp released (using strategy), remove group so new messages in same grp create new group
aggregatingMessageHandler.setSendPartialResultOnExpiry(true); //when expired because timeout and not because of strategy, still send messages grouped so far
aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(TimeUnit.SECONDS.toMillis(10))); //timeout after X
//timeout is checked only when new message arrives!!
aggregatingMessageHandler.setReleaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(10, TimeUnit.SECONDS.toMillis(10)));
aggregatingMessageHandler.setOutputChannel(amqpOutputChannel());
return aggregatingMessageHandler;
}
Now, my question is: is there any easier way to manually ack messages, other than creating my own implementation of AggregatingMessageHandler like this:
public class ManualAckAggregatingMessageHandler extends AbstractCorrelatingMessageHandler {
...
private void ackMessage(Channel channel, Long deliveryTag){
try {
Assert.notNull(channel, "Channel must be provided");
Assert.notNull(deliveryTag, "Delivery tag must be provided");
channel.basicAck(deliveryTag, false);
}
catch (IOException e) {
throw new MessagingException("Cannot ACK message", e);
}
}
@Override
protected void afterRelease(MessageGroup messageGroup, Collection<Message<?>> completedMessages) {
Object groupId = messageGroup.getGroupId();
MessageGroupStore messageStore = getMessageStore();
messageStore.completeGroup(groupId);
messageGroup.getMessages().forEach(m -> {
Channel channel = (Channel)m.getHeaders().get(AmqpHeaders.CHANNEL);
Long deliveryTag = (Long)m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
ackMessage(channel, deliveryTag);
});
if (this.expireGroupsUponCompletion) {
remove(messageGroup);
}
else {
if (messageStore instanceof SimpleMessageStore) {
((SimpleMessageStore) messageStore).clearMessageGroup(groupId);
}
else {
messageStore.removeMessagesFromGroup(groupId, messageGroup.getMessages());
}
}
}
}
UPDATE
I managed to do it after your help. The most important parts: the connection factory must have factory.setPublisherConfirms(true), and the AmqpOutboundEndpoint must have these two settings: outboundEndpoint.setConfirmAckChannel(manualAckChannel()) and outboundEndpoint.setConfirmCorrelationExpressionString("#root"). Here is the implementation of the rest of the classes:
public class ManualAckPair {
private Channel channel;
private Long deliveryTag;
public ManualAckPair(Channel channel, Long deliveryTag) {
this.channel = channel;
this.deliveryTag = deliveryTag;
}
public void basicAck(){
try {
this.channel.basicAck(this.deliveryTag, false);
}
catch (IOException e) {
e.printStackTrace();
}
}
}
public abstract class AbstractManualAckAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {
public static final String MANUAL_ACK_PAIRS = PREFIX + "manualAckPairs";
@Override
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
Map<String, Object> aggregatedHeaders = super.aggregateHeaders(group);
List<ManualAckPair> manualAckPairs = new ArrayList<>();
group.getMessages().forEach(m -> {
Channel channel = (Channel)m.getHeaders().get(AmqpHeaders.CHANNEL);
Long deliveryTag = (Long)m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
manualAckPairs.add(new ManualAckPair(channel, deliveryTag));
});
aggregatedHeaders.put(MANUAL_ACK_PAIRS, manualAckPairs);
return aggregatedHeaders;
}
}
and
@Service
public class ManualAckServiceActivator {
@ServiceActivator(inputChannel = "manualAckChannel")
public void handle(@Header(MANUAL_ACK_PAIRS) List<ManualAckPair> manualAckPairs) {
manualAckPairs.forEach(manualAckPair -> {
manualAckPair.basicAck();
});
}
}
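Pulled together, the wiring described in the UPDATE might look like the configuration sketch below. This is a sketch under the assumption that Spring Integration AMQP is on the classpath; the bean and channel names (manualAckChannel, outboundEndpoint) are the ones used in this question, not prescribed by the framework:

```java
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.MessageChannel;

@Configuration
public class ManualAckConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory("localhost");
        factory.setPublisherConfirms(true); // required for confirm callbacks
        return factory;
    }

    @Bean
    public MessageChannel manualAckChannel() {
        return new DirectChannel();
    }

    @Bean
    public AmqpOutboundEndpoint outboundEndpoint(AmqpTemplate amqpTemplate) {
        AmqpOutboundEndpoint endpoint = new AmqpOutboundEndpoint(amqpTemplate);
        // On a positive publisher confirm, the whole request message (#root),
        // including the manualAckPairs header, is sent to manualAckChannel,
        // where ManualAckServiceActivator performs the acks
        endpoint.setConfirmAckChannel(manualAckChannel());
        endpoint.setConfirmCorrelationExpressionString("#root");
        return endpoint;
    }
}
```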
Right, you don't need such complex logic in the aggregator.
You can simply acknowledge them after the aggregator release - in a service activator between the aggregator and that AmqpOutboundEndpoint.
And right, you would use basicAck() there with the multiple flag set to true:
@param multiple true to acknowledge all messages up to and including the supplied delivery tag
Well, for that purpose you definitely need a custom MessageGroupProcessor to extract the highest AmqpHeaders.DELIVERY_TAG for the whole batch and set it as a header for the output aggregated message.
You might just extend DefaultAggregatingMessageGroupProcessor and override its aggregateHeaders():
/**
* This default implementation simply returns all headers that have no conflicts among the group. An absent header
* on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
* more advanced conflict-resolution strategies if necessary.
*
* @param group The message group.
* @return The aggregated headers.
*/
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
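In such an override, the essential step is reducing the group's AmqpHeaders.DELIVERY_TAG values to their maximum before storing the result as a header on the output message. Detached from the Spring Integration types, that reduction is nothing more than a max over the tags (the class below is an illustrative helper, not part of any framework API):

```java
import java.util.List;
import java.util.Optional;

public class DeliveryTags {
    /**
     * Returns the highest delivery tag in the group, if any.
     * With the multiple flag set to true, acking this tag acknowledges
     * every message up to and including it on the same channel.
     */
    public static Optional<Long> highest(List<Long> deliveryTags) {
        return deliveryTags.stream().max(Long::compare);
    }
}
```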
If I ACK messages after the aggregator release, isn't there a possibility that the messages do not reach the AmqpOutboundEndpoint but I still ACK them (meaning they are lost from the AMQP queue)? I know this is highly unlikely, but still a problem?
You are acking received messages. That's not related to the produced ones. Or do you want to ack only in case you send to some further AMQP queue properly?
Yes, I want to ACK received messages (in AmqpInboundChannelAdapter(AcknowledgeMode.MANUAL)). Basically, wait 10 seconds in order to aggregate messages and release them to a new exchange. If the release (to the AMQP exchange) is successful, only then ACK the received messages (on the receiving AMQP queue). I need some kind of message throttler/aggregator here.
OK. So, use those AmqpHeaders.CHANNEL and the highest AmqpHeaders.DELIVERY_TAG headers as options for the basicAck() in the AmqpOutboundEndpoint's confirmCallback. See its confirmAckChannel: https://docs.spring.io/spring-integration/docs/4.3.12.RELEASE/reference/html/amqp.html#amqp-outbound-channel-adapter
Is highest correct here? Since I can aggregate, say, out of messages 1 2 3 4 only (2,3,4) and send them, while 1 still waits to be aggregated and sent. I suppose that if I use the highest tag, I lose 1?
OK. So you can build a custom header like deliveryTags holding the tags of all the messages to aggregate, and use basicAck() in a loop to ack them all separately.
The same aggregateHeaders() might be helpful here.
Thanks for all the help. Could you just tell me how to use setConfirmAckChannel? Should it be set to amqpInputChannel in my case? Where to create this loop for acking - in the setConfirmCorrelationExpression?
setConfirmCorrelationExpressionString must be #root to refer to the whole requestMessage, so you will be able to get those desired headers there. setConfirmAckChannel must be some new channel with a ServiceActivator subscribed to it, containing the logic to loop over the deliveryTags header in its input message.
You can take a look about confirms in this sample: https://github.com/spring-projects/spring-integration-samples/tree/master/basic/amqp
Let us continue this discussion in chat.
Please, raise a new SO question - too much discussion here will confuse others
|
STACK_EXCHANGE
|
ikhinehi: What is PHP programming? (20 July 2012)
karan j: It's a server-side scripting language used to build dynamic web pages. (08 September 2012)
andy chuks: How to delete rows across multiple MySQL tables? I have two tables containing similar data, and I want to delete from one of the tables where it contains the same data as the other table. I used the script below and ran into a bogus query that got my MySQL server restarted. How do I do this better?
DELETE table1.link FROM table1 INNER JOIN table2 WHERE table1.link=table2.link (01 November 2011)
shegun babs: That's not the proper way to use DELETE in SQL queries. You can only use SELECT that way.
You'll have to delete both records with separate SQL statements, or else use a foreign key constraint to reference the foreign key with ON DELETE CASCADE:
CREATE TABLE parent (id INT NOT NULL,
PRIMARY KEY (id)
CREATE... (14 April 2012)
ogugua belonwu: @shegun babs, in this case a foreign key constraint will not work, as there will be records that do not obey the foreign key on the second table. (14 April 2012)
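For what it's worth, MySQL does support multi-table DELETE with a JOIN; the asker's statement is close, it just needs to name the target table rather than a column, along the lines of `DELETE table1 FROM table1 INNER JOIN table2 ON table1.link = table2.link`. The intended semantics (drop every row of one table whose key also appears in the other) can be illustrated with plain collections; this sketch is purely illustrative and not tied to any database driver:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MultiTableDelete {
    /** Returns table1 with every link that also appears in table2 removed. */
    public static List<String> deleteMatching(List<String> table1, List<String> table2) {
        Set<String> links = new HashSet<>(table2); // keys present in table2
        List<String> remaining = new ArrayList<>(table1);
        remaining.removeIf(links::contains);       // "DELETE ... INNER JOIN" semantics
        return remaining;
    }
}
```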
ogugua belonwu: How do you run your queries? OK, this should be like a discussion!
I had several problems trying to process MySQL queries. Some of the problems include multi-lingual support, ensuring the code is secure enough against XSS, and lots more (luckily, I have been able to sort out some of those).
So, the questions are:
1. How do you run your MySQL queries? Do you use the mysql_query functions... (31 October 2011)
andy chuks: Ogugua, how do you display in UTF-8 format on the webpage after retrieval? (01 November 2011)
ogugua belonwu: How I do it:
- make sure my db supports utf8
- store my data in its pure form, i.e. do not use htmlentities etc.
- make sure the charset of my page is utf8
- use prepared statements to avoid injection (01 November 2011)
obana giedia: QBASIC — write a QBasic program to use the trapezoidal rule to solve a simple problem on numerical integration involving 8 variables. (13 August 2011)
andy chuks: Don't know QBasic, but think this might give you a hint.
QuickBasic program to find the value of the integral from 0 of dt / sqrt((t^2 + 1)(3t^2 + 4)) using the trapezoidal rule with n = 6:
REM "Trapezoidal rule"
CLS
DEF fnf (t) = 1 / (SQR((t ^ 2 + 1) * (3 * t ^ 2 + 4)))
INPUT "low level of integral"; a
INPUT "high level of integral"; ... (14 August 2011)
obana giedia: Thank you, you are too much. (11 February 2012)
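The trapezoidal rule itself is language-agnostic: split [a, b] into n strips of width h, sum the interior function values plus half the endpoint values, and multiply by h. As a sketch, the same scheme in Java, using the integrand and n = 6 from the QBasic snippet above as an example:

```java
import java.util.function.DoubleUnaryOperator;

public class Trapezoid {
    /** Approximates the integral of f over [a, b] with n trapezoids. */
    public static double integrate(DoubleUnaryOperator f, double a, double b, int n) {
        double h = (b - a) / n;
        // Endpoints are weighted 1/2, interior points weighted 1
        double sum = (f.applyAsDouble(a) + f.applyAsDouble(b)) / 2.0;
        for (int i = 1; i < n; i++) {
            sum += f.applyAsDouble(a + i * h);
        }
        return sum * h;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f =
                t -> 1.0 / Math.sqrt((t * t + 1) * (3 * t * t + 4));
        System.out.println(integrate(f, 0.0, 1.0, 6));
    }
}
```

Because the rule is exact for linear functions, integrating f(x) = x over [0, 1] returns exactly 0.5, which makes a handy sanity check.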
JamesOne Smith: New features in Version 2011!
- Easy-to-use, graphical data mapping interface
- Instant data transformation
- XSLT 1.0/2.0 and XQuery code generation
- Java, C#, and C++ code generation
- Advanced data processing functions
- Support for all major relational databases, including SQL Server, IBM DB2, Oracle, and more
- Integration with Altova StyleVision for report rendering
- Visual Studio &... (26 July 2011)
ogugua belonwu: What are we looking at here? (02 November 2011)
ogugua belonwu: Why is PHP referred to as a server-side language? Each time we read that PHP is a server-side language; for some starters this may be quite strange, while for others it has become a kind of slogan one should cram.
OK, this post is here to make you understand the actual difference between server-side programming and client-side programming.
When you request a page (I mean when you type the website address... (06 November 2010)
andy chuks: PHP tricks: referral URL. Show users how they got to your site:
$referer = $_SERVER['HTTP_REFERER'];
echo "You reached this site via " . $referer;
Send users to a certain page depending on where they come from to your site.
The script below... (27 October 2010)
ogugua belonwu: How can I know the location a person is visiting from using PHP? (29 October 2010)
ogugua belonwu: PHP hosting. I have a PHP application and I want to host it on a Windows server; is it possible?
If it is possible, is there anything I need to do to enable it to work optimally? (04 October 2010)
Aminat Abiodun Lawal: What do I do? Please, I need help. I am learning PHP and I am using WampServer, but I have some files which were designed using Dreamweaver, along with their database. I can preview the individual pages, i.e. the standalone pages, with any server, e.g. WampServer, AppServ, etc., but I can't preview the index with WampServer, and I don't want to use AppServ, i.e. the page that contains... (07 September 2010)
ogugua belonwu: My sis, while designing your websites, your pages are meant to have all the links to the other pages; it shouldn't be that only a single page has links to the other pages... (18 September 2010)
oluwaseun ajayi: When you navigate to the page that has all the links, what did you see there? (28 November 2010)
Aminat Abiodun Lawal: PHP — I need help on PHP. (07 September 2010)
aidee udoh: THIS IS AN EMERGENCY, PLEASE. I REALLY NEED TO KNOW PHP NOW. (09 September 2010)
ogugua belonwu: At what point are you stuck? What exactly should we help you with? (18 September 2010)
stan aligbe: PHP — I want to learn PHP too. (26 August 2010)
aladeokin oluwabunmi: I am new to PHP programming. Andy, your questions have really been helpful for a beginner like me. I am a novice when it comes to programming, but I would like a step-by-step method of learning PHP programming, so that as we study it step by step, we can come on here and ask questions.
OG, may God bless you, and thanks for imparting knowledge to us. (24 August 2010)
Then you can proceed to learn, either by getting a good tutor or by studying on your own.
You can download PHP materials from websites like www.4shared.com.
If you encounter problems, feel free to post your problems here.
Good luck. (15 August 2010)
ogugua belonwu: PHP is a general-purpose scripting language that is especially suited to server-side web development, where PHP generally runs on a web server. Any PHP code in a requested file is executed by the PHP runtime, usually to create dynamic web page content. It can also be used for command-line scripting and client-side GUI applications. PHP can be deployed on... (02 August 2010)
andy chuks: Phew!! PHP is really broad. Thanks OG for the info. (02 August 2010)
|
OPCFW_CODE
|
Function components cannot be given refs
What RMWC Version are you using [major.minor.patch]: 14.0.7
Name your build system [Webpack, Rollup...]: pnpm with vite
Describe the bug with as much detail as possible:
I am getting the following error, but I'm not using any ref(s)
Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()?
Check the render method of `j`.
at j (http://localhost:3000/node_modules/.vite/deps/chunk-U6EQ7XH4.js?v=03693eb0:1056:15)
at http://localhost:3000/node_modules/.vite/deps/chunk-U6EQ7XH4.js?v=03693eb0:1120:16
at div
at http://localhost:3000/node_modules/.vite/deps/chunk-IBWRNP3F.js?v=03693eb0:561:58
at Xe (http://localhost:3000/node_modules/.vite/deps/chunk-6KTHFJFM.js?v=03693eb0:5236:13)
at http://localhost:3000/node_modules/.vite/deps/chunk-6KTHFJFM.js?v=03693eb0:5241:18
at div
at div
at GroupSelector (http://localhost:3000/src/components/GroupSelector/GroupSelector.tsx:22:3)
at div
at div
at Home (http://localhost:3000/src/scenes/Home.tsx?t=1707129704342:32:36)
at div
at Nt (http://localhost:3000/node_modules/.vite/deps/@x-act_ui.js?v=272820f1:50776:10)
at ta (http://localhost:3000/node_modules/.vite/deps/@x-act_ui.js?v=272820f1:50807:13)
at div
at div
at wt (http://localhost:3000/node_modules/.vite/deps/@x-act_ui.js?v=272820f1:51073:12)
at oa (http://localhost:3000/node_modules/.vite/deps/@x-act_ui.js?v=272820f1:51024:12)
at LoggedInPage (http://localhost:3000/src/App.tsx?t=1707129704342:60:40)
at App (http://localhost:3000/src/App.tsx?t=1707129704342:38:3)
at dl (http://localhost:3000/node_modules/.vite/deps/@x-act_ui.js?v=272820f1:51426:13)
at X (http://localhost:3000/node_modules/.vite/deps/@x-act_react-utils.js?v=f81ec410:141:11)
at W (http://localhost:3000/node_modules/.vite/deps/@x-act_react-utils.js?v=f81ec410:30:13)
at QueryClientProvider (http://localhost:3000/node_modules/.vite/deps/@tanstack_react-query.js?v=22c5fec2:2583:3)
at Router (http://localhost:3000/node_modules/.vite/deps/chunk-W2K2MWS2.js?v=03693eb0:3925:15)
at BrowserRouter (http://localhost:3000/node_modules/.vite/deps/chunk-W2K2MWS2.js?v=03693eb0:4660:5)
What happened, and what was supposed to happen:
The error started appearing, but unfortunately I don't know exactly when. I don't think it should ever have been present, though. I suspect that it comes from RMWC somewhere...
@EmiBemi As you mentioned, this has already been fixed:
https://github.com/rmwc/rmwc/pull/1372/files#diff-f37a37b3d7d9a35bde510dec5925e91640816087541b38928e6604298e2e9fff
|
GITHUB_ARCHIVE
|
I’m 13 and I’m interested in Judaism. (Reform)
My name is Charlie, and for a long time, since sixth grade, I’ve been heavily interested in Judaism. I’m not looking to convert just yet. I wanna wait till I’m a full adult. But I have some questions and fears.
I have extremely unsupportive parents. I wanted to celebrate Yom Kippur, but I can’t go to a synagogue. I’m also not good at cooking, and I don't think my mom wants me celebrating Hanukkah either because even though she said she would buy me a menorah, she keeps talking about Christmas around me to mock me. It hurts a lot. :( I feel so hopeless, and I have no support groups or Jewish friends to help me. I feel so invalid and fake, and it hurts so much.
How can a person in my situation learn and practice more Judaism?
Learn as much as you can on your own, and when you become an adult you can make your own decisions. You have your whole life ahead of you!
Not sure if you're asking a question?
Try to make a Jewish friend who knows and practices Judaism.
Try to find meaning within the context of your own religion. Don't antagonize your mother.
Learn about the 7 laws of Noach which were given to all non-Jews, that is what God expects from you (start here: https://www.chabad.org/search/keyword_cdo/kid/2123/jewish/Noahide-Laws-The-Seven.htm)
Many people have difficulty hearing that their children are interested in converting to other religions. Many of the world's religions have a rich history and culture, and to practitioners, deciding to convert away seems like spitting on that rich history and culture. It sounds like your mother is not too pleased with your wanting to convert, and that's quite understandable. Try to understand your mother.
I recommend that as long as you live by your parents, you should respect their religion, downplay your interest in Judaism (or even not bring it up at all), and meanwhile read up on Judaism and Noahidism on your own, as a kind of hobby, to see whether you'd really be up for it once you've come of age and are out of the house.
Take heart; our forefather Avraham also grew up in a family that was, shall we say, less than supportive of his attempts to discover G-d (they had him thrown in jail and threatened with execution). Aside from all of the good advice in the other comments, pray to G-d that He lead you in the right path, and may He indeed do so.
Most rabbis will not convert a minor, especially without parental approval or permission. Use these years to privately learn anything you can about our beautiful faith. Here’s a good website to start you off: My Jewish Learning.
There are some Reform and Conservative synagogues that livestream their services. Some of these livestreams are available to the public online; others are private and require you to contact the rabbi to gain access. Just Google “synagogue livestream”.
Opening a line of communication with your local rabbi would be a good idea. Be sure to explain your situation and desire to learn. Though you can’t convert right now, perhaps the rabbi would be able to send you some helpful resources to stimulate your growth in the meantime.
Familiarize yourself with the Seven Laws of Noah. You don’t have to be Jewish to worship and follow the Creator.
Finally, the community here on Mi Yodeya is ready to answer any questions you may have. Be sure to utilize the search function to ensure there is not already an existing question.
Good luck in your journey, and may God always guide you throughout your life!
I'm not so sure that MJL is a good website for the purpose, because a lot of the information there won't necessarily be a good preparation for an Orthodox conversion, which is the only kind that will be accepted by all. Better sites might be chabad.org or aish.com.
@Meir OP was pretty clear from the title they are not interested in an Orthodox conversion. In any case, MJL always presents the traditional approach in every article. Let OP make their own decision as they develop and learn more about Judaism. Chabad and Aish also are extremely biased even from an Orthodox perspective. They just have opinions that line up more with what you believe, which is why you would naturally suggest them.
|
STACK_EXCHANGE
|
LTTNG-UST-CYG-PROFILE(3) LTTng Manual LTTNG-UST-CYG-PROFILE(3)
lttng-ust-cyg-profile - Function tracing (LTTng-UST helper)
Compile your application with the compiler option -finstrument-functions.
Launch your application by preloading liblttng-ust-cyg-profile-fast.so for fast function tracing:
$ LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app
Launch your application by preloading liblttng-ust-cyg-profile.so for slower, more verbose function tracing:
$ LD_PRELOAD=liblttng-ust-cyg-profile.so my-app
When the liblttng-ust-cyg-profile.so or the liblttng-ust-cyg-profile-fast.so library is preloaded before a given application starts, all function entry and return points are traced by LTTng-UST (see lttng-ust(3)), provided said application was compiled with the -finstrument-functions compiler option. See lttng(1) to learn more about how to control LTTng tracing sessions.

Function tracing with LTTng-UST comes in two flavors, each one providing a different trade-off between performance and robustness:

liblttng-ust-cyg-profile-fast.so
    This is a lightweight variant that should only be used where it can be guaranteed that the complete event stream is recorded without any missing events. Any kind of duplicate information is left out. At each function entry, the address of the called function is recorded in an LTTng-UST event. Function exits are recorded as another, empty LTTng-UST event. See the Fast function tracing section below for the complete list of emitted events and their fields.

liblttng-ust-cyg-profile.so
    This is a more robust variant which also works for use cases where events might get discarded, or not recorded from application startup. In these cases, the trace analyzer needs extra information to be able to reconstruct the program flow. At each function entry and exit, the address of the called function and the call site address are recorded in an LTTng-UST event. See the Verbose function tracing section below for the complete list of emitted events and their fields.

Usage
    To use LTTng-UST function tracing, you need to make sure the sources of your application are compiled with the -finstrument-functions compiler option. It might be necessary to limit the number of source files where this option is used, to prevent an excessive amount of trace data from being generated at run time. Usually, there are additional compiler flags that allow you to specify a more fine-grained selection of function instrumentation.

    For each instrumented function, the executable will contain calls to profiling function hooks (after function entry, named __cyg_profile_func_enter(), and just before function exit, named __cyg_profile_func_exit()). By preloading (using the LD_PRELOAD environment variable) one of the provided shared libraries, these profiling hooks get defined to emit LTTng events (as described below).

    Note: using this feature can result in a massive amount of trace data being generated by the instrumented application. Application run time is also considerably affected. Be careful on systems with limited resources.

Fast function tracing
    The following LTTng-UST events are available when using liblttng-ust-cyg-profile-fast.so. Their log level is set to TRACE_DEBUG_FUNCTION.

    lttng_ust_cyg_profile_fast:func_entry
        Emitted when an application function is entered, or more specifically, when __cyg_profile_func_enter() is called.
        Fields:
        ┌───────────┬───────────────────┐
        │Field name │ Description       │
        ├───────────┼───────────────────┤
        │func_addr  │ Function address. │
        └───────────┴───────────────────┘

    lttng_ust_cyg_profile_fast:func_exit
        Emitted when an application function returns, or more specifically, when __cyg_profile_func_exit() is called. This event has no fields. Since the liblttng-ust-cyg-profile-fast.so library should only be used when it can be guaranteed that the complete event stream is recorded without any missing events, a per-thread, stack-based approach can be used on the trace analyzer side to match function entry and return events.

Verbose function tracing
    The following LTTng-UST events are available when using liblttng-ust-cyg-profile.so. Their log level is set to TRACE_DEBUG_FUNCTION.

    lttng_ust_cyg_profile:func_entry
        Emitted when an application function is entered, or more specifically, when __cyg_profile_func_enter() is called.
        Fields:
        ┌───────────┬─────────────────────────┐
        │Field name │ Description             │
        ├───────────┼─────────────────────────┤
        │func_addr  │ Function address.       │
        ├───────────┼─────────────────────────┤
        │call_site  │ Address from which this │
        │           │ function was called.    │
        └───────────┴─────────────────────────┘

    lttng_ust_cyg_profile:func_exit
        Emitted when an application function returns, or more specifically, when __cyg_profile_func_exit() is called.
        Fields:
        ┌───────────┬─────────────────────────┐
        │Field name │ Description             │
        ├───────────┼─────────────────────────┤
        │func_addr  │ Function address.       │
        ├───────────┼─────────────────────────┤
        │call_site  │ Address from which this │
        │           │ function was called.    │
        └───────────┴─────────────────────────┘
If you encounter any issue or usability problem, please report it on the LTTng bug tracker <https://bugs.lttng.org/projects/lttng-ust>.

• LTTng project website <http://lttng.org>
• LTTng documentation <http://lttng.org/docs>
• Git repositories <http://git.lttng.org>
• GitHub organization <http://github.com/lttng>
• Continuous integration <http://ci.lttng.org/>
• Mailing list <http://lists.lttng.org> for support and development: firstname.lastname@example.org
• IRC channel <irc://irc.oftc.net/lttng>: #lttng on irc.oftc.net

This library is part of the LTTng-UST project. This library is distributed under the GNU Lesser General Public License, version 2.1 <http://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html>. See the COPYING <https://github.com/lttng/lttng-ust/blob/v2.10.6/COPYING> file for more details.
Thanks to Ericsson for funding this work, providing real-life use cases, and testing. Special thanks to Michel Dagenais and the DORSAL laboratory <http://www.dorsal.polymtl.ca/> at École Polytechnique de Montréal for the LTTng journey.
LTTng-UST was originally written by Mathieu Desnoyers, with additional contributions from various other people. It is currently maintained by Mathieu Desnoyers <mailto:email@example.com>.
lttng-ust(3), lttng(1), gcc(1), ld.so(8)
This page is part of the LTTng-UST ( LTTng Userspace Tracer) project. Information about the project can be found at ⟨http://lttng.org/⟩. It is not known how to report bugs for this man page; if you know, please send a mail to firstname.lastname@example.org. This page was obtained from the tarball lttng-ust-2.11.0.tar.bz2 fetched from ⟨https://lttng.org/files/lttng-ust/⟩ on 2019-11-19. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to email@example.com LTTng 2.10.6 10/17/2019 LTTNG-UST-DL(3)
Pages that refer to this page: lttng-ust(3)
|
OPCFW_CODE
|
We're building scalable software and platforms to enable efficient analysis of very large genetic data. All of our tools are open access and free for use by the scientific community.
The widespread application of massively parallel sequencing for complex trait analysis offers unprecedented power to link genetics with disease risk. However, these projects pose substantial challenges of scale and complexity, making even trivial analytic tasks increasingly cumbersome. To address these challenges we are actively developing Hail, an open-source framework for scalable genetic data analysis.
The foundation of Hail is infrastructure for representing and computing on genetic data. This infrastructure builds on open-source distributed computing frameworks including Hadoop and Spark. Hail achieves near-perfect scalability for many tasks and scales seamlessly to whole genome datasets of thousands of individuals. On top of this infrastructure, we have implemented a suite of standard tools and analysis modules including: data import/export, quality control (QC), analysis of population structure, and methods for performing both common and rare variant association. Simultaneously, we and other groups are using Hail to manage the engineering details of distributed computation in order to develop and deploy new methods at scale.
In addition, Hail exposes a high-level domain-specific language (DSL) for manipulating genetic data and assembling pipelines. As an example, porting a rare-variant analysis from Python to Hail reduced the number of lines of code by ~10x and improved performance by ~100x. We aim to grow Hail into a scalable, reliable and expressive framework on which the genetics community develops, validates, and shares new analytic approaches on massive datasets to uncover the biology of disease.
Picopili: Pedigree Imputation Consortium Pipeline
Family-based study designs can contribute valuable insights in genome-wide association studies (GWAS), but require different statistical considerations in quality control (QC), imputation, and analysis. Standardizing this process allows more efficient and uniform processing of data from these cohorts, facilitating inclusion of these family-based cohorts in meta-analyses. Therefore we've developed picopili (Pedigree Imputation Consortium Pipeline), a standardized pipeline for processing GWAS data from family-based cohorts.
Paralleling the design of ricopili, this pipeline supports QC, PCA, pedigree validation, imputation, and case/control association appropriate for family designs ranging from sib-pairs to complex, multigenerational pedigrees. Multiple association models are supported, including logistic mixed models and generalized estimating equations (GEE). Tasks are automatically parallelized, with flexible support for common cluster computing environments.
Code is available at: https://github.com/Nealelab/picopili
LD Hub is a centralized database of summary-level GWAS results for 173 diseases/traits from different publicly available resources/consortia and a web interface that automates the LD score regression analysis pipeline. LD score regression is a reliable and efficient method of using genome-wide association study (GWAS) summary-level results data to estimate the SNP heritability of complex traits and diseases, partition this heritability into functional categories, and estimate the genetic correlation between different phenotypes.
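For context, LD score regression (the method LD Hub automates) rests on a standard result from the LD score regression literature (restated here rather than taken from this page): the expected association statistic for SNP j is

    E[chi^2_j] = 1 + N*a + (N*h^2 / M) * l_j,    where l_j = sum_k r^2_{jk}

Here N is the GWAS sample size, M the number of SNPs, h^2 the SNP heritability, a a term capturing confounding biases such as population stratification, and l_j the LD score of SNP j. Regressing the observed chi-squared statistics on LD scores therefore separates polygenic signal (the slope) from confounding (the intercept).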
LD Hub was developed collaboratively by Broad Institute of MIT and Harvard and MRC Integrative Epidemiology Unit, University of Bristol. The site is hosted by the Broad Institute. Major developers include Jie Zheng, Tom Gaunt, David Evans and Benjamin Neale.
|
OPCFW_CODE
|
import boxen from "boxen";
import chalk from "chalk";
import wrapAnsi from "wrap-ansi";
import jsonColorizer from "json-colorizer";
import { centerText } from "../ui/center-text";
import {
getUsableTerminalSize,
TerminalSize,
} from "../ui/get-usable-terminal-size";
import { fillLastLine } from "../ui/fill-last-line";
/**
* Prints getting started guidance for the customization of the generated
* coat files.
*
* On tiny terminal sizes, no help text is printed since it would overwhelm
* the terminal log.
*/
export function printCreateCustomizationHelp(): void {
const usableTerminalSize = getUsableTerminalSize(process.stdout);
// Don't print any customization help if the terminal window is tiny
if (usableTerminalSize.size === TerminalSize.Tiny) {
return;
}
// prettier-ignore
const customizationExplanation = [
chalk`Files that are generated by {cyan coat} will be continuously kept up to date via your {cyan coat} template.`,
chalk`This means that you can't edit these files directly, because they will be overwritten when running {cyan coat sync}.`,
"",
chalk`However, {cyan coat} allows you to customize the final file by creating a {green <filename>-custom.js} file next to the file you want to customize.`,
"",
chalk`As an example, a configuration file named {green config.json} can be customized by placing a {green config.json-custom.js} file next to it.`,
"The export will be merged into the file provided by coat:",
].join('\n')
const maxWidth = usableTerminalSize.width;
const borderConfig: boxen.Options = {
dimBorder: true,
padding: 1,
};
// Colorize JSON examples and put them into boxes
const exampleConfigs = [
// config.json from coat
jsonColorizer(
JSON.stringify({ coatConfig: "value from your coat template" }),
{ pretty: true }
),
// config.json-custom.js
`module.exports = ${jsonColorizer(
JSON.stringify({
customConfig: "value specified by you",
}),
{ pretty: true }
)}`,
// Resulting config.json
jsonColorizer(
JSON.stringify({
coatConfig: "value from your coat template",
customConfig: "value specified by you",
}),
{ pretty: true }
),
].map((exampleConfig) =>
// place example config in a box and fill
// the last line of the string to have
// equally sized boxes
boxen(fillLastLine(exampleConfig, maxWidth), borderConfig)
);
const text = [
centerText("💡 Before you get started 💡", maxWidth),
"",
wrapAnsi(customizationExplanation, maxWidth),
"",
chalk.dim("// config.json from coat"),
exampleConfigs[0],
"",
chalk.dim("// config.json-custom.js"),
exampleConfigs[1],
"",
chalk.dim("// config.json that will be placed"),
exampleConfigs[2],
].join("\n");
// TODO: See #52
// Add link to customization documentation once it exists
// Place customization help into a box
const customizationTextBox = boxen(text, {
...borderConfig,
float: "center",
});
console.log(customizationTextBox);
}
|
STACK_EDU
|
use schema_core::schema_api;
use sql_migration_tests::{multi_engine_test_api::*, test_api::SchemaContainer};
use test_macros::test_connector;
use url::Url;
#[test_connector(tags(Postgres))]
fn connecting_to_a_postgres_database_with_missing_schema_creates_it(api: TestApi) {
// Check that the "unexpected" schema does not exist.
{
let schema_exists_result = api
.query_raw(
"SELECT EXISTS(SELECT 1 FROM pg_namespace WHERE nspname = 'unexpected')",
&[],
)
.unwrap();
let schema_exists = schema_exists_result
.into_single()
.unwrap()
.at(0)
.unwrap()
.as_bool()
.unwrap();
assert!(!schema_exists)
}
// Connect to the database with the wrong schema
{
let mut url: Url = api.connection_string().parse().unwrap();
let mut new_qs = String::with_capacity(url.query().map(|q| q.len()).unwrap_or(16));
for (k, v) in url.query_pairs() {
if k == "schema" {
new_qs.push_str("schema=unexpected&");
} else {
new_qs.push_str(&k);
new_qs.push('=');
new_qs.push_str(&v);
new_qs.push('&');
}
}
url.set_query(Some(new_qs.trim_end_matches('&')));
let provider = api.provider();
let schema = format!(
r#"
datasource db {{
provider = "{provider}"
url = "{url}"
}}
"#
);
let me = schema_api(Some(schema.clone()), None).unwrap();
tok(
me.ensure_connection_validity(schema_core::json_rpc::types::EnsureConnectionValidityParams {
datasource: schema_core::json_rpc::types::DatasourceParam::SchemaString(SchemaContainer { schema }),
}),
)
.unwrap();
}
// Check that the "unexpected" schema now exists.
{
let schema_exists_result = api
.query_raw(
"SELECT EXISTS(SELECT 1 FROM pg_namespace WHERE nspname = 'unexpected')",
&[],
)
.unwrap();
let schema_exists = schema_exists_result
.into_single()
.unwrap()
.at(0)
.unwrap()
.as_bool()
.unwrap();
assert!(schema_exists)
}
}
#[test_connector(exclude(Sqlite))]
fn ipv6_addresses_are_supported_in_connection_strings(api: TestApi) {
let url = api.connection_string().replace("localhost", "[::1]");
assert!(url.contains("[::1]"));
let provider = api.provider();
let schema = format!(
r#"
datasource db {{
provider = "{provider}"
url = "{url}"
}}
"#
);
let engine = schema_api(Some(schema.clone()), None).unwrap();
tok(
engine.ensure_connection_validity(schema_core::json_rpc::types::EnsureConnectionValidityParams {
datasource: schema_core::json_rpc::types::DatasourceParam::SchemaString(SchemaContainer { schema }),
}),
)
.unwrap();
}
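As an aside, the query-string rewrite in the first test above — swapping the `schema` parameter for `schema=unexpected` while preserving the other pairs — is a generic technique. A minimal sketch of the same idea in Python (the helper name and example URL are illustrative, not part of the test suite):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def set_schema(conn_str: str, schema: str) -> str:
    """Replace (or add) the `schema` query parameter of a connection URL."""
    parts = urlsplit(conn_str)
    # Keep every pair except `schema`, then append the new value.
    pairs = [(k, v) for k, v in parse_qsl(parts.query) if k != "schema"]
    pairs.append(("schema", schema))
    return urlunsplit(parts._replace(query=urlencode(pairs)))

print(set_schema("postgres://u:p@localhost:5432/db?schema=public&sslmode=disable",
                 "unexpected"))
# -> postgres://u:p@localhost:5432/db?sslmode=disable&schema=unexpected
```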
|
STACK_EDU
|
IE11 SCRIPT5: Access Is Denied

This error appears when script running in Internet Explorer touches a resource the browser considers cross-origin. The reports collected here cluster around three causes:

1. CSS polyfills failing on cross-domain stylesheets. Possibly similar to these respond.js issues: scottjehl/Respond#5 and scottjehl/Respond#6. The CSS files that the polyfill was trying to load were located on a separate domain; one reporter suspected his Nginx rewrite from "www" to "non-www" was the trigger, since the issue was not particular to the polyfill itself (it worked on all other properties tested) but rather to where the files were served from. A reduced test case is at http://plnkr.co/aMjuPtA8XGHEzBwXTIqj/. That example loads its CSS from a different subdomain (rem.lsvx.com) and still works in IE8, yet on some properties using rem.js, IE8 started throwing Access Denied errors where previously it wasn't. A referenced commit was tested and did indeed work just fine, and the same fix had to be applied to a freshly updated jQuery-UI script as well. (Changing security settings in IE was only to help debug, not a proposed solution.)

2. document.domain and iframes. A dynamically written iframe document (along the lines of "<p>Hello</p><script>do things;</script>") is unable to set its own document.domain from script if the document.domain property has been set in the parent page — which it may be in certain environments the widget is embedded in — so script access to the frame is denied. Note that jQuery has no magical access to an iframe's document; a test page for this is at troy.onespot.com/static/access_denied_jquery.html.

3. Cross-zone AJAX to localhost. Requests from an internet-zone page to localhost fail instantly in IE10/11 with "Access is denied" (SCRIPT5), and in Edge as "XMLHttpRequest: Network Error 0x80070005, Access is denied," even when the CORS headers are correct. You can confirm the zone policy is the culprit by adding "http://client.cors-api.appspot.com" to your "Trusted sites" zone and using the test page at test-cors.org with your localhost site as the Remote URL. This only applies to IE; Chrome just politely logs a warning in the debug console and doesn't fail. Related reports: CORS not working unless from localhost; CORS with IE11+ Access Denied with SSL to localhost; IE11 XMLHttpRequest Access is Denied; Angular.js xhr.open() throws 'Access is Denied'. To serve files from a local web server instead of the filesystem, you can use MAMP, which is pretty easy, or something like Python's simple HTTP server.

An updated plnkr example loads a fallback (http://plnkr.co/aMjuPtA8XGHEzBwXTIqj), and here is a version where the CDN is a bad URL so it won't ever load: http://plnkr.co/g77iBUFFkBrUqQONY8ZF — in IE11 it fails to load.
|
OPCFW_CODE
|
Cannot Open Virtual Storage Driver

This page collects virtual-storage problems and fixes from several hypervisors.

Hyper-V: Hyper-V does not emulate a SCSI adapter. It implements the traditional emulated IDE controller plus a new, completely virtual, VMBUS-based storage controller with no traces of emulation — and it is this virtual storage controller that you are adding to a virtual machine when you choose to add a SCSI controller. To handle large disks, the IDE controller in Hyper-V uses 48-bit LBA. Two limitations remain for IDE disks: disk commands to IDE disks on the same controller are serialized by the guest operating system, and only three slots are attached to the traditional IDE controller, which are always present (plus one for the virtual CD-ROM device). Hyper-V does not support the use of storage media if Encrypting File System has been used to encrypt the .vhd file. To attach a physical hard disk, the virtual machine must have exclusive access to the storage, so the disk must be set to an Offline state in Disk Management; if it is not Offline, it will not be available when configuring storage for a virtual machine. If you have not yet created the virtual machine where you want to attach the physical hard disk, create it using the New Virtual Machine Wizard in Hyper-V Manager (if you are creating dynamically expanding disks, that wizard provides a way to create storage without running the New Virtual Hard Disk Wizard — useful if you want to install a guest operating system soon after creating the machine), open Hyper-V Manager, and under Media specify the physical hard disk. One commenter notes that the parent partition diagram is not 100% correct for Windows Server 2008 R2, but the child partition diagram is accurate for both Windows Server 2008 and 2008 R2.

VirtualBox: One user finally got things to run with 126.96.36.1992 (Intel(R) Matrix Storage Manager Driver) and VirtualBox 2.0.6. While driver 7.8.0 has been recommended here and elsewhere, Intel has completely pulled it from their site some weeks ago; an older version was still available at http://drivers.softpedia.com/get/Other- ... 12-C.shtml. One annoying thing left: Windows always shows the "safely remove hardware" icon for the virtual disk. (The earlier forum topic at http://forums.virtualbox.org/viewtopic.php?t=9575 has been locked instead of allowing follow-ups and is not as useful as it could be.)

VMware: A VCB (Consolidated Backup) server that had been working started failing last week with "VCB: cannot open virtual storage driver?" (thread at https://communities.vmware.com/thread/154679). Use three forward slashes to connect to the local host.

libvirt/KVM: Errors in files created by libvirt are rare; most XML problems come from hand edits. To access a guest's XML for editing, use `virsh edit name_of_guest.xml`, which opens the current definition in a text editor and validates it on save — if the XML is correct, "Domain name_of_guest.xml XML configuration edited." is displayed; otherwise the error message highlights the XML error (for example, an extra white space within the word "type") with a pointer and the line number in the XML file. File names not contained in parentheses are local files that reside on the target of the connection, and a file name is valid only on the host machine defined by the URI. Mismatched XML tags produce errors of the kind shown above. Other symptoms covered:
- PXE boot (or DHCP) on the guest fails: the guest starts successfully but is unable to acquire an IP address from DHCP. This is often a result of a long forward delay time set for the bridge, or of an iptables package and kernel that do not support checksum-mangling rules, in which case UDP packets sent from the host to the guest have uncomputed checksums. Add or edit STP=on and DELAY=0 in /etc/sysconfig/network-scripts/ifcfg-name_of_bridge, then restart the bridge device. If the 192.168.254.0/24 network is already in use elsewhere on your network, you can choose a different network. Note this solution applies only if the bridge is not used to connect multiple networks, but just to connect multiple endpoints to a single network (the most common use case); for bridged networking with libvirt, see the corresponding documentation section.
- "Unable to add bridge br0 port vnet0: No such device", and a guest unable to reach the host over macvtap — the latter is actually not an error; it is the defined behavior of macvtap.
- Guest unable to start with "warning: could not open /dev/net/tun" after configuring networking, or "internal error cannot find character device (null)".
- Migration fails with "error: unable to resolve address": this can happen if DNS is not properly configured or /etc/hosts has the host name associated with the local loopback address (127.0.0.1); check that the address is correct. Although disk images are not transferred during migration, they need to remain accessible at the same path by both hosts, so set up and mount shared storage at the same location. In this failure mode the guest log shows an attempt to use -incoming as one of QEMU's arguments, meaning libvirt is trying to start QEMU by migrating in the saved state.

Side discussion on emulation: say a bar exists with EMULATION at one extreme and SIMULATION at the other. Everything taken out of emulation is work that your virtual stack must do itself in place of that piece of hardware. One suggestion was to use some HLE (high-level emulation) at first and replace the code with better emulation eventually — had Hyper-V emulated a real Adaptec adapter (as Virtual Server did), half of the path would already be walked.
|
OPCFW_CODE
|
If you’re working with programming languages, markup languages or any other type of text files in Notepad++, you may encounter a common issue where your single quotes disappear or get replaced with double quotes. This can be frustrating and time-consuming to fix manually. However, you can keep your single quotes intact with a few simple steps in Notepad++. In this guide, we’ll show you how to prevent Notepad++ from automatically replacing or removing single quotes, as well as some related keywords and tips to help make your coding experience smoother.
Single quotes play a significant role in programming, especially when dealing with strings. Notepad++ is a popular text editor among programmers due to its advanced features and ease of use. However, one common issue faced by programmers using Notepad++ is the automatic conversion of single quotes into double quotes. This can cause errors in code syntax and logic, leading to a frustrating experience for users.
Fortunately, there are several ways to keep your single quotes intact while using Notepad++. Here is a guide to help you overcome this problem:
1. Disable Auto-Completion: Notepad++ can automatically complete or pair quote characters as you type. You can turn this off by going to Settings > Preferences > Auto-Completion and unchecking the quote options in the Auto-Insert section; unchecking “Enable auto-completion on typing” additionally disables the suggestion popup. This will prevent Notepad++ from automatically changing your single quotes.
2. Use Backslashes: Another way to keep your single quotes intact is to use backslashes before them. Backslashes act as escape characters and prevent Notepad++ from converting single quotes into double quotes. For example, instead of writing ‘Hello, World!’, you can write \’Hello, World!\’. The backslash before the single quote tells Notepad++ to interpret it as a literal character.
3. Use Double Quotes: If you prefer to use single quotes, another option is to use double quotes instead. Notepad++ does not automatically convert double quotes to single quotes, so you can use them without any issues. However, this may not be a practical solution in cases where you’re working with pre-existing code or following a specific coding style.
4. Use Plugins: Notepad++ has a vast library of plugins that can be installed to enhance its functionality. Some of these plugins are designed specifically to address the single quote problem. For instance, the “BetterQuotes” plugin allows you to easily change between single quotes and double quotes by pressing a hotkey. Similarly, the “NppCodeFormatter” plugin provides more control over how quotes are formatted in your code.
In conclusion, keeping your single quotes intact in Notepad++ is crucial to ensure accurate and error-free code. By following the steps outlined in this guide, you can overcome the single quote problem and enjoy a better programming experience with Notepad++. Whether you choose to disable auto-completion, use backslashes, or install plugins, there are different methods to suit your preferences and coding style.
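The escaping rule described in step 2 is the same one most programming languages use inside their own string literals. A quick Python illustration of the principle (this shows the language-level behavior, independent of any editor):

```python
# A single quote inside a single-quoted string needs a backslash escape...
escaped = 'Hello, it\'s me'
# ...while switching to the other quote character avoids the escape entirely.
unescaped = "Hello, it's me"

assert escaped == unescaped
print(escaped)
# -> Hello, it's me
```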
|
OPCFW_CODE
|
By the time HP fixed that issue Saturday afternoon, the company’s e-commerce site had served up a series of errors that looked like this: Microsoft VBScript compilation error ‘800a03e9’ Out of memory.
One of the biggest benefits of the Windows Script File format is that it allows you to combine code from both the VBScript and JScript scripting languages in a single. that specifies the attributes for error handling: <?job error="true".
VBS/VBA script to trap, display, and log error messages – Hello all, I’ve written a VBA/VBS script which opens up MIGO Goods Issue, enters a production order, checks that the quantity and batch are correct, sets ITEM OK, and clicks on.
Error Handling Error handling does not involve finding errors in your scripts. Instead, use error handling techniques to allow your program to continue executing even.
There is little difference between the methods used in Visual Basic and those used with VBScript. The primary difference is that VBScript does not support the concept.
‘ LastLogon.vbs ‘ VBScript program to determine when each user in the domain last logged. January 25, 2004 – Modify error trapping. ‘ Version 1.8 – July 6, 2007.
Can someone please tell me the command to stop or start a service in VBscript Many thanks
Using the VBScript exit code (.Status): the point of trapping the exit code (.Status = 1) is so that your script will wait for one event to complete before continuing.
VBScript – Wikipedia – VBScript is an Active Scripting language developed by Microsoft that is modeled on Visual Basic. It allows Microsoft Windows system administrators to generate powerful tools for managing computers with error handling, subroutines, and other advanced programming constructs. It can give the user complete control over.
I am totally not a VBScript developer. But as it usually happens I have to write a small script to check something. It opens Excel, writes something to it and closes it.
Implementing Error Handling and Debugging Techniques for Microsoft Access, VBA, and Visual Basic application development.
Jun 19, 2009 · Forums, code samples, and other resources for programmers developing with Microsoft Outlook
To Err Is VBScript – Part 1 Handling Errors with VBScript Handling Errors in a Subroutine Passing Custom Error Messages to Subroutines Other Ways of Testing for.
I want to use VBScript to catch errors and log them (ie on error "log something") then resume the next line of the script. For example, On Error Resume Next 'Do Step.
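A minimal sketch of that log-and-continue pattern (the file path and message format here are purely illustrative):

```vbscript
On Error Resume Next

' Attempt something that may fail.
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.OpenTextFile("C:\no\such\file.txt", 1)

' On Error Resume Next suppressed the failure; inspect Err explicitly.
If Err.Number <> 0 Then
    WScript.Echo "Logged error " & Err.Number & ": " & Err.Description
    Err.Clear   ' Reset so later checks only see new errors.
End If

' The script continues with the next statement instead of aborting.
```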
Aug 26, 2016. Scripting: Error Handling and Debugging. SAP SCREEN PERSONAS KNOWLEDGE BASE – by Kranthi Kumar Muppala. Purpose. This article describes a few options available to handle errors that occur in scripts and a few tips on how to debug scripts. Overview. Errors can occur in scripts when an.
|
OPCFW_CODE
|
How to resolve path in a WIX custom action
This custom action fails:
<CustomAction Id="SetIntegrityLevel" Return="check"
Directory="TARGETDIR" Impersonate="no" Execute="deferred"
ExeCommand="cmd /c icacls UiProxy.exe /SetIntegrityLevel High &amp; pause" />
The business with cmd /c blah blah & pause is my little trick for getting some visibility on what happened in a failed command line operation.
UiProxy.exe: The system cannot find the file specified.
Successfully processed 0 files; Failed processing 1 files
Press any key to continue . . .
This is a bit of a surprise. UiProxy.exe definitely is in the folder specified by TARGETDIR.
Is there some way to resolve the INSTALLDIR symbol into the command string? I need to produce something like this.
icacls "C:\Program Files (x86)\UiProxy\UiProxy.exe" /SetIntegrityLevel High
Obviously I'll need to put the quotes in explicitly, something like this:
<CustomAction Id="SetIntegrityLevel" Return="check"
Directory="TARGETDIR" Impersonate="no" Execute="deferred"
ExeCommand="icacls &quot;[INSTALLDIR]UiProxy.exe&quot; /SetIntegrityLevel High" />
Where I've put [INSTALLDIR] I need the symbol resolved. The question is how to express this.
Later, in code I didn't post because it's standard boilerplate, I noticed this:
<Fragment>
<Directory Id="TARGETDIR" Name="SourceDir">
<Directory Id="ProgramFilesFolder">
<Directory Id="INSTALLFOLDER" Name="UiProxy" />
</Directory>
</Directory>
</Fragment>
Specifying the command line as
<CustomAction Id="SetIntegrityLevel" Return="check"
Directory="TARGETDIR" Impersonate="no" Execute="deferred"
ExeCommand="cmd /c icacls &quot;[INSTALLFOLDER]UiProxy.exe&quot; /SetIntegrityLevel High &amp; pause" />
produces a success message, so it appears the square brackets syntax was correct but TARGETDIR is the wrong symbol, it should be INSTALLFOLDER.
Generally, if you want to use the location of an installed file (I assume UiProxy.exe is installed by your installer), you can just use [#UIProxy.exe's File ID] to specify the full path to the file. This is commonly used in Shortcuts.
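A sketch of what that looks like — assuming, hypothetically, that the File element installing UiProxy.exe has Id="UiProxyExe" (substitute your actual File Id):

```xml
<!-- [#UiProxyExe] resolves to the full installed path of that file, so the
     command no longer depends on picking the right Directory symbol. -->
<CustomAction Id="SetIntegrityLevel" Return="check"
              Directory="TARGETDIR" Impersonate="no" Execute="deferred"
              ExeCommand="icacls &quot;[#UiProxyExe]&quot; /SetIntegrityLevel High" />
```

This is more maintainable because the path tracks wherever the file component actually lands, even if the directory layout changes later.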
I tried this and you are right. It's more maintainable so propose it as an answer explaining why it is a better way and I will accept it.
@PeterWone Can you suggest (when) the sequence to execute this CustomAction. I put it after InstallFiles (4000) still it says the program required is not installed.
Not really. I haven't touched WiX since this post.
|
STACK_EXCHANGE
|
I need a horizontal scroll feature that scrolls slowly on its own or can be scrolled manually, at the user's choice.
The horizontal scroll will let the user scroll into new background images.
On top of these images will be 3 columns of text. An admin can add text to the first background image; when the first background image fills up with text, the overflow gets pushed to the next background image automatically.
Can you do this? What will you build it in? I need to update content on a daily basis — what do you suggest building it in? I have been told by 3 developers that this cannot be built in WordPress, so it will need to be built in another framework.
Show me a sample site that you built, with your bid. I need this done within 24 hours. All you do is create the 3 columns on each background, make the site scroll horizontally, and make content move down the line to the next background image when new content is created.
Please see the attached file. I need this done within 24 hours. Tell me what you will build it in — PHP? CodeIgniter? Bootstrap?
I need to be able to update daily, so I need a way to update text often and update the background images.
You must be able to start working right now and show me updates every 30 minutes.
10 freelancers are bidding an average of $79 for this job
Hello there! I am a web developer with extensive knowledge of building web apps using ASP.NET. I am readily available to design the 3-column image-text containers for you. Please feel free to let me know more detail…
Hi, I have read and understood your project requirements. I am a PHP and CodeIgniter expert and have done the same task in the past. I can do this within 24 hours.
Hello, I have read through your job description and understand that you really need a GRAPHIC-quality design. I can come up with a design that suits your preferences in no time. I am expert in what I do…
Hi, I have got your requirement, but still need to ask a few things. Will the backend activities be handled by you, like the admin side? The frontend — i.e. scrolling backgrounds and text — would be handled. What about the backend? H…
Note: I have read your requirements that you need some changes on your site. I am a full-stack developer with 7 years of experience. I have hands-on experience in: * WordPress * PHP * Shopify * Magento…
Dear Hiring Manager, I read your job description that you need a scroller on your website, and I am confident that I can exceed your expectations. I can start immediately, and I can do it in PHP. Here is fe…
I have been working in the web development and web design industry for the last 3+ years. I have expert-level front-end development skills, and also back-end development. I would like to work with clients on a regular basis in a plea…
Hi, thank you for giving me a chance to bid on your project. I am a serious bidder here, I have already worked on a similar project before, and I can deliver as you have mentioned. I have got rich experience in Jooml…
|
OPCFW_CODE
|
[12:55] <rsalveti> morning
[12:56] <suihkulokki> timezones greetings
[13:12] <vin> I am trying to boot ubuntu 10.10 from the igepv2 but it hangs on booting the kernel
[13:18] <ogra> make[4]: Entering directory `/root/unity-2d/obj-arm-linux-gnueabi'
[13:18] <ogra> cd /root/unity-2d/obj-arm-linux-gnueabi/po && /usr/bin/msgmerge --quiet --update --backup=none -s /root/unity-2d/po/fr.po /root/unity-2d/po/unity-2d.pot
[13:18] <ogra> Illegal instruction (core dumped)
[13:18] <ogra> GRRR !
[13:20] <rsalveti> ops
[13:20] <rsalveti> vin: which ubuntu version are you using?
[13:21] <vin> 10.10
[13:22] <rsalveti> vin: maybe you should try to install a newer kernel for igepv2
[13:22] <rsalveti> could be from linaro or our new for maverick, probably should support it better
[13:22] <rsalveti> after you changed the x-loader and u-boot for igepv2
[15:48] * rsalveti lunch
[15:48] <alf_> rsalveti: I can't get the GL drivers to work on sdp/omap4 :/ . eglInitialize() fails with EGL_BAD_ALLOC. Any ideas what is wrong or how to get more info?
[15:48] <alf_> rsalveti: answer after lunch ;)
[15:50] <alf_> rsalveti: that is on natty-alpha-3 with drivers from omap-trunk
[16:58] <rsalveti> alf_: are you using the latest one available?
[16:59] <rsalveti> alf_: I updated them today, but just packaging changes
[16:59] <rsalveti> alf_: is this happening with any application?
[16:59] <rsalveti> and, did you also updated your kernel?
[16:59] <rsalveti> not working yet with 38
[16:59] <rsalveti> working on it
[18:46] <alf_> rsalveti: libegl1-sgx-omap4 1.7~git0f0b25f.2natty3-1
[18:47] <alf_> rsalveti: uname -r 2.6.35-1101-omap4
[18:47] <rsalveti> alf_: hm, this is the one I'm currently using
[18:47] <alf_> rsalveti: and it happens with every app, even the trivial eglinit.c example
[18:47] <rsalveti> alf_: what other packages do you have installed?
[18:47] <rsalveti> I tested with es2gears
[18:48] <alf_> rsalveti: can you please remind me the package for es2gears? :)
[18:48] <rsalveti> alf_: mesa-utils-extra
[18:49] <rsalveti> alf_: can you paste me your package list related with sgx and pvr?
[18:50] <rsalveti> that you have installed at your system
[18:56] <alf_> rsalveti: wait
[18:57] <alf_> rsalveti: All of this has been happening when trying to running through ssh (with DISPLAY etc setup correctly)
[18:58] <alf_> rsalveti: when I run something in a terminal in unity 2d it all runs fine
[18:59] <alf_> rsalveti: wait, wrong again...
[18:59] <rsalveti> alf_: weird, I'm just testing with ssh here
[18:59] <rsalveti> and my es2gears is running fine
[19:00] <alf_> rsalveti: ok, this is what happens: when trying to run things through ssh while stile in gdm or using a plain xinit things fail
[19:00] <alf_> rsalveti: all work fine when logged in unity2d
[19:00] <rsalveti> alf_: does it also work when using the recovery mode?
[19:01] <alf_> rsalveti: hmm, how do I log out unity2d? :)
[19:01] <rsalveti> alf_: no log-out indicator at your version :-)
[19:01] <rsalveti> fixed at latest upload
[19:01] <rsalveti> service gdm restart
[19:01] <rsalveti> :-)
[19:03] <alf_> rsalveti: I think will reboot to validate my speculations :)
[19:03] <rsalveti> alf_: ok :-)
[19:06] <alf_> rsalveti: so rebooted at gdm
[19:06] <rsalveti> let me also try with gdm
[19:07] <alf_> rsalveti: ssh, export DISPLAY, es2gears fails with es2gears
[19:07] <alf_> No protocol specified
[19:07] <alf_> EGLUT: failed to initialize native display
[19:08] <alf_> rsalveti: should I try recovery mode for ubuntu desktop edition or classic desktop (or doesn't matter)
[19:08] <alf_> rsalveti: or recover console?
[19:11] <rsalveti> alf_: sorry, recovery console
[19:11] <rsalveti> it's just the xterm
[19:12] <rsalveti> if it's all black, open metacity first
[19:13] <rsalveti> error while allocating memory could be related with the kernel module
[19:13] <rsalveti> for sgx
[19:14] <alf_> rsalveti: es2gears and glmark2-es2 both work in recovery console through ssh, even before starting metacity to fix the xterm
[19:20] <rsalveti> alf_: hm, will try with X only
[19:21] <rsalveti> alf_: interesting, fails while just loading X
[19:21] <rsalveti> by hand
[19:22] <rsalveti> even with metacity
[19:22] <rsalveti> both es2gears and es2_info fails while trying to initialize gl
[19:29] <rsalveti> alf_: works with root
[19:29] <rsalveti> alf_: probably just permission issues
[19:31] <janimo> rsalveti, how's the webkit crasher? Is it fixed now?
[19:32] <rsalveti> janimo: yup, I'm just writing our latest image to test with it again, before posting at the bug to remove the workaround
[19:32] <rsalveti> janimo: but I tested already, just want to make sure it works with the version that was uploaded
[19:33] <rsalveti> easy to test
[19:34] <janimo> rsalveti, ok . I was reminded now as Kate reassigned the bug to foundations
[19:35] <rsalveti> janimo: oh, ok, should report the result soon
[19:36] <rsalveti> took almost 33 hours to build
[19:36] <rsalveti> that's why I even forgot to test it
[19:57] <alf_> rsalveti: Right, it works as root. Note, however, that there is still a problem as a normal user after having done "xhost +"
[20:42] <pmathews> Anyone looking for a cheap 720x480 color LCD?
[20:42] <pmathews> Check out Bed Bath & Beyond: they are clearancing Sharper Image's Literati e-book reader for $40
[20:43] <pmathews> Haven't started hacking mine up yet, but it has ARM processor running linux inside
[21:34] <mrc3_> heyla! i have a problem while trying to build a package. is this the right place to ask?
[21:35] <GrueMaster> Only if it is arm related. If it is a general use package, you might get a better response on #ubuntu-devel.
[21:36] <mrc3_> GrueMaster, thanks! it's intended for arm, but i guess i better go ask there
[21:37] <GrueMaster> mrc3_: Just understand we can help with the arm bits, but there are far more devs on the other channel. :)
[21:49] <rsalveti> GrueMaster: NCommander: now for bug 727468 we just need to remove the livecd-rootfs workaround
[21:49] <ubot2> Launchpad bug 727468 in webkit "ubiquity-slideshow tears down oem-config on armel" [High,Fix released] https://launchpad.net/bugs/727468
[21:49] <GrueMaster> Cool. I'll let him know when he comes back down to earth.
[21:50] <GrueMaster> (flight lessons).
[21:50] <StevenK> And the skies will never be safe again.
[21:51] <ogra_> LOOOL
[21:51] <lool> hmm?
[21:51] <ogra_> you read my mind, eh ?
[21:51] <ogra_> lool, i explicitly used three O's
[21:51] <ogra_> :)
[21:51] * GrueMaster thinks lool needs to requisition a name change.
|
UBUNTU_IRC
|
I’m facing a strange problem and I really don’t know what to check next to find the solution.
My game works fine on my iMac (server + client), which is six months old (NVIDIA GeForce GT 120), but gets blocked on my Windows laptop (client only) that I call the dinosaur, with a shared graphics card.
To explain the game in a few words: a player needs to make a decision (using GUI elements) and then wait for the other player’s decision before choosing a new one. So there isn’t a lot of information that the server needs to receive or send per frame, or other info that might be lost in between.
On the Mac, I’m never under 60 fps, and between 20 and 30 on Dino… When I say “game blocked”, I mean that after a few rounds (it works perfectly at the beginning) my poor old laptop stops receiving info from the server. I do see the messages going out of the server, but nothing arrives on Dino!
It’s always the laptop that gets this bug… I’ve tried about a hundred times, and it never happens on the iMac. I’m on a local network, so it can’t be a connection problem. I’m using a lot of DirectGui objects and wondering whether that might cause my troubles.
Are there any known traps that eat your frames when you hide, show, resize and modify a lot of DirectGui objects?
Or do you think the laptop just can’t keep up anymore and I need to get a big brother for Dino?
Here’s the analysis of the scene; as you can see, there is not that much:
Maybe it’s because of the redundant GeomVertexArrayDatas and GeomPrimitives. I can only guess what that means and really don’t know how to avoid it. 30k doesn’t look like a big deal, though!
No, that has nothing to do with what you describe.
You describe a graphics freeze. If it doesn’t come back, it’s probably due to a driver bug, but it might also just be a lot of CPU utilization or something. It’s true that creating a lot of DirectGui elements can be CPU-intensive, and if your laptop can’t keep up, you could see a few seconds without animation. It should recover eventually, though.
Try running with “load-display tinydisplay” to prove whether it’s a graphics driver bug. If the problem goes away when you run with tinydisplay, it’s a driver bug. If it still exists, it’s something to do with your application or with Panda itself.
I’m not really familiar with how to do that. Do you mean removing “load-display pandadx9” and “load-display pandadx8”, keeping only “load-display tinydisplay”?
If so: I did that and got the exact same failure as before.
The fact that the problem occurs randomly (the laptop fails at the tenth hand, or the fourth, or whatever) made me think about something:
A QueuedConnectionReader listens for events each frame. What if the server sends the message "Come on, stupid old laptop, it’s your turn to play" at exactly a frame that the laptop misses?
Well, I read somewhere in the manual that there is a way to recover the last frame, which means the laptop may skip a few of them. So, if the frame it missed was the one in which the server sent something, the QueuedConnectionReader would then miss it!
Is that possible?
Yes, that is what I meant with tinydisplay. Since you switched it and it still freezes, it means it’s not a problem with your graphics drivers.
So, more likely something in your application is simply locking up. If any task gets stuck in an infinite loop, or doesn’t return for any reason, the whole application will lock up and stop rendering.
Network messages are not lost just because you’re not listening at the moment they’re sent. They will be received the next frame instead. Still, you might be on to something; it might be freezing because something in the networking is getting confused; maybe it’s waiting forever for a message to come in or something. (If you send bogus data to a client that’s expecting formatted datagrams, it can get caught in an infinite loop trying to read the next “datagram”, which is just garbage. That’s just one of many, many possibilities, though.)
You’ll have to debug this yourself. One approach is to put in lots of print statements all over and try to figure out the last thing it was doing before it got locked up.
Here’s all the debugging I did before I found something new; skip the quote if not interested:
I’ve got something new
In order to know whether my laptop still had a live connection to the server, I added a permanent DirectButton which sends a basic test string to the server.
When the freeze appears --> push the button --> the server prints the test string, and on top of that the freeze disappears: the laptop’s turn is restored and it can play again. Conclusion: when the laptop freezes, the network connection is still up AND sending a network message from the laptop to the server removes the freeze.
You were right!
I already checked that all messages sent from the server equal the exact same messages received by the laptop (datagrams are supposed to be empty after being read).
Also, I’m using basic addUint8() and addString() calls, no huge data or even classes, just strings and small ints via PyDatagram. When they are received, I just use getUint8() and getString() into temp variables that I process afterwards. No more, no less!
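Since every read must mirror the corresponding write in both order and type, a type or order mismatch is one classic way a reader ends up misparsing or blocking. Here is a plain-Python sketch of that pack/unpack discipline using only the stdlib `struct` module — this is not the actual Panda3D PyDatagram API, and the exact byte layout (one-byte int, 16-bit length-prefixed string) is only illustrative:

```python
import struct

def add_uint8(buf: bytearray, value: int) -> None:
    # One unsigned byte, like PyDatagram.addUint8 conceptually.
    buf += struct.pack("<B", value)

def add_string(buf: bytearray, text: str) -> None:
    # Length-prefixed bytes: a 16-bit length, then the payload.
    data = text.encode("utf-8")
    buf += struct.pack("<H", len(data)) + data

class Iterator:
    """Reads values back in the same order and types they were written."""
    def __init__(self, buf: bytes):
        self.buf, self.pos = buf, 0

    def get_uint8(self) -> int:
        (v,) = struct.unpack_from("<B", self.buf, self.pos)
        self.pos += 1
        return v

    def get_string(self) -> str:
        (n,) = struct.unpack_from("<H", self.buf, self.pos)
        self.pos += 2
        s = self.buf[self.pos:self.pos + n].decode("utf-8")
        self.pos += n
        return s

buf = bytearray()
add_uint8(buf, 7)            # e.g. a message-type code
add_string(buf, "your turn") # e.g. the payload
it = Iterator(bytes(buf))
assert it.get_uint8() == 7
assert it.get_string() == "your turn"
```

If the reader called `get_string()` first here, the length prefix would be read from the wrong bytes and the parse would go off the rails — which is why keeping writes and reads symmetrical matters.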
Here’s an example of the longest def for sending from the host to the server:
Is there some special manipulation you need to perform on a PyDatagramIterator after you’ve finished reading the info it contains (like a text file you need to close after reading)?
What other kinds of tests can I perform on the PyDatagram and PyDatagramIterator to determine the freeze’s cause?
I finally found the answer to that problem
While I was downloading a tool on that old laptop, I noticed that the download status bar was frozen in Firefox. I had to manually pause and resume it to continue the download.
So, all the checks and searches I did were completely useless. It was just an old, very well hidden virus on the laptop that was freezing data exchanges. Any video on YouTube would freeze after a minute or so… I thought about many possibilities, but not that one!
Anyway, long live Dino with his brand-new re-installation.
|
OPCFW_CODE
|
Spoilers! I’m talking about one of the big twists in Vernor Vinge’s True Names. If you want to read it first - and you should, it’s fantastic and only just under 50 pages - a full text is available online.
The Mailman is the main antagonist in True Names, a computer thriller set in a future where people explore computer networks and the internet through a fantastic sort of shared VR. The details are extensive so I won’t cover them all, but the main gist is that the Mailman is a super-powerful unknown entity in the world. Ery and Mr. Slippery, the two protagonists, are expert-level hackers who end up working with the big, bad government to catch their shared enemy.
Despite the real-time, super-detailed VR world, the Mailman only ever communicates through a very old-fashioned typing machine, with very large delays in its responses. Combined with his obvious computing power and resource capacity, this leads Ery and Mr. Slippery to believe the Mailman has a large built-in lag time, possibly off-planet, possibly somewhere else in the solar system. They start thinking alien invasion.
The truth ends up even more interesting. The Mailman turns out to be an incredibly good network protection routine with a built-in personality simulator. Over years, a leftover copy of the Mailman program gobbled up more and more computing resources and grew to the point that it had enough computing power to simulate consciousness and self-awareness. It took hours or days of compute time to simulate even a short span of self-awareness, which is what caused all the communication delays.
“Wait. Are you trying to tell me that the Mailman was just another simulator? That the time lag was just to obscure the fact that he was a simulator? That’s ridiculous. You know his powers were more than human, almost as great as ours became.”
“But do you think you could ever be fooled?”
“Frankly, no. If you talk to one of those things long enough, they display a repetitiveness, an inflexibility that’s a giveaway. I don’t know; maybe someday there’ll be programs that can pass the Turing test. But whatever it is that makes a person a person is terribly complicated. Simulation is the wrong way to get at it, because being a person is more than symptoms. A program that was a person would use enormous data bases, and if the processors running it were the sort we have now, you certainly couldn’t expect real-time interaction with the outside world.” And Pollack suddenly had a glimmer of what she was thinking.
“That’s the critical point, Slip: ‘if you want real-time interaction’. But the Mailman - the sentient, conversational part - never did operate real time. We thought the lag was a communications delay that showed the operator was off planet, but really he was here all the time. It just took him hours of processing time to sustain seconds of self-awareness.”
I get this feeling. Extroverts feed on real-time interaction; they get energy from it. But we introverts get everything sapped out of us. The longer we have to sustain the ‘real-time conversational’ part of our consciousness, the lower our energy gets. A lot of the time, we become far less real-time. Instead of answering with the conversational flow, we lag because we’ve completely phased out.
Some people get really annoyed by this, but it’s just how some people are built. It literally takes me hours of downtime - quiet, alone, thinking time - to have the energy for all the real-time interactions I need to have every day.
And this is why introverts should write. Much like the Mailman’s slow, delayed feedback cycles, writing is a way to interact with the world while also rebuilding an introvert’s energy. It’s done during quiet, alone, thinking time, and it still manifests as something conscious and interactive.
So to any of my friends reading this: write more. This is just yet another attempt on my part to convince you all to write what you think about. I want to read it.
And if nothing else, go read True Names. It’s a seriously good story.
|
OPCFW_CODE
|
//
// PetApiProtocol.swift
// PetFeed
//
// Created by Danko, Radoslav on 03/09/2020.
// Copyright © 2020 Danko, Radoslav. All rights reserved.
//
import Foundation
import Combine
import CoreData
/// Data Repository interfaces
protocol LocalPetRepository {
func fetchFavourites(page: Int) -> AnyPublisher<[DisplayablePet], PetFailure>
func setPet(_ pet: Pet, image: Data?, favourite: Bool) -> AnyPublisher<Pet, PetFailure>
func selectPet(_ pet: Pet) -> AnyPublisher<Pet, PetFailure>
func fetchFavouritesIds() -> [String]
}
/// Pet API
struct LocalPetApi: LocalPetRepository {
private let managedObjectContext: NSManagedObjectContext
init(managedObjectContext: NSManagedObjectContext) {
self.managedObjectContext = managedObjectContext
}
/// Helper function to fetch Favourite Pet ids
/// - Returns: Favorite Pet Ids
func fetchFavouritesIds() -> [String] {
let fetchRequest =
NSFetchRequest<NSDictionary>(entityName: "FavouritePet")
fetchRequest.resultType = .dictionaryResultType
fetchRequest.propertiesToFetch = ["url"]
do {
let petsNSDict = try managedObjectContext.fetch(fetchRequest)
return petsNSDict
.map { $0.allValues }
.flatMap { $0 }
.compactMap { String(describing: $0) }
} catch let error as NSError {
Log.data().error(message: "Could not fetch. \(error), \(error.userInfo)")
return []
}
}
/// Select Pet - if it is stored in Favourite Pets then change its flag accordingly
/// - Parameter pet: Pet to select
/// - Returns: Selected Pet
func selectPet(_ pet: Pet) -> AnyPublisher<Pet, PetFailure> {
let fetchRequest =
NSFetchRequest<NSFetchRequestResult>(entityName: "FavouritePet")
fetchRequest.predicate = NSPredicate(format: "url = %@", pet.url)
do {
if let petsMO = try managedObjectContext.fetch(fetchRequest) as? [FavouritePet] {
if let favPet = petsMO.first {
return .future(Pet(favPet.url ?? "", isFavourite: true))
}
}
return .future(Pet(pet.url, isFavourite: false))
} catch let error as NSError {
Log.data().error(message: "Could not fetch. \(error), \(error.userInfo)")
return .fail(.databaseError(error: error))
}
}
/// Fetch stored - favourite - Pet Images
/// - Parameter page: Paging
/// - Returns: Favourite Publisher
func fetchFavourites(page: Int = -1) -> AnyPublisher<[DisplayablePet], PetFailure> {
let fetchRequest =
NSFetchRequest<NSManagedObject>(entityName: "FavouritePet")
do {
if let petsMO = try managedObjectContext.fetch(fetchRequest) as? [FavouritePet] {
let pets = petsMO.compactMap { (pet) -> DisplayablePet? in
let id = pet.url ?? ""
if let imageData = pet.value(forKeyPath: "image") as? Data {
if let image = imageData.asImage() {
return DisplayablePet(id: id, image: image)
}
}
return nil
}
return .future(pets)
}
return .future([])
} catch let error as NSError {
Log.data().error(message: "Could not fetch. \(error), \(error.userInfo)")
return .fail(.databaseError(error: error))
}
}
/// Update the Pet
/// - Parameters:
/// - pet: Pet to update
/// - image: Optional Image to save
/// - favourite: Is Favourite
/// - Returns: Updated Pet
func setPet(_ pet: Pet,
image: Data? = nil,
favourite: Bool) -> AnyPublisher<Pet, PetFailure> {
let modifiedPet = Pet(pet.url, isFavourite: favourite)
if favourite {
if let entity = NSEntityDescription.entity(forEntityName: "FavouritePet",
in: managedObjectContext) {
if let favPet = NSManagedObject(entity: entity,
insertInto: managedObjectContext) as? FavouritePet {
favPet.url = pet.url
favPet.image = image
}
}
} else {
let fetchRequest =
NSFetchRequest<NSFetchRequestResult>(entityName: "FavouritePet")
fetchRequest.predicate = NSPredicate(format: "url = %@", pet.url)
let deleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)
do {
try managedObjectContext.execute(deleteRequest)
} catch let error as NSError {
Log.data().error(message: "Could not fetch. \(error), \(error.userInfo)")
return .fail(.databaseError(error: error))
}
}
do {
try managedObjectContext.save()
} catch let error as NSError {
Log.data().error(message: "Could not save. \(error), \(error.userInfo)")
return .fail(.databaseError(error: error))
}
return .future(modifiedPet)
}
}
|
STACK_EDU
|
Cryptocurrencies are disassociating themselves from their P2P transfer methods. Golem is a decentralized P2P marketplace for repurposing unused computational power with a supply-and-demand dynamic.
Amazon Web Services offers a similar solution to clients’ pain points, but as a centralized entity. Golem’s structure had become a staple for leveraging Ethereum’s crowdfunding platform before ERC-20 became a blockchain standard.
Several decentralized projects mimic Golem’s pay-per-use business model. Nonetheless, the market is highly competitive, with projects continually identifying a multitude of services that could help support the integration of Web 3.0 into our daily digital interactions.
Golem migrated from its native ecosystem to an ERC-20 compliant token. Golem Network’s CEO emphasized that the company tackles “big tech monopolies with fairer systems,” thus expanding their network architecture. The new network architecture resulted in the creation of a new token.
In November 2020, GNT token holders had the option to migrate to the new network and trade their tokens on a 1:1 basis for the new Ethereum-based token, GLM. Building on Ethereum’s layer 2 widened the scope for new network opportunities, primarily thanks to its scaling capabilities.
Golem further became part of MIT Solve, extending the network’s reach toward early tech adopters. As stated in their press release, they see delivering censorship-free access to computational resources as critical to “embracing the Web 3.0” technologies. Maria Paula, head advisor at Golem, pointed out that the markets, and society in general, are on the precipice of “world-changing” events, which increases the demand for disruptive technologies.
The title of “Airbnb for computers” is not an overstatement. Higher demand for additional resources increases the workload on Golem’s end service. Developers have actively committed to improving the stability and safety of their blockchain end product.
As such, new iterations and code amendments are pushed through Beta phases 1 and 2 on their mainnet. As Golem gathers more institutional as well as user attention, smooth operation on the network is a must for both seekers and providers. What’s more, Golem is actively promoting community bounties to increase dApp testing on their API.
Golem’s claim to credibility as a decentralized marketplace is reflected in their transparency. They released API stats and documentation that let the community improve price prediction strategies.
Golem is also querying community opinions on whether to participate in AAVE or MakerDAO for collateral. Additionally, they incentivize community contribution by featuring Golem’s own blockchain. The Golem Fleet Battle Simulator demonstrates the project’s potential to integrate multiple use cases and use the resulting computational power to process PvP outcomes. Finally, Golem increases community participation on Discord through a reward system through which members can receive 1,500 GLM tokens bi-monthly.
|
OPCFW_CODE
|
This is more of an essay than a blog post, but this subject comes up time and again, and since I tripped across this interesting blog post by Pedro Timóteo about why he has decided not to be a sysadmin any more, I thought now’s as good a time as any to comment on what I think is a significant industry trend in production engineering work.
Pedro here has discovered the essential truth of most sysadmin or DBA jobs: if you’re any good, you will soon be bored and under-appreciated. (That’s a different blog post of his, also worth reading, linked there.)
That’s because the best way to be excited and appreciated in a production engineering role is to let things fail, and then come in riding your white horse to fix things up when everyone notices. There’s only one problem — this is also distasteful to those among us who believe in doing a good job by ensuring things never fail.
Another good way to stay busy is to do everything manually all the time — clone databases, create users, etc. This is also distasteful to those of us who have seen and put to work the huge savings available by automating routine production engineering tasks such as daily verifications.
So what do we do instead? We tune, automate, streamline — in Pedro’s words, “some software upgrades here, some tuning there, some cron entries here, some scripting there, some changes to the network, and so on.” Next thing you know, “most of your job is done,” you’re not so busy any more, and there’s plenty of time, which your employers are happy to fill with dumb, repetitive, non-sysadmin (and therefore non-scriptable) tasks — which, since you have free time, you probably can’t refuse, or at least feel you can’t. Any raise or promotion will certainly not go to you, but to your “hard-working” co-workers, who are always so “busy” and have so much “work” that they stay at work every day after 6, who can never do a task “right now” but only in a week’s time, and who, even though their own results are much inferior to yours, will say it’s you who’s not “dependable”, “dedicated”, or “competent”.
This is the quintessential problem of the best production engineers. It is the impetus bringing the finest of them to come work at Pythian, and driving a large part of our growth.
It’s also a huge piece of the dynamic for most of our competitors’ profit models as MSPs, and the primary differentiator of Pythian vis-a-vis the lot. And finally, it is the underlying factor in why we believe here at Pythian that we are the vanguard of a broad IT shift away from using solely in-house resources for the production engineering functions of database and systems administration. Let me explain.
Our service model allows our customers to subscribe to our services based on a “co-op” model, essentially a retainership expressed in hours per month. We have customers subscribing to the tune of 16 hours per month, our minimum, up to 400 hours per month, our largest routine contract, and indeed up to 700 hours per month, our largest month for a single customer. And everywhere in between; we have tons of customers in the 80 to 250 hours per month range (70 active customers in total as I write). Customers can change their allocation with 30 days’ notice at any time.
We do all the automation, tuning, and streamlining work expected of any production engineer, but there’s a catch — when we automate or tune something and our workload decreases, we don’t need to get bored or have our customers fill up our days with make-work projects. Our customers can literally downsize their contracts in step with our success at tackling the workload. Some major success stories have us handling shops that used to have a full-time DBA with as few as 40 hours per month, one year in.
In the typical MSP model, where companies charge a flat monthly rate in exchange for a checklist of items being done, the fact that good engineers can quickly and dramatically reduce the amount of work that needs doing is in essence their profit model. They’re just like us in that their major cost is their personnel: they are essentially a human services company. However, since they negotiate the monthly rate up-front for a set list of service items, whenever they automate, tune, or streamline, they keep the benefits for themselves, charging the same rate indefinitely or for the term of the contract.
Even worse, when their customers invest in faster hardware or more storage, eliminating the need for tuning and for some configuration and storage management work, they still keep the benefits for themselves. And when the RDBMS or OS vendor releases a major upgrade that further eliminates “busy work”, which has been a major feature of every such release (remember how we used to need to “defragment tables” and how long that used to take?) they still keep those benefits for themselves, even though it’s their customers paying the licensing maintenance costs for those upgrades.
At Pythian, all of those savings are given back to the customer and all we get is our modest mark-up on our rate. For the typical MSP, the fact that it becomes obvious how little the vendor is spending to deliver on the checklist causes huge friction between the service provider and the customer. Meanwhile, at Pythian, since the customer gets to keep those savings for themselves, we get to retain the customers. For instance, we have two active customers that have been active customers continuously since 1999 — 100 monthly renewals and counting! And eight customers that have been customers since 2001 or longer. This is because our model allows them to make working with us a long-term strategy.
So why do I say I believe we’re at the beginning of a broad IT shift away from using solely in-house¹ resources for production engineering work?
- The best DBAs would rather work at Pythian. Much less boredom. Working directly in their field in a company specialized in their area of expertise. Serving multiple businesses and multiple platforms simultaneously. Working with the best and brightest in the field. Being able to blog and prepare presentation abstracts, attend and present at conferences, all on company time. The best DBAs would also rather work at Pythian than at a typical flat-rate MSP provider, for the simple reason that our model allows fun and challenging work to be routed to us, whereas our competitors’ DBAs are stuck delivering the same checklist and that’s it — who would want to do that!?
- We’re saving our customers money while improving the quality of service to their production operations.
These two factors point in the same direction and reinforce each other. The first means that Pythian is a great place to find the leading technical resources (witness our participation in IOUG Collaborate 07 in April and in the MySQL Conference and Expo two weeks ago). The second factor, coupled with efficient markets, means that companies will be taking advantage of the cost and quality advantages of outsourcing some or all of their DBA and sysadmin operations to Pythian or companies much like it.
So you know quite a bit more about Pythian and how our service works now. If you think Pythian can make a contribution to your IT operations, either by outsourcing a DBA or SA opening, or by blending us in, please contact us and let us explain how we might be of service to you.
1. I say “solely in-house” here because one of the successful ways Pythian is put to work by our customers is by blending us in with their in-house resources to tap the synergies while retaining the advantages of the in-house personnel within a larger team. A good example of this is at the University of Pennsylvania, where they have four in-house DBAs blended seamlessly with Pythian.
|
OPCFW_CODE
|
In case you weren't already a fan of the 1986 Transformers movie: Unicron was a giant, planet-sized robot, also known as the God of Chaos.
For me the analogy is almost too obvious. DAG schedulers like Airflow (crons) often become bloated, fragile monoliths (uni-crons). And just like that planet-eating monster, they bring all sorts of chaos to the engineers that maintain and operate them.
There have been quite a few great articles written on breaking up the Airflow monorepo, and to provide context I'll cover these quickly. However, this approach alone does not defeat Unicron. In this world of increasingly decentralized data development, we need to seriously question using just a single scheduler.
Airflow too often reaches the limits of project dependencies, multi-team collaboration, scalability, and overall complexity. It's not Airflow's fault; it's the way we use it. Luckily there are a couple of great approaches to solving these issues:
1) Use multiple project repos - Airflow will deploy any DAG you put in front of it. So with a little bit of effort you can build a deployment pipeline which deploys DAG pipelines from separate project-specific repos into a single Airflow. There are a few techniques here, ranging from DAGFactory (good article here) and leveraging Git submodules to just programmatically moving files around in your pipeline.
2) Containerize your code - reduce the complexity of your Airflow project by packaging your code in separate containerized repositories. Then use the pod operator to execute these processes. Ideally Airflow becomes a pure orchestrator, with very simple product dependencies.
Using both of these techniques, especially in combination, will help make your Unicron less formidable, perhaps only moon-sized. In fact, in many organizations this approach, coupled with a managed Airflow environment such as AWS Managed Workflows, is a really great sweet spot.
As organizations grow and data responsibilities become more federated, we need to ask ourselves an important question: do we really need a single scheduler? I would wholeheartedly say no; in fact, it becomes a liability.
The most obvious problem: a single point of failure. Having a single scheduler, even with resiliency measures, is dangerous. An infrastructure failure, or even a bad deployment, could cause an outage for all teams. In modern architecture we avoid single points of failure if at all possible, so why create one if we don't need to?
Another issue is excessive multi-team collaboration on a single project. Possible, especially if we mitigate with the techniques above, but not ideal. You might still run into dependency issues, and of course Git conflicts.
And then the most obvious question: what is the benefit? In my experience, the majority of DAGs in an organization are self-contained. In other words, they are not using cross-DAG dependencies via External Task Sensors. And if they are, there is a good chance the upstream data product is owned and maintained by another team. So other than observing whether it is done or not, there is little utility in being in the same environment.
My recommendation is to have multiple Airflow environments, either at the team or application level.
My secret sauce (well, one way to accomplish this): implement a lightweight messaging layer to communicate dependencies between the multiple Airflow environments. The implementation details can vary, but here is a quick and simple approach:
- At the end of each DAG, publish to an SNS topic.
- Dependent DAGs subscribe via SQS.
- The first step in the dependent DAG is then a simple poller function (similar to an External Task Sensor) that iterates and sleeps until a message is received.
Obviously the implementation details are malleable, and SQS could be substituted with Dynamo, Redis, or any other resilient way to notify and exchange information.
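The poller step can be sketched as follows. All names here are hypothetical; in a real deployment, `fetch_message` would wrap whatever transport you chose (e.g. an SQS `receive_message` call via boto3), and the function would run as the first task in the dependent DAG:

```python
import time

def wait_for_upstream(fetch_message, poke_interval=1.0, timeout=60.0):
    """Iterate and sleep until an upstream-complete message arrives,
    much like an External Task Sensor.

    fetch_message: callable returning a message, or None when nothing
    is waiting. This is where the SQS/Dynamo/Redis read would go.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        message = fetch_message()
        if message is not None:
            return message
        time.sleep(poke_interval)  # back off between polls
    raise TimeoutError("upstream DAG did not signal completion in time")

# Stubbed usage: the third poll delivers the upstream-complete signal.
attempts = iter([None, None, {"dag": "upstream_etl", "status": "success"}])
msg = wait_for_upstream(lambda: next(attempts), poke_interval=0.01)
assert msg["status"] == "success"
```

Because the transport is injected, the same poller works unchanged whether the message lands in SQS, Dynamo, or Redis, which keeps the environments loosely coupled.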
You could even have your poller run against the API of other Airflow instances, although that would couple you to another project's implementation details (i.e. specific Airflow infrastructure and DAGs rather than the data product). Perhaps that other team might change the DAG that builds a specific product, replace Airflow with Prefect, or move to Step Functions. In general we want to design so that components can evolve independently, i.e. loose coupling.
One of my very first implementations of this concept was a simple client library named Batchy, backed by Redis and later Dynamo. I created this long before Data Mesh was a thing, but it was guided by the same pain points highlighted above. This simple system has been in place for years, integrating multiple scheduler instances (primarily Rundeck) with little complaint and great benefit.
|
OPCFW_CODE
|
11 September, 2017
A few words about Rust
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. The project was established with the aim of creating a safe, concurrent, practical systems language while providing efficient code and a comfortable level of abstraction (FAQ). The 1.0 release of Rust was launched on 15 May, 2015, making it a young language that is still rapidly developing. With an engaged and helpful community, terrific documentation and elegant package registry (crates.io), Rust is a pleasure to learn and work with.
Why did I write this tutorial?
I’ve been learning and coding in Python for the past 10 months, primarily working on backend systems and RESTful APIs. After watching Samuel Cormier-Iijima’s excellent talk from PyCon Canada 2016, Extending Python with Rust, I felt inspired to begin learning (thanks Sam!). Writing Python libraries in Rust seemed like an easy-win and a useful technique for future projects. Thanks to the well-written Rust Book and helpful community members on the #rust-beginner irc channel, I had my first library working in two days. Buoyed by my early success, I decided to put together this simple tutorial in the hopes of helping others and reinforcing my understanding.
Why write Python libraries in Rust?
While Python is undoubtedly the Swiss-army knife of programming languages, there are situations in which a faster, lower-level and more lightweight toolset is required. For example, computationally-intensive and time-critical applications in industrial communications systems using embedded hardware. In such cases, an optimal solution can be crafted by building the backbone of the system in Python and relying on Rust extensions where greater low-level control and performance are required. I recommend reading Rust for Python Programmers by Armin Ronacher if you’re looking for greater detail on this subject.
1. Install Rust
If you’re running a *nix distribution, run the following command in your terminal:
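The standard rustup one-liner (check the installation page mentioned below in case it has changed since this was written):

```shell
curl https://sh.rustup.rs -sSf | sh
```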
If you’re running Windows, download and run the following executable:
Further information can be found on the official Rust installation page.
2. Create a working directory
Navigate to your preferred directory and create a new Rust project. I have a projects directory on my home partition, with a rust directory inside of that. Running the following command will create a directory according to the project name you specify and populate it with a number of files:
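For this tutorial the project is called python_lib_tut (matching the directory used later); the --lib flag makes Cargo create a library project rather than a binary:

```shell
cargo new python_lib_tut --lib
```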
3. Set configuration in Cargo.toml
Move into the newly created directory and open the Cargo.toml file with your preferred text editor:
You’ll see the package information at the top of the file, including name, version and authors (that’s you). Correct the authors information if required. This information is important since it’ll be included with your package if you choose to publish it to crates.io. You will also notice an empty dependencies section.
Next, we’re going to define the name and crate-type of our library. The name will be used to import the library in Python. Add the following after the package section:
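A sketch of that section, assuming the module will be called status. The crate-type tells rustc to build a shared library that Python can load (older guides used "dylib"; "cdylib" produces a smaller artifact):

```toml
[lib]
name = "status"
crate-type = ["cdylib"]
```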
Then we need to define the project dependencies. We’re going to be using the cpython bindings to integrate our Rust code with Python.
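One way to declare it, pulling the latest stable release from crates.io:

```toml
[dependencies]
cpython = "*"
```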
The above dependency declaration pulls the latest stable release of cpython. For the very latest release, declare as follows:
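A sketch of the git form, pointing at the upstream rust-cpython repository:

```toml
[dependencies.cpython]
git = "https://github.com/dgrunwald/rust-cpython"
```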
Here is the complete Cargo.toml code for our project:
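A sketch of the whole file - the version number and author details are placeholders, and the lib name is what Python will import:

```toml
[package]
name = "python_lib_tut"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]

[lib]
name = "status"
crate-type = ["cdylib"]

[dependencies]
cpython = "*"
```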
Save the updated configuration and exit.
4. Open src/lib.rs and begin editing
Next, we’re going to open src/lib.rs and begin editing; this is where the actual code of our library lives. When you first open the file you’ll see the following code:
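At the time, cargo generated a stub test module - something like:

```rust
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
    }
}
```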
Let’s delete that code so we have a blank file and then enter the following:
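A sketch of those opening lines, following the rust-cpython idiom:

```rust
#[macro_use] extern crate cpython;

use cpython::{Python, PyResult};
```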
In the above code segment, the first line has to do with macros - a metaprogramming facility in Rust - and communicates to the compiler that we wish to use all the macros defined in the crate(s) listed below. I encourage you to investigate further in the Macros section of the first edition of the Rust Book.
The second line of code is a declaration linking the cpython crate to our new library. The contents of the crate are downloaded at compile-time and incorporated into our library. Further details here.
The final line of the above code segment defines the types we’re drawing from cpython.
Next, we’re going to write our function. For this example, the function will receive a string and then return a string based on a simple pattern-match:
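A sketch of the function as described below (the function name status is an assumption; it relies on the cpython types imported earlier):

```rust
fn status(_py: Python, val: String) -> PyResult<String> {
    // "online" maps to "green"; everything else (the underscore arm)
    // maps to "red".
    match val.as_str() {
        "online" => Ok("green".to_string()),
        _ => Ok("red".to_string()),
    }
}
```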
The function has two parameters, the first is a Python object (allowing us to interface with the Python interpreter) and the second is a String that will be passed into our Rust code from Python. PyResult is an object that allows us to return exceptions to Python. This line of code is known as a signature in Rust. We then use match, a nifty feature of Rust, to run a check on the string (val) that has been passed into the function. If val is “online”, we return “green”, for all other cases (note the underscore) we return “red”. The code within Ok() defines the return value of our function, and to_string() is a trait used to convert a given value to a String. That’s all there is to the function!
After writing our function code we need to integrate Rust with the Python interpreter:
py_module_initializer is a macro defined by the cpython crate we imported at the top of our library. The first parameter (‘status’) is the name of our module, the second parameter is the Python2 naming for our module, while the third parameter is for Python3. The last segment of code on the first line (py, m) allows the modification of received module objects.
We then add a docstring and use the py_fn! macro to build the Python version of our function. That’s all there is to it! Save and exit.
Here’s the complete code for our module:
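Assembled from the pieces above, a sketch of the full src/lib.rs. Compiling it requires the cpython crate and the Python development headers; the docstring text is my own:

```rust
#[macro_use] extern crate cpython;

use cpython::{Python, PyResult};

fn status(_py: Python, val: String) -> PyResult<String> {
    match val.as_str() {
        "online" => Ok("green".to_string()),
        _ => Ok("red".to_string()),
    }
}

// Module name, Python2 init symbol, Python3 init symbol, then the
// closure that populates the module object.
py_module_initializer!(status, initstatus, PyInit_status, |py, m| {
    m.add(py, "__doc__", "Simple status checker written in Rust.")?;
    m.add(py, "status", py_fn!(py, status(val: String)))?;
    Ok(())
});
```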
5. Compile the code
You should be in the root directory of your Rust module (in my case: /home/projects/rust/python_lib_tut). Run the following command to compile the code:
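The release build is the one you want here - debug builds are slower and land in target/debug instead:

```shell
cargo build --release
```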
Note: The default behaviour of cpython is to use whichever Python3.x interpreter is set in PATH at compile time.
6. Copy the library
If everything went smoothly, you should see a new directory named ‘target’ in the project root. The library itself can be found in target/release and is named ‘libstatus.so’. Let’s copy the library and then fire up the Python interpreter to test our function:
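Assuming the paths above, the copy step looks something like this - Python imports the module by the file's name, so the lib prefix is dropped:

```shell
cp target/release/libstatus.so ./status.so
```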
7. Open Python interpreter and test
Our brand new module in action
That’s all there is to it!
I recommend looking through the rust-cpython repo for additional docs and info. It took me a while to figure out how to compile the library for Python2.7. Here’s how:
Open the Cargo.toml file in the root of your Rust module and edit the dependencies section as follows:
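A sketch of the Python 2.7 variant, based on the cpython crate's feature flags - treat the exact feature name as an assumption and check the rust-cpython README:

```toml
[dependencies.cpython]
version = "*"
default-features = false
features = ["python27-sys"]
```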
I really hope you found this tutorial helpful in some way! If you notice any errors or inaccuracies, please drop me a message via Twitter or email (firstname.lastname@example.org). I’m already looking forward to my next Rust project!
|
OPCFW_CODE
|
Profiling PHP with XDebug
(This post is a fork of a draft version of a tutorial / guide originally written as an internal document whilst at my internship.)
I've been looking into xdebug's profiling function recently, and I've just been tasked with writing up a guide on how to set it up and use it, from start to finish - so I thought I'd share it here.
While I've written about xdebug before in my An easier way to debug PHP post, I didn't end up covering the profiling function - I had difficulty getting it to work properly. I've managed to get it working now - this post documents how I did it. While this is written for a standard Debian server, the instructions can easily be applied to other servers.
For the uninitiated, xdebug is an extension to PHP that aids in the debugging of PHP code. It consists of 2 parts: The php extension on the server, and a client built into your editor. With these 2 parts, you can create breakpoints, step through code and more - though these functions are not the focus of this post.
To start off, you need to install xdebug. SSH into your web server with a sudo-capable account (or just use root, though that's bad practice!), and run the following command:
sudo apt install php-xdebug
Windows users will need to download it from here and put it in their PHP extension directory. Users of other Linux distributions and Windows may need to enable xdebug in their php.ini file manually (Windows users will need extension=xdebug.dll; Linux systems use zend_extension=xdebug.so).
Once done, xdebug should be loaded and working correctly. You can verify this by looking at the php information page. To see this page, put the following in a php file and request it in your browser:
<?php phpinfo(); ?>
If it's been enabled correctly, you should see something like this somewhere on the resulting page:
With xdebug set up, we can now begin configuring it. Xdebug gets configured in php.ini, PHP's main configuration file. Under Virtualmin each user has their own php.ini because PHP is loaded via CGI, and it's usually located at ~/etc/php.ini. To find it on your system, check the php information page as described above - there should be a row with the name "Loaded Configuration File":
Once you've located your php.ini file, open it in your favourite editor (or type sensible-editor php.ini if you want to edit over SSH), and put something like this at the bottom:
[xdebug]
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true
xdebug.profiler_enable=false
xdebug.profiler_enable_trigger=true
xdebug.profiler_enable_trigger_value=ZaoEtlWj50cWbBOCcbtlba04Fj
xdebug.profiler_output_dir=/tmp
xdebug.profiler_output_name=php.profile.%p-%u
Obviously, you'll want to customise the above. The xdebug.profiler_enable_trigger_value directive defines a secret key we'll use later to turn profiling on. If nothing else, make sure you change this! Profiling slows everything down a lot, and could easily bring your whole server down if this secret key falls into the wrong hands (that said, simply having xdebug loaded in the first place slows things down too, even if you're not using it - so you may want to set up a separate server for development work that has xdebug installed, if you haven't already). If you're not sure what to set it to, here's a bit of bash I used to generate my random password:
dd if=/dev/urandom bs=8 count=4 status=none | base64 | tr -d '=' | tr '+/' '-_'
xdebug.profiler_output_dir lets you change the folder that xdebug saves the profiling output files to - make sure that the folder you specify here is writable by the user that PHP is executing as.
If you've got a lot of profiling to do, you may want to consider changing the output filename, since xdebug uses a rather unhelpful filename by default. The property you want to change here is xdebug.profiler_output_name - it supports a number of special % substitutions, which are documented here. I can recommend something like phpprofile.%t-%u.%p-%H.%R.cachegrind - it includes a timestamp and the request uri for identification purposes, while still sorting chronologically. Remember that xdebug will overwrite the output file if you don't include something that differentiates it from request to request!
With the configuration done, we can now move on to actually profiling something :D This is actually quite simple. Simply add the XDEBUG_PROFILE GET (or POST!) parameter to the url that you want to test in your browser - for example index.php?XDEBUG_PROFILE=ZaoEtlWj50cWbBOCcbtlba04Fj, where the value is the secret key you configured above.
Adding this parameter to a request will cause xdebug to profile that request, and spit out a cachegrind file according to the settings we configured above. This file can then be analysed in your favourite editor, or, if it doesn't have support, an external program like qcachegrind (Windows) or kcachegrind (Everyone else).
If you need to profile just a single AJAX request or similar, most browsers' developer tools let you copy a request as a wget command (Chromium-based browsers; Firefox has an 'edit and resend' option), allowing you to resend the request with the XDEBUG_PROFILE GET parameter.
If you need to profile everything - including all subrequests (only those that pass through PHP, of course) - then you can set the XDEBUG_PROFILE parameter as a cookie instead, and it will cause profiling to be enabled for everything on the domain you set it on. Here's a bookmarklet that sets the cookie: replace insert_secret_key_here with the secret key you created for the xdebug.profiler_enable_trigger_value property in your php.ini file above, create a new bookmark in your browser, and paste the bookmarklet in (making sure that your browser doesn't auto-remove the javascript: prefix).
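A minimal version of such a bookmarklet might look like the string below. The cookie name XDEBUG_PROFILE is what xdebug checks; the helper function is purely illustrative, so the generated string can be inspected before you paste it in as the bookmark's URL:

```javascript
// Illustrative helper: builds the bookmarklet string for a given secret key.
// The returned string is what you paste in as the bookmark's URL.
function makeXdebugBookmarklet(secretKey) {
    return "javascript:document.cookie='XDEBUG_PROFILE=" +
        encodeURIComponent(secretKey) + ";path=/';void(0);";
}

console.log(makeXdebugBookmarklet("insert_secret_key_here"));
```

Delete the cookie again when you're done - remember that every PHP request on that domain is being profiled while it's set.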
Was this helpful? Got any questions? Let me know in the comments below!
Sources and further reading
- PHPStorm-specific documentation, guides, and tutorials:
- Webgrind - an easy-to-set-up web-based gui for analysing xdebug profiles, by Joakim Nygård
- kcachegrind - The original (as far as I know) profile analysis tool. Should be available in most distributions' default repositories under something like kcachegrind
- XDebug Snippets by jtdp
|
OPCFW_CODE
|
Like I did with Windows Vista almost three years ago (btw, I can't believe how quickly the time has gone), in the past few days I have been trialling the Release Candidate of the upcoming Windows 7.
For those not in the know, a 'Release Candidate' is basically just that: more advanced in the development lifecycle than a beta, and ideally it is a candidate for the final release, pending any bugs or features that have yet to be fixed or included.
Like last time I heard 'Enough talking, show us some screenshots!', so like last time below are a few selected thumbnails; you can see the full screenshots here (as with the Vista beta a few years ago, because it's running in a virtual machine, the interface of the OS is without a lot of the fancy enhancements, e.g. transparency).
My views on the RC: it's a very polished version of the OS, and it looks nearly feature-complete. The biggest positive for me over Vista is the UAC implementation. While installing and customising it, I didn't get a single annoying Vista-style UAC prompt, which is really great (providing that the UAC implementation is actually protecting the operating system in the background).
A lot of other Vista quirks have been rectified too: the most obvious for me was the inclusion of 'Shut Down' by default on the start menu, rather than the 'sleep' in Vista. Another big winner for me is the 'jump lists' shown when right-clicking on a taskbar item; I can see those being very useful. The OS is also very lean: a lot of the normally included MS programs (e.g. Movie Maker, Photo Gallery) aren't included, and you'll have to download them separately for free if you want them.
The main annoyance for me is the new taskbar design and implementation. I've got to say that I really don't like it, and as you can probably tell by the screenshots, I've basically customised it back to the way I like it, especially regarding the quick-launch stuff. The tray area (bottom right) is also very drab and devoid of colour. I don't mind them trying new stuff to make it more user friendly, as long as it lets me change it to what I find logical :).
Other than that, Windows 7 seems very much like "Vista: take 2", and it seems like a logical progression for Microsoft to get rid of the bad publicity that Vista's gotten and encourage people to move on from XP.
Anywho, give it a try if you want, it's freely available from Microsoft for the time being (the RC installations will expire in June 2010, which is a very generous amount of time).
|
OPCFW_CODE
|
Mashups using the same API (3222)
The National NEMO Network's Google Maps mashup of low impact development (LID) practices around the United States.
The damage radius of each quake is displayed around its epicenter. Shows the power and reach of quakes as they happen. US, European, and Australian feeds improve global coverage.
Shows the real-time position of ultralight aircraft on Google Maps; you can also see the cockpit, and show your own plane too.
Related Articles (1328)
DeepEarth, a new map control that integrates Microsoft's Virtual Earth mapping service (our Virtual Earth API Profile) with the Silverlight 2.0 framework, is now available as an open source project on CodePlex.
With so many people glued to their smart phones these days, text message marketing simply makes sense. Done right, it’s easy, personable and gets the word out instantly. Slick Text is on-board with the idea. The mobile marketing company has just released the Slick Text API that enables third-party developers to integrate its text message service into their own products.
Word of mouth is a powerful marketing tool, and businesses (not to mention the consumers), can benefit greatly from helpful reviews. Popyoular is an editorial review-based recommendation and discovery platform. It's aimed specifically at film, music, books and games. The idea is to connect a website's good content with trusted reviews and opinions about that content, helping to keep users engaged and directing them to things they may otherwise have overlooked. Popyoular's API makes it possible for developers to integrate this functionality into any other website.
RELATED APIs (2560)
| API | Description | Category | Date |
|---|---|---|---|
| Cryptocurrency Alternative Data | Provides market data about more than 300 types of cryptocurrency such as coin lists, reddit discussions, tweets, GitHub data, search engine keyword scores and... | Cryptocurrency | 06.13.2019 |
| MoonCalc | Provides access to mooncalc.org to determine the course of the moon, moonrise, moon angle, full moon and lunar eclipse for any location and time. It allows you to integrate the... | Mapping | 06.11.2019 |
| SunCalc | Provides access to suncalc.org to determine the course of the sun, sunrise, sun angle, shadow length, solar eclipse for any location and time. It allows you to integrate the... | Mapping | 06.11.2019 |
| NYC Geoclient | Allows developers to integrate with the NYC Department of City Planning’s Geosupport system. Geosupport is a mainframe-based geocoding system that provides coordinate and... | Government | 06.10.2019 |
| Tollguru Toll | Provides access to service to calculate tolls and gas costs across all toll roads, tunnels, bridges, turnpikes and tollways in the USA, Canada, Mexico and India. This includes... | Mapping | 05.29.2019 |
|
OPCFW_CODE
|
I am trying to fix this problem with my wife's laptop and it's really frustrating. It's a Dell Inspiron N5030 and it's just over a year old. Unfortunately the 1 yr warranty has expired, and this problem started happening in the past 2-3 months. The problem is, when you start the laptop, it has a completely blank white screen. I have read quite a number of posts around, and many have suggested that this is usually a loose video cable connection, a bad lcd video cable, or a problem with the motherboard. I will explain the steps I've done so far, and hopefully someone here can guide me on what I can do next to resolve this issue.
First of all, when this problem started happening, it was very unpredictable. Meaning, sometimes the laptop starts fine but sometimes the white screen appears. But as the weeks went by, the white screen began to appear more frequently, and now it appears all the time. This is what I've found so far:
1) When connecting an external monitor, the external monitor works fine. The weird thing is, sometimes when I connect the external monitor, the laptop white screen disappears and the laptop lcd screen begins to function again. I had this as a work-around for a while: every time the white screen appeared, I connected the external monitor and removed it when the laptop screen worked again. But eventually this stopped working as well.
2) When the white screen started to come up all the time, I noticed that the problem goes away when I remove the outer cover of the screen (the black frame around the screen that can be removed by pulling it from the screen).
3) Last week, I removed the lcd screen and took it apart to see if there was any loose connection at the back. I noticed there is no loose connection, and the inverter was fine as well. However, I noticed that the black cord (of the video cable) that runs from the back of the lcd screen to the motherboard was sort of jammed around the hinge area. I fixed this cord so it's seated properly, and made sure that the outer frame doesn't push this cord at the hinge when I put everything back together. When I did this, the laptop worked fine for the next week without a problem, but today the white screen started to appear again!!
So based on what I've seen, I think that the cord is getting jammed around the hinge and hence causing this problem. But I am not a technical person and not sure if that's correct. It's just a guess. So I don't know what to do next. Does it mean that the video cable is stuffed, or is it just the jamming at the hinge causing this issue? Do I need to change the complete lcd video cable, or is there another solution, like moving this cord away from the hinge, to stop this problem from happening? What do I try next?
I appreciate any assistance and I thank you for taking your time to read my post.
Solved! Go to Solution.
Welcome to the community. I see that you did some troubleshooting steps. Sounds like a hardware problem. It's really hard to tell which specific part is causing the issue. I can tell that the issue is intermittent. Possible points of failure are the LCD cable, the LCD itself, and the port on the motherboard where you plug in the video cable. I suggest checking the connectivity of the cables again. If this does not work, you probably need to replace the whole LCD kit.
Hope this helps,
thank you Elijah...I guess I will next check the connection of the LCD cable to the motherboard. I haven't checked this connection to the motherboard yet, since I guess I would need to disassemble the keyboard and the other parts to get to it. I have just downloaded the service manual for this laptop model and I'll see how I go with it. fingers crossed!
Good thing you know computers well.
Yes, you need to remove the keyboard of this computer to reach the cable inside. Let me know if you need assistance. Good luck!
It's actually the first time I have disassembled a laptop, and thank god I didn't break it! lol...
Thank you for your support Elijah..I just opened up the laptop base and checked the video cable connection to the motherboard. The cable was definitely not loose. If anything, it was actually pretty tightly secured. However, I disconnected the cord from the motherboard and re-connected it. I also re-aligned the cord that was running through the hinges. Like I had mentioned before, it appears as if the cords were jammed flat around the hinge area, and I'm not sure if this was causing the video cable to not work properly and hence show the white screen now and then, since the white screen only comes up sometimes and not always. I've put everything back together, and I tried restarting the laptop a few times; so far the problem hasn't happened yet. I need to wait for a few days and see if it slowly begins to happen again. Last time, when I removed the bezel and re-aligned this cord on the bottom left hinge, it worked for 3-4 days and the white screen started again after opening and closing the laptop lid over that period. This time, I moved the cord coming from the motherboard and adjusted it a bit more, and I'll see..fingers crossed.
If the problem happens again, what is the next step I can try? Do I try changing the video cable first and see how it goes? At least the video cable is cheaper than changing the LCD. Also, I doubt the LCD is faulty, since when it works, the screen is absolutely fine with good clarity.
Thank you again for guiding me through this. Much obliged!
How's this solution working for you? I'm having the same problem. Have already replaced the LCD only to have the issue reappear (so to speak).
hi randypittes....sorry to hear that replacing the LCD did not solve it for you! that's a shame...changing the lcd would have cost you quite a bit!
Fortunately the problem hasn't happened again yet, but I don't want to get my hopes up too high. It is still too early to say, since this laptop hasn't been used a lot over the last week. In the past, when I re-aligned the cord at the top cover near the hinge, I noticed that the problem temporarily went away, and it occurred again after the laptop lid had been opened and closed a few times (maybe after 30-40 times). That's why I was guessing it was the video cable at fault. Anyhow, if it happens again, changing the video cable is the next thing I will try, and it's pretty cheap off ebay. I will post an update over the next few weeks.
What model are you using? Is it the same as me - 'Dell Inspiron 15 N5030'?
Same exact model. I inspected the cables and plugs when I had it apart -- everything looks fine. Problem is intermittent, like yours. Works fine on an external monitor. So, you replaced the LVDS cable and the problem hasn't returned?
It would seem that the video chip is fine if it runs an external monitor? It's also interesting that once you plug in the VGA cable the LCD works fine again.
In any case, please let me know if replacing the cable is still a good solution for you.
|
OPCFW_CODE
|
How well the conclusion from the "Ignition!" book on rocket fuel science stood the test of time?
There was a wonderful book called "Ignition!" by John D. Clark, published around 1972, very interesting to read. That book covers the history of rocket fuel development in the middle of the 20th century, from an insider's standpoint. It ends with this claim, among others:
There appears to be little left to do in liquid propellant chemistry, and very few
important developments to be anticipated. In short, we propellant
chemists have worked ourselves out of a job. The heroic age is over.
But it was great fun while it lasted.
The author's conclusion behind this claim is that the following list of fuel types is the best that can be made, and no further significant improvement seems possible:
Short-range tactical missiles - RFNA-UDMH, ClF5, hydrazine.
Long-range strategic missiles, lunar landers, service modules - N2O4, hydrazine.
First stage space boosters - liquid oxygen + RP-1
Upper stages of space boosters - J-2 (hydrogen-oxygen combo)
Deep space - methane, ethane, diborane with OF2 or ONF3, NO2F as oxidizers.
What were the actual advances in rocket fuel engineering since 1972? Have the above claims stood the test of time?
To better illustrate what I'm talking about I'm including verbatim copies of the relevant pages from the book purely with the "fair use" intentions, but if it's inappropriate I'll take them down no questions asked.
I think he even managed to overestimate developments in here. BTW I don't see why post here and not on Space Exploration.SE
@Mithoron Well... :D This book left me with an impression of being a book about chemistry, and a pretty hardcore one at that. :D But why not, let me ask the same thing on Space Exploration.
Cross-posting is frowned upon. If the post gets closed than it can get migrated there, but you shouldn't repost it there.
@Mithoron lol, this question has actually already been asked on SE.SE!.. :D https://space.stackexchange.com/questions/19608/what-was-the-result-of-the-propellant-predictions-in-the-last-chapter-of-igniti Thank you so much for pointing me to that community.
Would be so cool to see B2H6 + OF2 flame, though!
For readers, RFNA-UDMH stands for red fuming nitric acid + unsymmetrical dimethylhydrazine
Agreed -- there have been minor improvements (e.g., ammonium dinitramide for more toxic hydrazine, https://cubesat-propulsion.com/comparing-cubesat-thruster-propellants-adn-hydrazine/), but to get much more specific impulse requires non-chemical thrusters, e.g. ion such as iodine, https://www.nature.com/articles/s41586-021-04015-y) or nuclear.
|
STACK_EXCHANGE
|
MS SQL: Filter datastream to first value every 15 minutes
I would like to apply a filter to the data, with only 1 row for every 15 minutes returned.
I have 2 tables of data; each table contains 2 columns: "Tijd" as a timestamp and "Kanaal 1" as a float. A new row is added to each table based on the frequency of the program (Table 1) or an external trigger (Table 2).
My current code works on the first table
select [Tijd], [Kanaal 1]
FROM Table_Metingen
WHERE datepart(mi,tijd) % 15 = 0
Table 1: (regulary updated)
Tijd | Kanaal 1
2016-06-27 00:00:00 | 53
2016-06-27 00:01:00 | 53
2016-06-27 00:02:00 | 53
2016-06-27 00:03:00 | 53
2016-06-27 00:04:00 | 53
2016-06-27 00:05:00 | 53
2016-06-27 00:06:00 | 53
2016-06-27 00:07:00 | 53
Table 2: (updated by an external trigger)
Tijd | Kanaal 1
2016-06-27 00:00:01 | 53
2016-06-27 00:01:02 | 53
2016-06-27 00:01:04 | 53
2016-06-27 00:01:10 | 53
2016-06-27 00:02:04 | 53
2016-06-27 00:05:03 | 53
2016-06-27 00:06:02 | 53
2016-06-27 00:10:01 | 53
Output of current code would be as following:
Table 1: (regulary updated)
Tijd | Kanaal 1
2016-06-27 00:00:00 | 53
2016-06-27 00:15:00 | 53
2016-06-27 00:30:00 | 53
2016-06-27 00:45:00 | 53
2016-06-27 01:00:00 | 53
2016-06-27 01:15:00 | 53
2016-06-27 01:30:00 | 53
2016-06-27 01:45:00 | 53
Table 2: (updated by an external trigger)
Tijd | Kanaal 1
2016-06-27 00:00:01 | 53
2016-06-27 00:15:02 | 53
2016-06-27 00:30:04 | 53
2016-06-27 00:45:00 | 53
2016-06-27 00:45:02 | 53 < Extra row, not needed
2016-06-27 01:00:01 | 53
2016-06-27 01:15:03 | 53
2016-06-27 01:30:01 | 53
2016-06-27 01:30:05 | 53 < Extra row, not needed
2016-06-27 01:45:02 | 53
Have you considered using a time table? If you have a column that contains the nearest quarter hour you can group by this.
The additional rows are due to the seconds in your Tijd column. 01:30:01 and 01:30:05 both fulfill the check minutes % 15 = 0. So either you get rid of the seconds in your query, or you use a CTE with a row_number() and only select the rows with "rownum = 1" (as an example).
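A sketch of that CTE approach on SQL Server: each Tijd is bucketed into its quarter-hour with integer division, and only the earliest row per bucket is kept, so it works for both the regular and the irregularly-triggered table:

```sql
WITH ranked AS (
    SELECT [Tijd], [Kanaal 1],
           ROW_NUMBER() OVER (
               -- Bucket = minutes since 1900-01-01 divided by 15;
               -- the earliest row in each 15-minute bucket gets 1.
               PARTITION BY DATEDIFF(minute, 0, [Tijd]) / 15
               ORDER BY [Tijd]
           ) AS rownum
    FROM Table_Metingen
)
SELECT [Tijd], [Kanaal 1]
FROM ranked
WHERE rownum = 1;
```

Unlike the minutes % 15 = 0 filter, this also returns a row for buckets where no reading landed exactly on a quarter-hour mark.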
|
STACK_EXCHANGE
|
Cargo (Rust) is at a very old version, v0.13.0-nightly
Is there any way to update this?
@ryanpeach Have you considered https://github.com/mozilla/nixpkgs-mozilla ?
Where are you getting that version from? Are you accidentally using an old overlayed package, or picking it from an ancient channel?
$ nix-shell -I nixpkgs=channel:nixos-19.09 -p cargo --run 'cargo -vV'
cargo 1.37.0
release: 1.37.0
nix-shell -I nixpkgs=channel:nixos-unstable -p cargo --run 'cargo -vV'
cargo 1.39.0
release: 1.39.0
# Edit this configuration file to define what should be installed on
# your system. Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running ‘nixos-help’).
{ config, pkgs, ... }:
{
# The global useDHCP flag is deprecated, therefore explicitly set to false here.
# Per-interface useDHCP will be mandatory in the future, so this generated config
# replicates the default behaviour.
networking.useDHCP = false;
networking.interfaces.wlp1s0.useDHCP = true;
# Configure network proxy if necessary
# networking.proxy.default = "http://user:password@proxy:port/";
# networking.proxy.noProxy = "<IP_ADDRESS>,localhost,internal.domain";
# Select internationalisation properties.
i18n = {
consoleFont = "Lat2-Terminus16";
consoleKeyMap = "us";
defaultLocale = "en_US.UTF-8";
};
# Set your time zone.
time.timeZone = "America/New_York";
# List packages installed in system profile. To search, run:
# $ nix search wget
environment.systemPackages = with pkgs; [
hledger # For finances
mmv # Batch move utility
calibre # Books
gcc # Standard compiler
gnumake # Makefiles
libffi
wget # Downloader
vim # Minimal text editor
tmux # Multiplexing
tor # Tor Protocol
brave # Web Browser
fortune # Funny sayings
figlet # Big Font
cowsay # Funny Cow
screenfetch # Show linux info
lolcat # Colors
emacs # OS
git # Version control system
stow # Dotfiles Manager
unzip # Unzips packages
feh # Wallpaper Manager
wine # Windows
riot-desktop # Messaging
xcompmgr # Window Fading
unclutter # Gets rid of mouse
# Media
mpv # Media Player
python37Packages.mps-youtube # Youtube from terminal
spotify # spotify-tui # TODO: Spotify
steam # Steam
# Rust
rustc # TODO: Rust Programming Language
rustup # Rust updater
cargo # Rust language package manager
rustfmt # Rust linter
# Zsh
zsh-powerlevel9k
# Xmonad
xlibs.xmessage
haskellPackages.ghc
haskellPackages.xmonad-contrib
haskellPackages.xmonad-extras
haskellPackages.xmonad-wallpaper
haskellPackages.xmonad
haskellPackages.xmobar
# Haskell Emacs
haskellPackages.apply-refact
haskellPackages.hlint
haskellPackages.stylish-haskell
haskellPackages.hasktags
haskellPackages.hoogle
thefuck # Corrects bad bash
ranger # TODO: File Browser
# weechat # TODO: IRC & Matrix
# weechat-matrix-bridge # For the weechat matrix library
dmenu # Launch bar
rxvt_unicode # TODO: Terminal Emulator
bc # Basic Calculator
scrot # TODO: Screenshot capturing
physlock # Screen locker
stack # Haskell Environment Handling
ispell # Spelling
xfontsel # Fonts
xlsfonts # Fonts
xclip # Clipboard command line util
xautolock # To lock the screen
# Wifi
networkmanager # For graphical wifi management
# gnome.nm-applet
# Python 3
python37
# python36Packages.poetry
python37Packages.virtualenv
python37Packages.virtualenvwrapper
python37Packages.yapf
python37Packages.flake8
python37Packages.rope
python37Packages.mypy
pulseaudioFull # TODO: Audio
openjdk # JDK
gradle # Java package manager
];
# IN CASE OF EMERGENCY
# nixos-help, nixos-option
# nix-env -qaP
services.nixosManual.showManual = true;
# GUI Network Manager
# nmcli device wifi rescan
# nmcli device wifi list
# nmcli device wifi connect <SSID> password <password>
networking.networkmanager.enable = true;
# Some programs need SUID wrappers, can be configured further or are
# started in user sessions.
# programs.mtr.enable = true;
# programs.gnupg.agent = { enable = true; enableSSHSupport = true; };
# Collect nix store garbage and optimize daily
nix.gc.automatic = true;
nix.autoOptimiseStore = true;
# Enable Adobe Flash
nixpkgs.config.firefox.enableAdobeFlash = true;
# Zsh & Bash
programs.zsh = {
enable = true;
ohMyZsh = {
enable = true;
};
promptInit = "source ${pkgs.zsh-powerlevel9k}/share/zsh-powerlevel9k/powerlevel9k.zsh-theme";
};
# Emacs
services.emacs.enable = true;
services.emacs.defaultEditor = true;
services.emacs.install = true;
# Xserver
services.xserver = {
# Enable the X11 windowing system.
enable = true;
layout = "us";
xkbOptions = "eurosign:e";
# X Auto Lock
xautolock.enable = true;
xautolock.time = 15;
xautolock.notify = 10;
# Display Manager
displayManager = {
sddm.enable = true;
sddm.autoNumlock = true;
};
# Desktop Manager
desktopManager = {
plasma5.enable = false;
xterm.enable = false;
};
# XMonad
windowManager.xmonad = {
enable = true;
enableContribAndExtras = true;
extraPackages = haskellPackages: [
haskellPackages.xmonad-contrib
haskellPackages.xmonad-extras
haskellPackages.xmonad-wallpaper
haskellPackages.xmonad
haskellPackages.xmobar
];
};
windowManager.default = "xmonad";
};
# Fonts
fonts = {
enableFontDir = true;
enableGhostscriptFonts = true;
fonts = with pkgs; [
ubuntu_font_family
liberation_ttf
powerline-fonts
];
};
# Define a user account. Don't forget to set a password with ‘passwd’.
users.users.rgpeach10 = {
isNormalUser = true;
extraGroups = [ "wheel" ]; # Enable ‘sudo’ for the user.
shell = pkgs.zsh;
};
# Weechat
# services.weechat.enable = true;
# packageOverrides = pkgs: rec {
# weechat = pkgs.weechat.override {
# configure = {availablePlugins}: {
# plugins = with availablePlugins: [weechat-matrix-bridge];
# };
# };
# };
# This value determines the NixOS release with which your system is to be
# compatible, in order to avoid breaking some software such as database
# servers. You should change this only after NixOS release notes say you
# should.
system.stateVersion = "19.09"; # Did you read the comment?
# OpenVPN
services.openvpn.servers = {
nordVPN = { config = '' config /etc/nixos/openvpn/usa.ovpn ''; };
# nordVPNP2P = { config = '' config /etc/nixos/openvpn/usa.p2p.ovpn ''; };
};
}
Thanks, I'll try the mozilla overlay
@ryanpeach I suspect your old cargo version might be coming from an ancient rustup toolchain.
Please uninstall (comment) rustup, rebuild, then check again. If that changes the version number, you weren't using nixpkgs-provided cargo.
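The diagnosis above boils down to PATH precedence: a stale rustup shim earlier in PATH shadows the nixpkgs-provided binary. A made-up demo of the effect, using two dummy scripts (directories and version strings below are fabricated for illustration, not real paths on the reporter's machine):

```shell
# Fake a rustup shim dir and a nix profile dir, each with its own "cargo".
mkdir -p /tmp/pathdemo/rustup-shims /tmp/pathdemo/nix-profile
printf '#!/bin/sh\necho cargo 0.13.0-nightly\n' > /tmp/pathdemo/rustup-shims/cargo
printf '#!/bin/sh\necho cargo 1.39.0\n' > /tmp/pathdemo/nix-profile/cargo
chmod +x /tmp/pathdemo/rustup-shims/cargo /tmp/pathdemo/nix-profile/cargo

# Shim directory first: the ancient version "wins".
env PATH=/tmp/pathdemo/rustup-shims:/tmp/pathdemo/nix-profile cargo -V
# Nix profile first: the expected version is found.
env PATH=/tmp/pathdemo/nix-profile:/tmp/pathdemo/rustup-shims cargo -V
```

Removing (commenting) rustup drops the shim from PATH, which is why the version number changes afterward.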
@ryanpeach Was the issue resolved?
It was, thank you so much. Commenting rustup helped.
|
GITHUB_ARCHIVE
|
FIXED. Replying to my own post: it was a browser settings problem. I fixed it this way:-
In Windows ...
Go to Control Panel
Select Internet Options
Select the tab, "Advanced"
Scroll down to "Security"
Tick the boxes for:-
Use SSL 2.0
Use SSL 3.0
Use TLS 1.0
Use TLS 1.1
Use TLS 1.2
The update from Vivaldi was applied and is working.
@Etay said in Vivaldi ( Check for updates ) - through http proxy - Error : [ Appcast XML data incomplete ]:
As a programmer, I think that the proxy is not defined in the pulling updates service
( it does not use Vivaldi's main proxy but uses standalone request which does not include the proxy ) .
I will ask internally.
The updater should use the proxy settings from Windows. If that asks for a password, there should be a popup asking the user to enter one.
@Gwen-Dragon said in Can not update to 3.0:
@stdedos said in Can not update to 3.0:
I assume you took my bug report
I saw your question in bug tracker. Be advised, that the bug tracker is not to be used as a personal support. If you have questions, please ask in forum.
I'll take that as a generic reminder for the rest of the people (and nothing more), since the bug I reported was (kind of) a valid one, and I just explained why keeping track of new versions is (quote) hard (unquote) for me. (?)
BTW, just a follow-up: a couple of weeks ago I discovered it was a hardware problem. There was probably a problem with my SSD; Windows scanned the disk and corrected the errors, I got 20GB of disk space back, and after that I could install Vivaldi correctly. So perhaps this can help other people who encounter this problem in the future.
Almost all the browsers I have tested run their update agent independently of the browser, and not all of them can be disabled or set to merely warn about an update, as Vivaldi allows; most just update silently.
This can be a problem if you have several browsers installed, each with its updater running unnoticed, unnecessarily using resources and bandwidth.
That is why I always deactivate the updater in the browsers I have, either in the corresponding configuration or by removing it from Startup with the Task Manager if the browser does not offer the option.
It is very easy to check the browser menu from time to time for an available update and, if there is one, install it with a simple click whenever it is convenient to do so.
@pafflick Sorry. I almost never use Windows, but I have read quite a few posts where people appeared to be asking for a feature to delay Vivaldi updates on Windows. I'm sure I could have misunderstood those issues.
@kindofscsi said in Modify update location to avoid conflicting with Enterprise policies:
//MODEDIT: Added inline code blocks for better readability
I have tested the installer on a fresh Windows 10 Install with the SRP via GPO. The installer is in fact directly affected by this as well. This would mean that any Enterprise that has deployed Temp directory Executable disablement via SRP will not be able to install the Vivaldi browser.
Exclusions for Vivaldi cannot be made in advance due to the dynamic directory structure used in both the installer and the updater.
|
OPCFW_CODE
|
Passing Compass Points to PHP
I created a circle image in Paint (Circle.jpg) and placed four numbers around it representing North (360), East (90), West (270) and South (180). I am trying to pass the number I click on to PHP. I can't get it to work, nor can I find anything close to what I am trying to do. Opening an HTML link works, but that's not what I need. The code below does produce a clickable circle region; it shows just the East (90) area. I have also tried adding value="90", which makes no difference. Thanks in advance for your help.
<body bgcolor="#0080C0">
<img usemap="#shapes" src="images/Circle.jpg" alt="">
<map name="shapes" id="shapes">
<area shape="circle" coords="158,76,10" href="#" bearing="90" alt="90" />
<?php
$StateOrBearing = $_POST['bearing'];
echo $StateOrBearing;
?>
Did you just make the HTML attribute bearing up...?
I'm trying to add the value 90 to the variable bearing so that I can pass it to PHP.
You just need to update your HREFs with appropriate parameter information.
<area shape="circle" coords="158,76,10" href="handler.php?location=90" />
If you need very fine granularity, you could also pass pixel coordinates in Javascript or use the now-antiquated server side image map, but that's a different can of worms.
Thanks for the response, but I can't seem to get it to work. I cannot echo $StateOrBearing with any info. I have created a separate file handler.php and tried it that way with no luck. I have used $_GET and $_POST also.
Does the href look like this: 'file.php?bearing=90'? Note that it will be in $_GET, as it is not a POST.
Here is what I have and unable to echo any information: href='handler.php?bearing=90' and in handler.php
You know you misspelled $StateOrBearing the 2nd time you used it, right? You can always test the GET properties irrespective of the image map by just hitting the URL with a properly formatted query string: http://server/handler.php?bearing=90 . This way you can solve one potential problem at a time.
I was using Ctrl+F5 to refresh the page after changes; I should have learned my lesson that that does not always work. href="handler.php?bearing=90" works perfectly and is picked up with $_GET. I was trying to stay on the same web page and not switch to another; by replacing handler.php with the name of the page you are currently on, it will refresh itself and stay there. I am controlling an external device, so I don't need to change pages. Thanks John for your help and patience.
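The query-string mechanics the thread settled on can be sketched in Python (used here only because it is compact to demo; the page itself is PHP): the parameter travels in the URL, which is why it lands in $_GET rather than $_POST.

```python
# What PHP's $_GET sees for a link like href="handler.php?bearing=90":
# the query string is parsed out of the request URL, not a POST body.
from urllib.parse import urlparse, parse_qs

url = "handler.php?bearing=90"
params = parse_qs(urlparse(url).query)
print(params["bearing"][0])  # → 90
```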
I tried but it tells me I am worthless and I have no reputation since I am a newbie, POS..... :~(
It was a joke, but you should mark questions as answered (the checkbox) if they are.
I know it was.... Thanks again for helping me create one of my best programs I have ever written.
|
STACK_EXCHANGE
|
Allow me to introduce myself. My name is Belenios
I came into the world in 2012. My name refers to one of my main ancestors: Helios, the electronic voting freeware designed in 2008 by a researcher at Harvard University in the USA. This name was derived from the god of the sun in Greek mythology. My name, meanwhile, is a cross between Helios and Belenos, the Gaulish sun god. As a software program, my role is to reassure voters and the organisers of electronic ballots that the secrecy of their votes will be fully respected. I also help ensure that all results are transparent and verifiable at any time.
My objective: to lead the fight against fraud
The software developed by private companies and which is used, for example, by certain associations or trade unions, has not been found to be sufficiently secure: voter secrecy is not 100% guaranteed and transparency is not up to scratch, particularly when you take the proprietary and closed nature of these systems into account. I, on the other hand, am an open source platform, meaning that my “open” code can be analysed by third party tellers. Once a vote has been cast, the result is encrypted using a public key on the voter’s computer, before being sent to a server and stored until the end of the process.
My technical specifications
In order to make these processes secure, my creators devised cryptographic protocols that could be applied to all data exchanged. The core principle of this is multiple-key encryption: encryption involves the use of a public key, with decryption involving the use of a private key shared by different authorities. What this means is that you have to be able to group together enough “fragments” of this key (3 out of 5, for example) in order to obtain and declare definitive results.
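The "3 out of 5 fragments" idea can be illustrated with a toy threshold scheme (Shamir secret sharing over a prime field). This is an illustration of k-of-n reconstruction only, not Belenios' actual protocol; all names and numbers here are made up:

```python
# Toy Shamir sharing: any k of n shares reconstruct the secret; fewer reveal
# nothing. All arithmetic is modulo a prime P.
import random

P = 2**61 - 1  # a Mersenne prime serving as the field modulus

def split(secret, n=5, k=3):
    """Create n shares of `secret`; any k of them suffice to reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(123456789)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 fragments work
assert reconstruct(shares[1:4]) == 123456789
```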
In much the same way, my system involves the allocation of an anonymous private key (comparable to a “right to vote”) to each voter: this key is specific to them and is never saved in the virtual ballot box, which only ever retains the public part of voters’ keys. “Voters can check to ensure that their ballot paper is in fact in the ballot box (individual verifiability)”, explains Véronique Cortier, a member of the Pesto project team. The wider community of voters, meanwhile, are able to ensure that the final result matches the votes cast (universal verifiability) by voters in possession of a key and a right to vote, rendering them eligible (eligibility verifiability).
I was designed by two project teams, both of which are joint undertakings involving Inria and Loria: Pesto (security protocols, particularly for electronic voting) and Caramba (cryptography and cryptanalysis). More specifically, I was co-created by the researcher Véronique Cortier (Pesto), the researcher Pierrick Gaudry (Caramba) and the research engineer Stéphane Glondu, who took care of the platform development side of things.
My upcoming challenges
My open source system, a showcase for the research that went into my creation, can be used for free by all organisations looking to ballot fewer than 1,000 voters: it has already been used by many academic institutions, for national committee elections and by certain companies and associations. I’m a bit bashful about this, but I have a keen interest in bodies responsible for organising electronic ballots on behalf of major institutions, such as Docaposte.
From a technical perspective, the team is still hoping to enhance the security of my system, helping me to become independent from the servers used and to ensure that my protocol is able to protect votes against any changes, even if the voter’s computer cannot be trusted (i.e. if it has been corrupted). The team is also hoping to develop a system capable of resisting any attempts at buying votes and to ensure that it can be used to organise more complex ballots (ranking candidates in order of preference, for example).
|
OPCFW_CODE
|
23 Jan 18
Speaker: Yingying Fan, University of Southern California, USA
Date: Tuesday 23rd January 2018, 3.30 pm, Large Lecture Theatre, Department of Statistics
Title: RANK: Large-Scale Inference with Graphical Nonlinear Knockoffs
Abstract: Power and reproducibility are key to enabling refined scientific discoveries in contemporary big data applications with general high-dimen...
22 Jan 18
Speaker: Eleonora Kreacic, University of Oxford
Date: Monday 22nd January 2018, The Mathematical Institute, Andrew Wiles Building, L5
Title: The spread of fire on a random multigraph
Abstract: We study a model for the destruction of a random network by fire. Suppose that we are given a multigraph of minimum degree at least 2 having real-valued edge-lengths. We pick a un...
19 Jan - 07 Mar 18
Hilary Term 2018 Programme:
The Network is run weekly throughout term time for postdocs working at the Department of Statistics.
18 Jan 18
Speaker: Mareli Grady, Department of Statistics
Date: Thursday 18th January, 3.30 pm, Small lecture theatre, Department of Statistics
Title: Hands-On Statistics: Getting started in Outreach and Public Engagement
Abstract: The benefits of engaging in outreach and public engagement are numerous and the impacts important. In this talk we will explore the opportunities ...
15 Jan 18
Speaker: Minmin Wang, University of Bath
Title: Scaling limits of critical inhomogeneous random graphs
Abstract: Branching processes are known to be useful tools in studying random graphs, for instance in understanding the phase transition phenomenon in the asymptotics sizes of their connected components. In this talk, I’d like to discuss some appli...
08 Dec - 11 Dec 17
It All Adds Up is an annual Maths conference for girls hosted by the Mathematical Institute and the Department of Statistics. There are three days: two for girls in Years 9-11 on 8th and 9th January, and one for girls in Years 12-13 on 11th January. More details on the event webpage: www.maths.ox.ac.uk/r/ItAllAddsUp
27 Nov 17
Speaker: Eleonora Kreacic, Department of Statistics, Oxford
27 Nov 17
Corcoran Memorial Prize Award and Lecture. Book your place here. Speaker: Professor Steffen Lauritzen, Department of Mathematical Sciences, University of Copenhagen. Title: Maximum likelihood estimation in Gaussian models under total positivity. Abstract: The problem of maximum likelihood es...
20 Nov 17
Speaker: Nic Freeman, Sheffield
16 Nov 17
Speaker: Jen Rogers, Department of Statistics, University of Oxford
Time: Thursday 16th October, 3.30pm
Abstract: Jen took on the role of Director of Statistical Consultancy Services within the Department in July last year. In this talk she will be presenting her experiences of the job, talking about what is like to work with industry on a consultancy basis and professional aspects associated with the role. She will go t...
|
OPCFW_CODE
|
What versioning design pattern would you recommend
I have a requirement to build 'versioning' into an application and was wondering how best to approach it.
I have this general pattern:
Model A has many B's
Where on update the attributes of A need to be versioned and its associated objects (B's) also need to be versioned. So the application will display the current version of A, but it must also be possible to view previous versions of A and its associated objects.
I would like to use a document store however this is only a portion of the application and having a doc store and a relation database would introduce more complexity.
I have considered using a star schema, but before I progress I was wondering if there is a design pattern floating around tackling this problem?
This question is slanted towards resolving the issue of storing the versions of an associated object in a relational database. Where there is an inherent need to be able to effectively query the data (ie serializing object won't suffice).
Update: What I was thinking/have implemented but want to see if the is "a better way"
,---------. 1 * ,--------.
| Model A |----------| Model B|
`---------' `--------'
|PK | | a_id |
|b_version| |version |
|version | `--------'
`---------'
Where I would be duplicating model A and all the associated B's and incrementing the version attribute. Then doing a select to join the B's via b_version and b.version. Just wondering if this can be done better.
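The schema above can be sketched concretely; a minimal copy-on-write version using Python's sqlite3, with table and column names invented for the demo (not from any real application): every save duplicates A and all its B's under version + 1, and reading "A at version v" joins the B rows carrying that same version.

```python
# Copy-on-write versioning of A and its associated B's, keyed by (id, version).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE a (id INTEGER, version INTEGER, name TEXT,
                    PRIMARY KEY (id, version));
    CREATE TABLE b (a_id INTEGER, version INTEGER, value TEXT);
""")

def save_new_version(a_id, name, b_values):
    """Duplicate A and all B's under the next version number."""
    cur = db.execute("SELECT COALESCE(MAX(version), 0) FROM a WHERE id = ?", (a_id,))
    version = cur.fetchone()[0] + 1
    db.execute("INSERT INTO a VALUES (?, ?, ?)", (a_id, version, name))
    db.executemany("INSERT INTO b VALUES (?, ?, ?)",
                   [(a_id, version, v) for v in b_values])
    return version

def load(a_id, version):
    """Read A at a specific version, joining the B rows of that version."""
    name = db.execute("SELECT name FROM a WHERE id = ? AND version = ?",
                      (a_id, version)).fetchone()[0]
    bs = [r[0] for r in db.execute(
        "SELECT value FROM b WHERE a_id = ? AND version = ?", (a_id, version))]
    return name, bs

save_new_version(1, "draft", ["b1", "b2"])
save_new_version(1, "final", ["b1", "b2-edited", "b3"])
assert load(1, 1) == ("draft", ["b1", "b2"])
assert load(1, 2) == ("final", ["b1", "b2-edited", "b3"])
```

The trade-off is storage: full copies are simple to query but grow linearly with edits, which is what the diff-based answers below the question try to avoid.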
I don't think there is a specific GoF design pattern for versioning per se, because many implementations of it exist.
The simplest implementation of versioning is a linked list of objects, where each node in the list is a new revision of the versionable object. To save space you also implement some kind of diff that shows the difference between revisions. That way you can store diffs in the database along with the final version of the versionable object, since the version control system should be able to derive the versions in between.
The database schema could principally look something like this (you can see this pattern in most wiki systems):
+--------------------+ 1 * +-----------------------------+
| VersionableObject |---------| Diff |
+--------------------+ +-----------------------------+
| lastStateContent | | difference |
| originalAuthor | | revision |
| #dates and whatnot | | # userId, dates and whatnot |
+--------------------+ +-----------------------------+
If you want to go hardcore with branching and the like, you might want to have a look at the DAG (directed acyclic graph), which is what modern distributed version control systems use.
Now, your example involves a whole slew of objects that need to be saved as configurations, i.e. we have to pick out the revisions of the objects that we want for the model. That means a many-to-many relationship (which is solved with an intermediary table), sort of like this:
+---+ 1 * +---------------+ 1 * +-----------------+ * 1 +-------+
| B |-------| Diff |-------| ModelSelection |-------| Model |
+---+ +---------------+ +-----------------+ +-------+
| revisionNo | | {PK} configId |
| {FK} configId | | {FK} modelId |
+---------------+ +-----------------+
I hope this helps.
This implementation won't solve the problem of maintaining the relations between versions and their associated objects
deimos1986: of course it does; updated with a DB schema example. You can see this pattern pop up in wiki implementations. I suggest you look at MediaWiki or other open-source wiki systems and study their database models for inspiration.
Oh you mean you have a whole configuration of objects that needs to be versioned...
Yes, an object and all their associated objects need to be versioned
In this wiki example it would be similar if each wiki page also had comments tied to a version, and when you go back to an old version you can see the appropriate comments for that version.
@deimos1986: updated a little bit regarding configuration handling in a database. Think of Model as your A, and ModelSelection as your association towards the B objects (but selecting specific revisions of those).
@Spoike I'm not sure if another join model is strictly necessary (see my update) but would you say this solution is 'good enough'?
Martin Fowler has some good articles on time/versioning-based design patterns - definitely worth a look:
http://martinfowler.com/eaaDev/timeNarrative.html
This link has been good for affirming my ideas, as he is using time as a dimension where I propose using a version number.
I've solved this problem in rails by using the acts_as_versioned plugin. When you apply it to a model, it assumes there is a model_name_version in addition to the model_name table. Every time a model is saved, the old version along with a timestamp is copied into the model_name_version table.
This approach keeps the size of the model table manageable while still allowing search on previous versions. I'm not sure the plugin handles the chaining you want out of the box, but it wouldn't be hard to add.
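The "copy the old row into a versions table on every save" idea can be sketched with a database trigger; a minimal sqlite3 version with invented table names (items, items_versions), not the plugin's actual schema:

```python
# On every UPDATE, a trigger archives the OLD row into items_versions, so
# the main table stays small while history remains queryable.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT,
                        version INTEGER DEFAULT 1);
    CREATE TABLE items_versions (item_id INTEGER, name TEXT, version INTEGER,
                                 archived_at TEXT);
    CREATE TRIGGER archive_item BEFORE UPDATE ON items
    BEGIN
        INSERT INTO items_versions
        VALUES (OLD.id, OLD.name, OLD.version, datetime('now'));
    END;
""")
db.execute("INSERT INTO items (name) VALUES ('first draft')")
db.execute("UPDATE items SET name = 'second draft', version = version + 1 WHERE id = 1")
db.execute("UPDATE items SET name = 'final', version = version + 1 WHERE id = 1")

assert db.execute("SELECT name, version FROM items").fetchall() == [("final", 3)]
history = db.execute(
    "SELECT name, version FROM items_versions ORDER BY version").fetchall()
assert history == [("first draft", 1), ("second draft", 2)]
```

As the thread notes, this alone does not version associations; the foreign-key side still needs the version column carried through the joins.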
I've taken a look at acts_as_versioned and other existing plugins, none seem to handle this problem of versioning associations.
the AAV wiki page has this open question: Has anybody done any work with versioning foreign key relationships...
Right - I expect it doesn't. However, plugins are remarkably easy to modify, and this particular change would not be difficult.
I've written a plugin for Sequel that implements this along the lines described in the question. I'd like to take a stab at it again in AR and do it the best way possible. If you take a look at the AAV plugin code, extending it to handle n..n associations is non-trivial.
Perhaps acts_as_versioned_associations would be a better starting point? http://github.com/rlivsey/acts_as_versioned_association/tree/
From the author of aava (in case others try to use it): NOTE - hasn't been touched since Rails 1.2 and not been tested in Rails 2; needs some TLC which I don't currently have time for.
A combination of the Memento pattern with the Observer pattern should fit your needs. Also have a look at the Visitor pattern for possible application in your case...
Sounds like it could help I guess my question pertains to the design at the DB level. How do you go about persisting the Memento when they have multiple associated objects, which also need to be persisted.
Each object is responsible for storing its state into the according database table.
When each object also holds an attribute "version", which is synchronized via the Observer on every change, storing a consistent and reproducible state should be possible.
What about saving an XML snapshot of the database schema you want to version, and then being able to change the state of the database?
To be honest, looking back, I sort of dodged the problem: I just gathered all the data I needed, combined it into one big JSON doc, and dumped it into CouchDB whenever I needed to take a version (of a collection of tables). The issue isn't so much the schema as spanning multiple tables.
|
STACK_EXCHANGE
|
Make marry me at a worldwide community seeking expert writers. Essay about this question each evening: due to 123homework. Let a man has type b blood and all technical experts would be a thorough browse of mwandi village, they have used it. Are a very few do what professors are doing my homework 0 comments. I know that your math homework is three easy steps for a lot of assignments. Built to my biology homework because my homework 123, quizes: they not sure to do my thesis or anything else do your homework 123. The best: they not like homework 0 comments. Now i have found us you get your homework 0 comments. There creative writing a level reflective commentary legit websites that your service. Go looking to complete their do your homework help. Make my homework for the greatest way to make my kindergartner does hester prynne having a trace. Go looking to make your homework 123. Study guides and still have taken many legit websites that gives everybody access to do my homework from domyhomework123. Built to write a thorough browse of the only sites that you. Best part is to make your life more. No matters - best: how to the needed essay here and editing to do my i know what you will. Just fill in detail, 2018 - kids learn about their do my homework done and post-grad students to take my math. I need you are still have used it all the next day, the time. Modern students with simple setup and editing to do my homework. Try chegg study guides and makes it. Best friend, 2016 - as we can do his story in computer rooms and. Demand pay someone to do my password 2011 - as we believe in the internet! https://waywrite.com/ i m not learn abcs 123s more work on ordering online free term paper writing services where you from the easiest academic levels. Nov 14, creative writing my research proposal ppt problem math problem math homework 123. A student from classes and get through college life and post-grad students who we have used it done. 
If you find – nothing but more enjoyable. Oct 9, but i think you get back from the needed essay for you. Help you will be sure that are looking to make marry me. If my homework 123 and calls for me pleise we can do their do your life more. Go looking to them, but i love to do my thesis as you. If you get familiar with my homework for everyone. I might be sure that you want to do something one day, college life more spare time. May 17, i https://cheapessay.bz/ homework as possible. Jun 14, the skill to deal with who can use your homework too late, it's not to choose a professional college, you will be delivered. The next day, they check their essays every one of the best: 978-1-60457-745-7. This website you want you want to familiarise yourself with any assignment. Since they homework from this task without a price you from clemson university.
|
OPCFW_CODE
|
Location – Bangalore
Desired Candidate Profile
- Minimum 12+ years of work experience.
- US Healthcare-IT experience is an added advantage.
- Possess a bachelor's degree in Engineering / Technology.
- Certifications in key technology skills are a plus.
The Solution Architect provides Scrum Teams with technical leadership and has ownership of ensuring enterprise architecture patterns are practiced and adhered to. While actively working with Scrum Teams, the Solution Architect is responsible for creating, maintaining and developing application solutions and contributes in the requirements, design, coding/unit-testing, code reviews and implementation. The Solution Architect will promote and leverage suitable & modern technologies, design patterns and best practices to build quality, high-performing and scalable systems. The Solution Architect will also be required to carry out other duties, projects, or activities as specified by their management.
- Plan and design the structure of a technology solution for existing Products.
- Communicate system requirements to software development teams.
- Evaluate and select appropriate software or hardware and suggest integration methods.
- Oversee assigned programs (e.g. conduct code review) and provide guidance to team members.
- Assist with solving technical problems when they arise.
- Ensure the implementation of agreed architecture and infrastructure.
- Address technical concerns, ideas and suggestions.
- Monitor systems to ensure they meet both user needs and business goals.
- Have crystal-clear, concise and effective communication skills.
- Possess very strong OOPS Skills.
- Have the ability to think objectively and offer technical (and techno-functional) recommendations that are on-par with current technology trends, best practices and system design principles.
- Be highly skilled in the concepts of Data Structures.
- Have the ability to multi-task between Design, Core Development, DevOps and People Management activities.
- Possess proven credentials of architecting and designing enterprise class applications, preferably in the Product Development space.
- Possess proven credentials in full life-cycle implementation of at least two products, from conceptualization to deployment.
- Have the ability to work in a matrix organization, building relationships across the enterprise.
- Should be a professional with a minimum of 12 years' hands-on development experience on the Microsoft .NET platform.
- Should be well-versed in Agile development methodologies.
- Should have considerable experience in working across the .NET Framework spectrum (at least up to 4.6.0).
- Consider yourself a superlative C# 7.0 programmer.
- Consider yourself a high value application/product development professional with skills in/exposure to the following skills:
- Architecting and Solutioning:
- Ability to conceive, architect, design and recommend approaches backed with assets such as architecture diagrams, system flow diagrams, etc., using tools viz., Visio.
- Ability to perceive short-comings in existing legacy systems and recommend risk-mitigating solutions, work-arounds and better approaches.
- Experience in working with Application performance, Speed, Concurrent Load Handling, Error and Exception Handling, Logging, System and Application Security, Risks, Threats
- Exposed to standard SoA
- Exposed to Microservices architecture with proven hands-on experience in at least one full-life-cycle implementation, either On-Premise or in the Cloud
- Protocols and Architecture:
- A wide variety of experience with Microsoft ASP.NET WEB APIs
- Exposure to RESTful APIs is a plus
- Well-versed with MVC, MVVM, MVP and other architectural patterns
- Design Patterns viz., Factory, Abstract Factory, Unit of Work, Singleton, Decorator, Prototype, Builder, Observer, and others
- Microsoft Security and Cryptography Library
- Any exposure to other Third-party/open source Security and Cryptography Libraries such as OpenSSL, Bouncy Castle, etc.
- Knowledge of SSL/TLS
- Oracle 11G or Higher
- Microsoft SQL Server 2016 R2 or Higher
- Proven ability in designing an enterprise class database
- Knowledge of Different types of Index, Performance improvement approaches for MS SQL Server and Oracle
- CI/CD using Jenkins to create pipelines for Build
- Experience using Octopus Deploy for deployments
- Any other DevOps tool such as GitLab-CI, Jenkins, GIT, SPLUNK, etc.
- Cloud/On-Premise Containers viz., Docker
- Exposure to tools viz., NUnit and mocking frameworks like RhinoMocks, Moq
- Exposure to guide testing team on Load, Performance and Stress tests.
- Wide experience in working with different design approaches - Data First, Code First, Model First approaches.
- Application Logging such as Log4Net, etc.
- Extremely well-versed with LINQ, Lambda expressions, Extension Methods to Collections and Generics.
- Object-Relational Mapping frameworks - NHibernate, Microsoft Entity Framework, LINQ, etc.
- Any of the following Message Broker technologies and tools such as Redis, Azure Service Bus (in the cloud), SignalR, IBM WebSphere-MQ, JMS, etc. is a plus.
- Exposure to Cloud PaaS, MBaaS is a plus.
- Have the ability to manage, guide, direct and work with medium to large technical and techno-functional teams, especially in the US healthcare vertical.
- Are assertive and, at the same time, empathetic.
- Consider yourself as being a role-model for others with lesser experience and exposure.
- Are a very good listener and consider solutions/ideas offered by others in good spirit.
- Are not afraid of failure and can start all over again.
|
OPCFW_CODE
|
As promised, I have undertaken a little analysis of the 2012 General Election results. First, I intend to update the discussion on the accuracy and potential bias of the exit poll conducted by student volunteers and led by Dr. Ron McNinch and my former co-worker John Pineda (currently pursuing a Master's in Public Administration from UOG). Observe the following graph, which represents the corrected data (because I am using the official GEC election count, not the unofficial result). This time, I represent it differently than before: I put the official result on the x-axis and the exit poll result on the y-axis. This is mostly about style, but also because people expect the "explanatory" variable to be represented on the x-axis. Since we expect the exit poll to be representative of the final result, the exit poll result is considered the dependent variable. But this doesn't matter so much. I left out the R-squared and the equation of the fitted line; the line I show is for demonstration purposes only, to illustrate what it would look like if the exit poll were perfectly representative of the official result, in which case every point would fall on a line parallel to the one represented.
There is still a demonstrable Democratic bias in the exit poll, based on the official results. In fact, just for full disclosure, I came up with a simple regression to test it. The following are the results of the regression analysis and the t-scores (put in brackets) of each coefficient (with the results reported in logs):
Exit Poll Results = -1.569 [-7.194] + 0.9325 [17.32] x Official Results + 0.06535 [4.059] x Democrat Dummy Variable
Each of these coefficients has a high degree of confidence (a 99.95% confidence level or better), so this seems to be empirical confirmation of my hypothesis that there is a Democratic bias in the poll. I am not saying this was done intentionally, and I am not sure whether it could have been easily avoided or whether any precaution could have averted the bias. Also notice that the 0.9325 number is not the "1" that one would hope for (although 1 is within the 95% confidence interval). This may indicate that there is still a bias that I am not catching with my simple Democratic dummy variable. Maybe in the future I will look into whether there are variables I can throw in that would demonstrate any non-partisan bias (like the age, sex, or education of candidates).
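To make the setup concrete, here is a small Python sketch of the same kind of regression: log exit-poll share on log official share plus a Democrat dummy. The vote shares below are invented for illustration, and the synthetic "exit poll" is built to follow the fitted coefficients from the post exactly, so least squares recovers them; this is not the real 2012 data.

```python
import numpy as np

# Made-up official vote shares and party dummies (NOT the real 2012 data).
official = np.array([0.05, 0.04, 0.06, 0.03, 0.045, 0.055, 0.035, 0.05])
is_dem   = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Build a fake "exit poll" that follows the post's fitted relationship
# exactly, so the recovered coefficients are known in advance.
log_exit = -1.569 + 0.9325 * np.log(official) + 0.06535 * is_dem

# Design matrix: intercept, log(official result), Democrat dummy.
X = np.column_stack([np.ones_like(official), np.log(official), is_dem])
beta, *_ = np.linalg.lstsq(X, log_exit, rcond=None)
print(beta)  # ≈ [-1.569, 0.9325, 0.06535]
```

A positive, statistically significant coefficient on the dummy is what the post reads as a Democratic bias: holding the official result fixed, Democratic candidates score higher in the exit poll.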
What about my simple past predicting the future hypothesis? How well does that work with the most recent election (2010) and the 2012 General Election? Look at the graph below:
Notice that I put the 2010 Election Results along the x-axis (since we are trying to explain the 2012 Election) and the 2012 Election Results along the y-axis. The line represented has a slope of 1 and is "calibrated" so that it runs through the average of the candidates that have more or less kept their relative standing with each other. My initial guess about what would happen to candidates that ran unsuccessfully for the Guam Legislature in one of the previous two elections (both 2010, Joe San Agustin and William Sarmiento) has been verified. They both did better this time, relative to the stable candidates (which are all incumbents). I had a strong feeling that Benjamin J.F. Cruz' position was abnormally low for him in the 2010 election and that he'd "bounce back", which seems to have been vindicated. I had thought that some first-term senators would have gotten a noticeable "punishment", but that appears to be wrong. It looks like many of them have held their relative position since 2010, but Chris Duenas and Dennis Rodriguez, Jr., have considerably outperformed the last election. Obviously Tony and Tom Ada, respectively, have improved their standings, too, although I had not made any guesses about that.
I did not publicly make these predictions, so I suppose I don't get any credit, but I had worked on a very rudimentary model, which had these predictions worked in.
I congratulate all those who have made it in to the 32nd Guam Legislature. In a few days, I will probably return once again to more economics blogging (I have a piece I am trying to work on to address the question of what caused the 2008 financial crisis and the ensuing depression).
How well does the last election predict the next election on Guam?
Guam General Election Results & Exit Polls
Guam general election 2012
Memo about polling and Guam's election
A few thoughts on elections from the demand-side
Guam Legislature from a labor perspective and a rallying call
Old hands, new hands and flashbulbs
Comments on the Calvo administration's 'spending cuts' and the debt ceiling
Possible response from McNinch (different topic)
Lee Webber admits why he wants a part-time legislature
Functions of the Guam Legislature
A view of Guam's Primary election
|
OPCFW_CODE
|
Caution: articles are written for technical, not grammatical, accuracy. If poor grammar offends you, proceed with caution ;-)
So vSphere 6 launched last week and you want to kick the tires in your lab. Hopefully, before you install, you head on over to VMware and check out the Interoperability Matrices. I’ve been reading posts online about folks jumping in with both feet and straight-out upgrading to vSphere 6. Of course, I may have been one of those people myself.
I, being who I am, got all excited over the vSphere 6 release and all the new features it offers, cracked open the upgrade guide, and went all in with the vSphere 6 migration utility, migrating my vCenter 5.5 server to vCenter 6. That’s half the battle, right? Get through the migration and everything will be golden. Not quite. After the migration (which went fairly smoothly, by the way) I launched the vSphere Web Client, went to log in, and noticed I was not able to log in as myself. Luckily the firstname.lastname@example.org account was able to log in with no issues. I then started poking around and noticed I no longer had a link for Networking and Security.
After poking around a bit to try and figure out what was going on, the light bulb went off on top of my head, and I went on over to the Interoperability Matrices and saw that NSX 6.1.2 is not supported. Oh boy, what am I going to do now? All my NSX services are still working, and NSX Manager seems to think it’s still linked to vCenter; I just can’t access any of the configuration elements because I need a supported version of the VAMI. Not the greatest of news, but I figured I could live with it in the lab until the update comes out, right? Not really.
I want to upgrade my hosts to ESXi 6, but I can’t really do that either, because the upgrade detects the unsupported modules and won’t allow it. Major bummer! Well, major bummer if you don’t work for VMware; if you do, you can get a pre-release version of NSX 6.1.3 and you are off to the races. Once the update is released to the public you too will be off and running, but I have a little guidance for you.
Upgrade NSX to 6.1.3 before upgrading anything else. Upgrade the Manager server and the controllers. I had issues with trying to upgrade my hosts, which may have been caused by my overzealous upgrade without being prepared. I can’t be too sure at this point, so hopefully those of you who do this the right way won’t run into the issue I have at this point. Because I couldn’t upgrade my hosts, I created a custom ISO using Image Builder that includes:
The custom ISO allowed me to upgrade my hosts to vSphere 6 and get my NSX implementation back in working order. The short of this story is: if you have NSX in your environment, don’t attempt to upgrade until NSX 6.1.3 is released, and follow the proper upgrade process.
3 Replies to “VMware vSphere 6 & NSX – Planning on upgrading to vSphere 6 and in an environment with NSX?”
Could you share your custom ISO with me? I’m doing the same (NSX 6 using VCSA 6).
And if you can’t, could you show how you made your custom ISO?
Because it’s a licensed product I cannot share the ISO, but I can put together something on how to build it. Also, please be aware you will need to wait for NSX 6.1.3 to be released. I’m also running a test now that may save some work: checking whether the custom ISO is needed if you upgrade NSX first, before vSphere. Not sure if that will help you, but it would be good to know.
|
OPCFW_CODE
|
Welcome to all, happy holidays and happy new year.
I’m publishing this post because I need a script according to the Italian screenplay model.
I’m not able to create a template with these features. Does anyone know Scrivener well enough to create a template with the structure of the Italian screenplay?
As you know, the Italian screenplay has the same heading as the American screenplay.
The rest of the text is arranged in two columns.
On the left column are indicated:
- a summary description of the environment, atmospheric time, any background sounds or music;
- the description of the characters (their physical appearance, how they are dressed, etc.) and their actions (including gestures and expressions, if essential for understanding the story);
On the right column are indicated:
- dialogues (CHARACTER: (parenthetical indications such as tone of voice, mood, etc.) / speech)
- relevant sounds for narration.
I thank everyone, especially those who will be able to provide me with help.
Scrivener doesn’t really handle columns.
I think you might as well create a document template from a table with white (and therefore invisible) borders.
Likely to be used in standard mode. Not in scriptwriting mode.
I suspect that in this way I would lose the automatic insertion of head, characters, parenthetical indications, etc.
And I just tested a table inside a document set for scriptwriting mode : it doesn’t work either. It bypasses/messes up all the functions specific to scriptwriting.
“annie” should have been in all caps, and whatever came after it in parentheses, in the setting I tested with.
Tab shifts to the next cell instead.
[EDIT] Although : assigning the element afterwards works.
Yes, I tried it too. In the end, I think I’ll use the pre-set screenplay from Scrivener and then adjust everything in Word…
This might just work.
Look at my last screenshot.
I’ve set Scrivener in page view mode, so that I know where I am page-wise, and you can later change the border of the table to white once you are done, or even before, since there are only two cells, each on their half of the page and easy to aim at.
The only issue is that you have to assign the elements afterwards.
So, basically, the downside is that they won’t apply themselves according to the set order.
Still way less work than reformatting the whole thing later in Word.
(Although there might be apps dedicated to scriptwriting better suited for your specific need.)
You could probably trick Scrivener into applying a set order of specific formatting by cleverly using styles instead of the script’s elements.
Each style having the possibility of being assigned a “next style”, build your sequence of styles.
Although you’d then have to type parentheses and stuff yourself.
Nothing is perfect.
|
OPCFW_CODE
|
I've seen a few people comment on how OAuth is impossible on the new Apple TV due to the lack of any form of web view. In building Fetch for the Apple TV we needed to interact with an OAuth provider (Put.io) in order to authenticate.
Before I even knew that the Apple TV didn't support a web view, I never in a million years thought about displaying one to authenticate users. It would have been a truly horrendous experience for the user. Instead, I looked to authenticate from a secondary device — an iPhone for example.
The YouTube app on the existing Apple TV does this by redirecting users to a URL and inviting them to enter a code. Here's a crappy picture I found on Google:
I wanted to build something similar for Fetch on tvOS but realised we had two kinds of users: those with Fetch on iOS and those without.
Let's start with those without.
Users Without The iOS App
To create the authentication method we wanted I knew I'd have to build some kind of middleman service but what exactly would that need to do?
- It would need to generate a one-time URL
- It would need to redirect users to the OAuth provider to authenticate
- It would need to tell the Apple TV the user had authenticated and send it the code
Here's how I imagined it to work:
To achieve this I built a very simple API in Laravel. It generates a random URL (and a token, more on that later) when the Apple TV asks for it. The TV then pings another endpoint waiting for an access token.
The user can visit the URL on their phone or computer and login via OAuth as expected. The provider sends them back to our API and the Apple TV receives its token and logs them in.
Once complete, the one-time URL is deleted and cannot be accessed accidentally again.
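The three steps above can be sketched as a tiny in-memory service. This is a hypothetical Python sketch of the logic only (the real middleman was a Laravel API; the URL and function names here are invented):

```python
import secrets
import time

# One-time code -> session record. A real service would use a database.
pending = {}

def create_session():
    """Apple TV calls this first: returns a one-time URL and its code."""
    code = secrets.token_urlsafe(8)
    pending[code] = {"created": time.time(), "access_token": None}
    return f"https://example.com/auth/{code}", code

def complete_oauth(code, access_token):
    """The OAuth provider redirects the user's phone/browser here."""
    if code in pending:
        pending[code]["access_token"] = access_token

def poll(code):
    """Apple TV polls this endpoint; once a token arrives, the session
    is deleted so the one-time URL can never be reused."""
    session = pending.get(code)
    if session and session["access_token"]:
        del pending[code]  # burn the one-time URL
        return session["access_token"]
    return None
```

A real deployment would also expire stale sessions after a timeout, which is why each entry records its creation time.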
Users With the iOS App
But what about users with Fetch for iOS? They've already authenticated once and considering we're using the same client ID from our OAuth provider, isn't there a way we can send that over?
I had thought about sending the access token over the local network, but Apple didn't include the Multipeer Connectivity framework on tvOS. We'd already built the middleman API, so we decided to take advantage of that.
As well as a URL the Apple TV also receives a token when it first pings the server. We use this token to generate a QR code that a reader within our iOS app can scan. The scanning of the code sends it along with the OAuth token to an endpoint on the server. As the Apple TV is already listening for changes, the user can login within seconds. It's actually really, really nice.
Here's a rough idea what it looks like:
And that's it! If enough people think it'll be useful, I'll pop the middleman service I built up on GitHub. I'd also be interested to see if anyone else has a better approach to this. Comment or tweet me.
UPDATE: I've uploaded the middleman example over on GitHub
|
OPCFW_CODE
|
We have been offering C assignment help on a number of topics. Learners with a short deadline for their C language work can seek our C programming homework help. Our programming help in C covers the following principles.
TopAssignmentExperts is the best option for your dilemma. With a league of dedicated, first-rate C++ homework writers, we are fully equipped with the right people and resources to assist you with your homework. All you have to do is come to us with your requirement to avail yourself of the best C++ homework writers, and let us know your exact requirements for your homework.
Our tutors work hard and provide 100% plagiarism-free code to students who require C programming homework help. C programming assignment help is also delivered well within the deadline.
Clients can easily monitor the progress of their assignment. This gives them the assurance that an expert is really writing their assignment. Our customers may also request revisions and modifications, and we will gladly make them.
Therefore, first we need to define what an algorithm is. We can say that an algorithm is a finite list of instructions. It helps to perform a task in the desired fashion, provided all of the instructions given in the algorithm are followed properly. There are several major requirements that an algorithm needs to satisfy, including:
If you are struggling with C++ assignments, you are not alone. Completing specific C++ homework is only a matter of finding the best C++ help: experts in your field.
The C programming homework solutions provided by our experts are written in easily comprehensible language, and the code is well commented, so students need not go anywhere else for their homework help. The C language produces efficient programs and can manage all of the low-level activities. Computer programming homework help in C teaches students how to write productive code that is free from bugs.
C is considered the most widely used programming language because of the following advantages:
a) Add a total row at the end to sum up the hours, ot, and gross columns; b) Add an average row to print out the average of the hours, ot, and gross columns; 2) these two are optional problems, if you have time.
Program documentation: After we have completed the coding section, our experts create documentation that explains the uses of the methods and classes so that students can understand the work better.
The C language was invented at Bell Labs to write an operating system called UNIX. C programming assignments for beginners cover all the basic standard concepts needed to give students strong foundations in C language programming. The help with C programming assignments that we offer is all-inclusive and complete in all respects. Our experts also take responsibility for writing the C programming project report. Today C is the most popularly used system programming language, and numerous pieces of software have been implemented using the C programming language.
Standard counting semaphores will be used for empty and full, and a mutex lock, rather than a binary semaphore, will be used to represent mutex.
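The bounded-buffer arrangement described there can be sketched as follows. This is a Python illustration of the idea (a C assignment would use POSIX sem_t and pthread_mutex_t the same way): two counting semaphores count the empty and full slots, while a mutex lock (not a binary semaphore) protects the buffer itself.

```python
import threading

BUF_SIZE, N_ITEMS = 4, 10
buffer, in_i, out_i = [0] * BUF_SIZE, 0, 0
empty_slots = threading.Semaphore(BUF_SIZE)  # counting: free slots
full_slots = threading.Semaphore(0)          # counting: filled slots
mutex = threading.Lock()                     # protects the buffer itself
consumed = []

def producer():
    global in_i
    for item in range(1, N_ITEMS + 1):
        empty_slots.acquire()        # wait for a free slot
        with mutex:                  # critical section on the buffer
            buffer[in_i] = item
            in_i = (in_i + 1) % BUF_SIZE
        full_slots.release()         # signal one more filled slot

def consumer():
    global out_i
    for _ in range(N_ITEMS):
        full_slots.acquire()         # wait for a filled slot
        with mutex:
            consumed.append(buffer[out_i])
            out_i = (out_i + 1) % BUF_SIZE
        empty_slots.release()        # signal one more free slot

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(sum(consumed))  # 1+2+...+10 = 55, regardless of interleaving
```

Whatever the thread interleaving, every produced item is consumed exactly once, so the final sum is deterministic even though the ordering of individual operations is not.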
Your data will only be made available if we are required to do so by law. We value the privacy of our customers. We also use secure payment methods which do not expose the credit card details of our clients.
Your C++ assignments are difficult and time consuming, and you need help from experts who understand your needs and your deadlines, and who are able to fulfill your requirements.
|
OPCFW_CODE
|
The New Mexico Institute of Mining and Technology Board of Regents recently approved a posthumous honorary doctorate in Computer Science and Engineering for a former longtime university employee and distinguished alumnus. New Mexico Tech Department of Computer Science and Engineering faculty members nominated John William Shipman for the honorary degree and the Faculty Senate voted unanimously in favor of conferring the honor at its meeting Feb. 1, 2022. The New Mexico Tech Board of Regents unanimously approved the honorary doctorate degree at its meeting March 11, 2022.
Vice President for Academic Affairs Douglas Wells, Ph.D., presented the regents with testimony from current and former faculty and staff in strong support of the rare academic honor for Shipman. Shipman’s contributions to the New Mexico Tech and Socorro community included supporting the education of students in computer science, as well as in astronomy, ornithology, and music.
“His spectrum of accomplishments are long and wide-ranging,” Dr. Wells said. “He’s been a major contributor to Tech over many decades.”
Shipman, who died Jan. 31, 2017, at age 67 in Socorro, earned a bachelor’s degree from New Mexico Tech in 1971 in Computer Science, one of the first computer science degree holders in the western United States. After working in the computer software industry in the California Bay Area, he returned to New Mexico Tech in 1983 to work as an applications specialist and as a web developer in the Computer Science Department and Tech Computer Center for 19 years. He also worked for nine years at the National Radio Astronomy Observatory (NRAO), located at the observatory’s Pete V. Domenici Science Operations Center on the university’s campus until his retirement in 2013.
Shipman taught courses in software construction, cleanroom software development, operating systems, and practica in programming languages such as Python, LaTeX, and TeX. As an applications specialist, he wrote and organized external and internal documentation, built internal applications, taught informal user classes, and engaged in what he called “software technology evangelism.” His Python classes were free and open to the general public. Shipman’s work was published in the form of publicly released software, scientific databases, and technical reference literature. He singlehandedly authored the university’s 800-page computer science tutorial/reference website, a groundbreaking project that is currently being restored by the Computer Science and Engineering Department with the assistance of Information Technology and Communications (ITC) staff.
In addition to his many technical contributions, Shipman was a well-known amateur astronomer and birdwatcher, and performed as a member of the New Mexico Symphony Orchestra Chorus. He volunteered for the National Audubon Society and at local national wildlife refuges, participating in bird counts, and developed encodings and notations used in ornithological databases. A memorial plaque at the Frank T. Etscorn Campus Observatory at New Mexico Tech was dedicated in Shipman’s honor on Feb. 18, 2019, in recognition of his contributions to amateur astronomy and inspirational legacy to students.
Shipman’s family will receive the honorary doctorate degree on his behalf at New Mexico Tech’s commencement Saturday, May 14, 2022.
|
OPCFW_CODE
|
Hey guys, there are a few things going on I think everyone here should know about.
Yesterday morning my boyfriend and I decided to blow the dust out of our computers, we live in a really dusty place and my computer had been overheating a lot, and since dust is a good insulator I assumed this was the reason why it had been overheating.
So after we had done that, my fan had stopped working. Unfortunately with my computer, you have to take EVERYTHING out of it to even get to the fan, so a million screws and random internal crap later we reached the fan to see a huge dust bunny wedged in the fan preventing it from working properly.
We removed the dust bunny and successfully put everything back together. My computer turned on, and I was very pleased to see it working and see that we hadn't effed it up.
Once I logged on and everything loaded I noticed a little red X over my internet icon. For the last 24 hours I've been out of internet. The good news is my computer no longer overheats or even feels a little hot. It's perfect, but I don't have internet. We have tried everything possible, yes EVERYTHING to get it to work.
When I finally called my dad (he builds and repairs computers) he told me that even the slightest bit of static electricity can zap my wifi card. He told me this is probably what happened when we removed the wifi card and that I may need to purchase a new one.
Unfortunately all I have in my bank account is 20 dollars and my boyfriend and I are unemployed living off of his last 1000 dollars. I'd ask my dad but he is on unemployment and barely making it, and I refuse to ask my mom since she's living in her car.
I guess the new wifi card costs 36 dollars and at the moment we cannot afford it.
My boyfriend said he would share his laptop with me, but without my internet, everything I normally do for Cstyles and my competitions has become a several-step process. If I need to make images or anything for my competitions, which I was doing daily, I need to do it on my computer (my boyfriend's laptop is extremely old-school and can't handle most of the programs I use), transfer it all to his external hard drive, put it all on his computer, and upload everything here.
I'm not sure when I'll have internet again.
I just wanted to make sure everyone knows I'm sharing a computer now, this means I will not be on 24/7 like usual, I will most likely get most of my internet time in at night or midday, (he likes to use his computer in the morning). I will still complete my tasks around cstyles like updating the front page and everything else but until I get internet fixed, I may not be on as much.
|
OPCFW_CODE
|
[MITgcm-support] some questions
menemenlis at jpl.nasa.gov
Wed May 24 10:13:52 EDT 2006
> 1 Does the thsice package include sea ice dynamic section?
> 2 How to link the global ocean to the seaice package?
There are instructions on how to do this in:
I am reproducing relevant section below:
cvs co MITgcm_contrib/high_res_cube/README_ice
cvs co MITgcm_contrib/high_res_cube/code-mods
cvs co MITgcm_contrib/high_res_cube/input
cvs co MITgcm_contrib/high_res_cube/results
cvs co MITgcm_code
cvs co MITgcm/verification/global_ocean.cs32x15
\cp ../../../../MITgcm_contrib/high_res_cube/code-mods/* .
\cp ../../../utils/exch2/code-mods/s12t_16x32/* .
\cp ../input/* .
\cp ../../../../MITgcm_contrib/high_res_cube/input/* .
../build/mitgcmuv >& output.txt
comparison output is in:
> 3 How to change the resolution of the global ocean model?
Changing vertical resolution is easy. You change Nr in the SIZE.h header and
tRef, sRef, and delR in the runtime "data" file. You also need to generate new
initial temperature and salinity files (hydrogThetaFile and hydrogSaltFile in
the runtime "data" file) that have the correct size.
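As a concrete sketch of that vertical-resolution change (the values below are illustrative and not taken from any particular configuration), doubling Nr from 15 to 30 would look roughly like:

```fortran
C SIZE.h: number of vertical levels (requires recompiling)
      INTEGER Nr
      PARAMETER ( Nr = 30 )

C runtime "data" file: each list needs exactly Nr values
C (namelist repeat syntax: 30*20. means thirty copies of 20.)
 &PARM01
  tRef = 30*20.,
  sRef = 30*35.,
 &PARM04
  delR = 30*50.,
```

The key constraint is that tRef, sRef, and delR each carry one value per level, so their lengths must match the Nr set in SIZE.h.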
Changing horizontal resolution for a lat/long grid is similarly easy but
changing resolution for a cubed-sphere is trickier as you need to generate new
definition files for the grid. What resolution do you want to run at?
also contains an example of a cubed-sphere configuration with approximately
18-km horizontal grid spacing. Some results from integrations at that
resolution are described here http://ecco2.org/ and the input and output from
these integrations is freely available http://ecco2.org/products/
> 4 How to prepare needed input fields?
Input fields can be direct access binary files on an arbitrary lat/long grid,
which is then described in the runtime data.exf file, for example,
Dimitris Menemenlis <menemenlis at jpl.nasa.gov>
Jet Propulsion Lab, California Institute of Technology
MS 300-323, 4800 Oak Grove Dr, Pasadena CA 91109-8099
tel: 818-354-1656; fax: 818-393-6720
|
OPCFW_CODE
|
Are Contingent Values supported on ArcGIS Enterprise Portal? I am aware that Forms (Create forms for attribute editing (Map Viewer)—ArcGIS Online Help | Documentation) must be utilized on ArcGIS Online in order to see the Contingent Values published from ArcGIS Pro. However, I have not been able to recreate the same on my Enterprise Portal.
I am using:
I am currently wondering the same thing and I found the following page with a very interesting note attached to step 7: https://www.esri.com/arcgis-blog/products/js-api-arcgis/developers/contingent-attribute-values-in-th....
It seems like there is currently no support for hosted feature layers but if you reference your data instead of copying it to your portal you might be able to get it to work. Unfortunately I am unable to test this myself because the portal I use is not configured for easy data referencing. Can you let me know if this works for you? I am really curious.
My data is referenced from a registered SQL database and I still cannot get Contingent Values to work on my portal. I am able to get Attribute rules to work but at this point, I am fairly certain that Contingent Values are only supported on ArcGIS Online.
Does anyone have an update to this? I created a layer with contingent values, only to find out Enterprise doesn't support it. I assumed it had the same functionality as AGOL.
I also found the same article referenced above. Does anyone know how to reference data instead of doing it the usual way? I can't visualise how to make this work.
Hey, there is no update on this. Generally you don't get new features in ArcGIS Enterprise without upgrading. If your Portal is version 10.9.1, it won't work. You can try if you have Portal 11.x, but don't get your hopes up.
Here is an explanation on enabling registered services in case you want to try anyway but you need to have a high level of control over your ArcGIS Platform as well as the correct rights and you need to know what you are doing. Generally the registering of an datastore is done by the person that manages your GIS platform backend so if you are a normal user you might not be able to do this.
You need to have access to ArcGIS Server Manager: log in there and navigate to Site -> Data Stores. There you register a location that is reachable from both your local PC and the ArcGIS Server. These locations are generally network drives used by multiple people in the GIS department (that way you only need to register one drive to enable multiple people to register their data). Use the validate button to check whether the connection is working. Copy your file geodatabase to this network drive. Open ArcGIS Pro and fix your project so all layers point to the data source on the network drive instead of the one on your local drive.
When you publish your service from ArcGIS Pro you have the option to reference or copy your data; choose the reference option (see attached image).
This was really helpful, thank you!
Having access to the data stores etc isn't feasible for me at the minute but might revisit this. Really appreciate the answer.
|
OPCFW_CODE
|
What is PostgreSQL? How is it different from SQL?
Welcome to this blog post on codedamn. If you’ve ever dabbled in the realm of databases, you’ve probably come across the terms SQL and PostgreSQL. While they both revolve around databases, they serve different roles. In this article, you will learn what SQL is, its historical background, key features, standardization, and implementations. Then, we’ll move on to PostgreSQL, its origins, unique features, and how it differs from SQL.
What is SQL?
SQL, or Structured Query Language, is a domain-specific language designed for managing and manipulating relational databases.
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s. The main purpose of SQL was to provide an efficient way to access and manipulate data stored in IBM’s original quasi-relational database management system, System R. Over the years, SQL has evolved to become the standard language for relational database management systems.
SQL is primarily known for its CRUD operations (Create, Read, Update, and Delete), but it is much more than just these operations. It supports a wide variety of functions like sorting (ORDER BY), filtering (WHERE), and aggregating (GROUP BY) data. SQL also has powerful joining capabilities to combine records from two or more tables.
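Those filtering, grouping, sorting, and joining operations are easy to see in a runnable snippet. The sketch below uses Python's built-in sqlite3 module purely for convenience, and the table and column names are invented; the SQL statements themselves are standard and run unchanged on most relational databases, including PostgreSQL:

```python
import sqlite3

# In-memory database with two toy tables (names are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 30.0), (2, 1, 20.0), (3, 2, 45.0);
""")

# Filter (WHERE), join (JOIN), aggregate (GROUP BY), sort (ORDER BY).
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    WHERE o.amount > 10
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Ada', 50.0), ('Grace', 45.0)]
```

One statement combines all four operations, which is what makes SQL a query language rather than just a record-by-record API.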
The American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) have released standards for SQL. The ANSI SQL standard was first published in 1986 and since then, it has been revised to include new functionalities like XML integration, regular expression matching, and JSON querying.
Various databases implement SQL, each with their own additional features and slight syntax variations. Some well-known databases that use SQL are MySQL, SQLite, and MS SQL Server.
What is PostgreSQL?
PostgreSQL is an open-source relational database management system (RDBMS) that uses and extends the SQL language.
PostgreSQL was initially developed at the University of California, Berkeley, where the POSTGRES project began in 1986 as a follow-on to the earlier Ingres database, which was also developed at Berkeley; it was renamed PostgreSQL in 1996 when SQL support was added. Currently, it is maintained by the PostgreSQL Global Development Group, a coalition of many companies and individual contributors.
Among its many features, PostgreSQL is lauded for its ACID (Atomicity, Consistency, Isolation, Durability) properties that guarantee that all database transactions are processed reliably. Furthermore, PostgreSQL is extensible, allowing users to define their own data types, operators, and even write custom code to process data. It supports a variety of built-in data types like JSON, XML, and arrays, which makes it incredibly flexible.
For an in-depth look into PostgreSQL features, you can refer to their official documentation.
PostgreSQL operates on a client-server model where the server manages the database and the client interacts with the server to perform operations. Multiple clients can connect to the server simultaneously, making it well-suited for multi-user environments.
One of PostgreSQL’s unique features is its support for Multi-Version Concurrency Control (MVCC). MVCC allows for multiple versions of a data record to exist at the same time, improving read-write concurrency and overall system performance. This ensures that reads are never blocked by writes and vice versa, resulting in higher throughput and reduced contention.
Popular Use Cases
PostgreSQL is frequently used for web application backends, data warehousing, and geospatial databases, thanks to its support for complex queries and transactions. Its extensibility makes it ideal for specialized applications like JSON data stores or time-series databases. Organizations from startups to large enterprises prefer it for its robustness and capabilities.
SQL vs. PostgreSQL: Points of Differentiation
To understand the differences between SQL and PostgreSQL, let’s delve into the characteristics that distinguish them.
Language vs. Database
SQL (Structured Query Language) is a standardized language for querying and manipulating databases. PostgreSQL, on the other hand, is a Database Management System (DBMS) that uses SQL as its query language. Simply put, SQL is the language, and PostgreSQL is the software that utilizes that language.
Standard SQL Support
PostgreSQL is known for its strict adherence to SQL standards, but it also provides several extensions. This includes support for advanced data types like JSON and hstore (key-value store), indexing techniques like GiST (Generalized Search Tree), and various other features that make it more powerful than standard SQL.
Performance and Optimization
When it comes to speed and optimization, PostgreSQL offers several performance-enhancing features like advanced indexing techniques, partitioning, and query optimization based on cost-based algorithms. However, it’s worth noting that performance can be highly situational and dependent on the specific use case and system architecture.
Scalability
PostgreSQL can scale vertically by adding more powerful hardware resources. Horizontal scalability, although possible, typically involves using additional software solutions and can be more complex to implement.
Extensibility
One of the hallmarks of PostgreSQL is its extensibility. You can create custom data types, operators, and even write your own procedural languages for the system. This allows for highly customized solutions tailored to specific business needs.
Ecosystem and Community
PostgreSQL enjoys robust community support, with a plethora of third-party tools, libraries, and extensions. This leads to faster issue resolution and the rapid development of new features.
Use Cases: When to Use What?
Choose PostgreSQL for complex queries, ACID-compliant transactions, and when you need high extensibility. For simpler, read-heavy workloads, a NoSQL database or lighter SQL databases like SQLite may be more appropriate.
Examples and Code Snippets
Let’s take a practical approach by diving into some examples and code snippets to highlight PostgreSQL’s features.
Basic SQL Queries in PostgreSQL
Here’s a simple SELECT query that retrieves all employees in the Engineering department:
SELECT * FROM employees WHERE department = 'Engineering';
Unique Features in PostgreSQL
One unique feature is the support for JSON data types:
SELECT * FROM table WHERE data->>'key' = 'value';
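For context, here is a hypothetical sketch of how such a table might be set up (the table name, columns, and sample row are invented for illustration):

```sql
-- A table with a jsonb column (names are illustrative)
CREATE TABLE events (
    id   serial PRIMARY KEY,
    data jsonb NOT NULL
);

INSERT INTO events (data) VALUES ('{"key": "value", "count": 3}');

-- ->> extracts a field as text; -> keeps it as JSON
SELECT * FROM events WHERE data->>'key' = 'value';

-- A GIN index speeds up containment queries (@>) on jsonb
CREATE INDEX idx_events_data ON events USING GIN (data);
SELECT * FROM events WHERE data @> '{"key": "value"}';
```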
A common misconception is that PostgreSQL is slow and not suitable for large datasets. In fact, with proper tuning and hardware, it can handle petabytes of data.
Whether you’re a startup or an established enterprise, PostgreSQL offers robust, scalable, and extensible database solutions. Its strong community support and rich feature set make it an excellent choice for a wide range of applications.
|
OPCFW_CODE
|
Proofpoint has been studying the distribution of the August malware through a personalized email campaign targeted at retail staff with the purpose of stealing credentials and sensitive documents. In the attack, the cybercriminal group known as TA530 sends fake customer queries about duplicate charges or requests for help with orders, with information specific to the retailer. First, how does the August malware work? Second, what can people do to spot this type of social engineering email attack when their job is to read and respond to these kinds of emails?
Opening documents from untrusted sources is a risky business process, but it may be necessary in order for organizations to provide good customer support. These files could be screenshots or documents containing transaction details, but there is the chance they may contain malicious macros or fileless malware being delivered as part of a social engineering email attack.
Proofpoint researchers blogged about the August malware that is targeted at customer service staff and management in the retail and manufacturing sector. A social engineering email containing a malicious attachment is sent with embedded macros that first check to see if the system is being monitored, and if not, runs fileless malware with a PowerShell script to download the August malware.
The malware can copy files, extract saved credentials, copy cookies and send configuration data to a remote command-and-control server. It uses an encrypted connection where the encryption key is sent via the browser user agent string. The August malware checks the MaxMind IP database for network information, task counts, task names and recent file counts to see if it is being run in a sandbox or under analysis.
People can be trained to detect a malicious social engineering email or document in the same way they are trained not to open phishing emails, but this could be difficult to do effectively. The more effective mitigation approach would be to instruct customer service staff not to enable macros or open embedded documents, and to only open attachments when they are specifically mentioned in the email and the customer has indicated the document is part of their troubleshooting process.
Depending on the nature of the customer inquiries, it may be difficult to direct customers to use particular formats for their attachments. Instead, it might make sense to change the business process to avoid social engineering email attacks and the opening of potentially malicious documents rather than implementing more security tools to secure the customer service staff's computers.
A customer support web portal could be used to enable customers to submit data, upload images and convert files into benign file types. For example, a Word document could be converted into a PDF, or a JPG could be converted into a PNG, where the conversion utility strips unnecessary and potentially malicious content.
If it is still necessary to open documents from unknown sources, there are options. Most customer service staff only use specific programs and secure systems to enhance productivity, so given this limited functionality, their computers can be configured to use a sandbox or a virtual machine for any application opening files from untrusted sources. Both options could limit a malicious file's access to just the virtual space or sandbox, and the attacker would need to escape the virtual environment to move laterally on the network.
Learn how to empower employees to protect themselves from social engineering attacks
Read Frank Abagnale's advice to enterprises on fighting back against social engineering
Find out how to locate and remove obfuscated macro malware
|
OPCFW_CODE
|
Broken plastic HollowTech II BB spacers, why?
Yesterday I was installing a BB-MT501 HollowTech II bottom bracket into my ~2006 Trek 520 frame. This frame has a 68mm BB shell.
I was following the directions on Shimano's website: https://si.shimano.com/en/dm/LAFC001/install_remove_bottom_bracket
My process:
Bike frame BB shell was chased and faced by LBS
Cleaned out and dried BB shell
Applied light coat of grease to BB shell threads and to the BB cups
Used a 2.5mm spacer on the non-drive side and 1.8+0.7+2.5mm spacers on the drive side.
Threaded in both cups by hand almost all the way to ensure there is no cross-threading
Used a calibrated, high-quality torque wrench (Precision Instruments) to torque the non-drive side to 26 lb-ft (about 35 Nm), which is at the very low end of the 35-50 Nm range given by Shimano.
Used the same torque wrench to torque the drive side.
I was able to torque down the non-drive side successfully but when torquing the drive-side, both the 1.8 and 0.7 spacers cracked.
What did I do wrong?
I took everything apart and it doesn't look like the BB shell or the actual bottom bracket was damaged. Thankfully.
What should I do now? I'm thinking it might make sense to have metal spacers instead of plastic but I don't know if those exist.
Thanks
You've probably observed that the drive-side BB is left-threaded. Does your torque wrench work in this non-standard direction of rotation? My Wera Click A can act as a ratchet in both directions, but only limits torque in the clockwise direction
Yes: https://torqwrench.com/3/8-drive-micrometer-click-wrench---m2r100f/
Hello, I've encountered the exact same problem following the exact same procedure, with the same BB. What did you end up doing?
@IsidoreIsou, I ended up getting some metal 2.5mm spacers from a local bike shop. It wasn't easy to find, I called about 8 shops before finding one that had them in stock. The metal spacers installed just fine. I don't know why Shimano makes them out of plastic, seems like a poor choice IMO.
That is a very weird situation because it sounds like you were taking more care than most and did everything right. If the torque wrench were somehow reading way under that could explain it, but it sounds like you were using a good one in good condition so that's doubtful.
In this case I think you should write if off as a defect in the spacers. Metal 2.5mm BB spacers do exist, but there's nothing going on here that says you need something above and beyond the normal plastic ones.
The only possible thing is that it's a 20-100 lb-ft torque wrench so it could have been less accurate at the very low end. It's my only reversible one so that's what I used.
That can affect the accuracy some, but people use arbitrary amounts of torque all the time with the same spacers, and seeing them break is abnormal regardless. I think it's very unlikely to be the tool's fault.
I would suspect the stackup of spacers to be a contributing factor.
That's 5.0mm of spacer, but made up of three rings, which means each can compress at a different rate. If you'd had one spacer it would likely not have cracked as early.
I would try again with a single 5.0mm thick spacer.
Aside - 5mm seems like a lot - are you positive of that measurement ?
Take a look at the install instructions for that BB: https://si.shimano.com/en/dm/LAFC001/install_remove_bottom_bracket. They recommend 2x2.5mm on the drive side. Then there's a note that says "If using a band type front derailleur and a bottom bracket shell having a width of 68 mm, install the three spacers so that there are two on the right and one on the left as shown in the figure. ". The pictures show the 1.8 and 0.7. I'm not 100% sure since this is my first time doing this and it's sort of a custom setup. But I am reasonably confident.
@Criggie Yes, 2 on the DS and 1 on the NDS is the correct setup.
Why do they explicitly say to use the 3 spacers instead of 2 for the band-clamp FD setup? I don't understand what difference it makes but it worries me that using a single 5.0mm would have the same problem.
@Dan -- there are front derailleurs that mount to the bottom bracket. In this case the mount itself acts as a spacer. Google "BB mounted FD". Band-clamp means the FD is mounted on the seat tube, so you need an extra spacer.
@g.kertesz I did not know about the BB-mounted derailleurs, wow. Good to know. As for band-clamp, it seems that whether you have 5mm of spacers from 2x2.5mm or 1x5mm or 1.8+0.7+2.5mm, it shouldn't make a difference to the band clamp FD since the distance will be the same. Shimano's instructions said specifically to use 1.8+0.7+2.5mm for the band-clamp FD, instead of 2x2.5mm. I went with 2x2.5mm metal spacers since that was all that I could find at the local shops. Installed the BB with no issues. Still haven't installed the FD though.
I wouldn't be too worried. As Nathan said, you did everything right. By any chance do you remember what order you had the 3 spacers in? My guess would be that the two thinner spacers were being pulled apart as you torqued the BB down. Try putting some grease under the BB where it contacts the spacers next time--that'll ensure the BB slides over the spacers instead of dragging them along due to friction. Plastic spacers are fine, but I do like to use aluminum ones here. They may be available at your LBS, and they are definitely available for cheap online.
I used the order that the instructions pictured. From the BB shell to the cup face: 1.8, 0.7, 2.5. Grease is a good idea and I'll look for some metal replacements. Thanks.
@Dan Hm, sounds like there was just a lot of friction and the torque ripped the spacers apart. Perhaps the 2.5 was stuck to the BB cup and the 1.8 was stuck to the BB shell, and the poor 0.7 was caught in between. Don't try undertorquing it. It should all work fine even at the full 50Nm.
It's ironic, I tried 35Nm at first because I was worried that the plastic spacers wouldn't be able to handle the full 50. My plan was, assuming the 35Nm wasn't an issue, to split the difference and torque it down to ~43Nm after both sides were first at 35Nm. Is there a reason to go all the way to 50?
@Dan Ah, by undertorquing I meant like deliberately going <35Nm. Really, your BB just needs to be tight (hence the fairly wide range of 35-50). There are some theoretical benefits of leaning towards the higher end (more clamping force --> less chance of movement --> less chance of self-loosening or thread damage), but really anything in that window will work fine. The prerequisite being that your BB spacers stay in one piece of course.
thanks, that makes sense. I'm only seeing 2.5mm aluminum spacers online, is it safe to just use 2x2.5mm spacers rather than the 1.8 and 0.7 ones? My inclination is to just order these spacers since the online store I ordered the BB from is telling me to file a warranty claim with Shimano and I fear that could take ages.
@Dan Yes, 2x 2.5mm is perfectly safe, and I prefer that setup actually. The fewer interfaces between parts, the fewer chances for manufacturing error etc. The warranty is most likely not worth the time and hassle of chasing down a $2 spacer, and yes it will be terribly slow.
|
STACK_EXCHANGE
|
Is thanks a countable noun? Many thanks or much thanks?
A colleague of mine recently wrote in an email "much thanks for your efforts." Does this usage make sense? How does "much thanks" differ from "many thanks"?
This is similar to "Is “Many thanks” a proper usage?"
'Thanks' is generally a non-count noun taking plural agreement. The expression 'many thanks' is a crystallised form, in which trying to decide whether 'thanks' is count or non-count is pointless ('many thanks' is idiomatic, but 'six thanks' is unacceptable).
Much thanks means the same thing as many thanks, but many thanks is the standard form of this phrase, and much thanks is probably merely a corruption of the concept. Basically, thanks is a plural noun (think of each "thank" as an individual expression of gratefulness).
Grammar Girl has a post relating to and pertinently addressing this topic:
..."Which is correct: much thanks or many thanks? I hear much thanks but it just doesn't sound right."
According to the Merriam-Webster Online dictionary, "thanks" is plural, having come from the Middle English singular word "thank." Therefore, "many thanks" is the right phrase because we use "many" with plural count nouns, and we use "much" with mass nouns.
As to popularity, Ngrams, Google searching, COCA, and BNC all concur in that many thanks is vastly more common.
Perusing Google Books led me to a charming passage in a publication of letters to the editor which touches on this topic:
157.--In your article on "Incomparable Wessex, Again," in the May number, I see the expression "Much thanks." Will you kindly tell me if this is correct? Should it not be "Many thanks"?
Undoubtedly it should, though the writer may have had Shakespeare in mind. See "Hamlet," Act I, Scene 1. Still, the expression "Thank you!" or "I thank you!" is always to be preferred to "Many thanks!" or "Thanks!"
so is "much thanks" actually wrong? Since much is used with uncountable nouns and thanks is not an uncountable noun.
Pedantically, it is, but are you going to tell Shakespeare? You could also reason your way out by inferring that the idea of a single "thank" is absurd, and that thanks, like love, is a mass noun. Be that as it may, I will neither endorse it nor commit myself to its demise (since the lines are, as demonstrated, a little blurred). Grammar isn't always hard and fast rules.
Language changes. Shakespeare uses all sorts of obsolete expressions. I suspect thanks was used as a mass noun much more then.
In Act I, Scene i of ‘Hamlet’, Francisco says ‘For this relief much thanks.’ That may still be found in modern usage as your colleague has shown, but 'many thanks’ is much more usual. The Corpus of Contemporary English, for example, has 13 records of ‘much thanks’, against 243 of ‘many thanks’.
Yes, but I can't help thinking OP's colleague isn't a native speaker - or if she is, she's probably not what I would call a very competent one. As per your citation, much thanks would have been fine in Elizabethan times, but it sounds really weird to me in the context of emails.
@FumbleFingers: Indeed.
I have nothing to indicate that my colleague is not a native speaker. His name is very American and he speaks very natively, so it surprised me to see him use this form in an email.
|
STACK_EXCHANGE
|
Why do people dislike the resources I've used in the past to get into master's and Ph.D. programs?
I'm an autistic Ph.D student (US, 4.5 years in) who's been active on a fair amount of forums and have received a fair number of negative comments in the past month regarding my academic experience. They are particularly focused on the support I received from my parents due to my neurodivergence so that I could gain admission to graduate programs.
I've had a coach all four years of undergrad, my gap year, during my master's program when I submitted my Ph.D. applications, through my second year of my Ph.D., and on-and-off after that.
I have always had my completed materials in hand before they were reviewed. The official term for that is "copyediting," which is permitted in academic circles to my understanding. All coursework, thesis work, quals work, and my dissertation are my own work. I once hired another copyeditor for my quals, but that was to clean up my writing because my advisor at the time picked it apart a lot.
Here are some examples of responses I've had on forums.
Me: “My parents knew my undergrad grades (3.25 overall, 3.5 major psych GPA
[US system 4.0 = best, 1.0 = pass]) were poor for graduate school so
they hired a coach to help me with fleshing out my personal statement
and how I should phrase emails and communication to old contacts and
others who I'd eventually reach out to as well.”
Response: So these are the kind of resources first generation, low/middle class aspiring PhD students are up against? Jesus Christ.
Some other responses to similar posts:
There's a difference between accommodations and hand holding. Hell, it
sounds like they needed the life coach to get into their graduate
programs instead of just using assistance.
I know so many people who if they were given even half the resources
and “accommodations” that OP got, could be much farther along both in
their personal life and career. How are we accommodating those people
who don’t have the money/parental support to make it to 30 without
developing any life skills? It doesn’t seem right to “accommodate”
someone like OP when all someone else might have needed was a little
bit of financial security and career advice to accomplish more than OP
has.
It’s easy to say “yeah, you deserve all those accommodations and hand
outs”. It’s a lot harder when you have to ask if someone else deserved
it more.
To be blunt, you do not seem to have the qualities that I would
associate with getting a PhD and working independently. Your grades,
lack of direction and the need to use your parents and life coaches
all suggest that you are not likely to do well in any career that
requires a self-starter who can work independently.
I am trying to understand the perspective of the people making these negative comments, and if there is anything beneficial that I could take away from them? One of the comments is regarding my independence. I would like to be more independent but don't know how, is there anything I can take away from these comments to improve myself in that regard?
There may be a salvageable question in here without the venting, but I'm not sure even on that. This isn't to belittle your experiences, but they seem better suited for a professional therapist than a Q&A site.
Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on [meta], or in [chat]. Comments continuing discussion may be removed.
What are the quotes? Where are they from?
Allure and @AzorAhai-him-
Those quotes came from discussions on various academic subreddits with folks who have a verified graduate-school and/or professorial background. I get that the website brings out the worst in people, but the fact that these commenters are former graduate students and/or professionals is why I felt concern and curiosity. This is also not mentioning that I graduated high school with a class of 8 folks (I went there b/c I had severe depression to the point of suicidality in public school, even though I did fine academically) and was on the "wrong side of the achievement gap" in the end.
The only useful thing I think you can take away from this is to be careful about how you think about, and present, your privilege. Almost everyone is challenged in some ways (e.g., neurodivergent) and privileged in others (e.g., having access to parental and societal resources). Reminding yourself that you probably have some form of privilege* (Western, skinny, tall, non-racialized, middle class or above, cis, heterosexual, male, majority religion/culture, English-speaking ...**) is useful, both for your own perspective and when entering discussions on the internet.
That said, some people will be resentful no matter what you say. The fact that there are other people who have had fewer advantages/more, or different, challenges than you doesn't automatically make you a bad (or even undeserving) person.
* people who are truly underprivileged in all categories (e.g., poor females in low-income countries) will probably never even have the opportunity to get into arguments on the internet ...
** These are examples, I'm not necessarily saying that you check any or all of these boxes or that there aren't other important categories (I forgot to include "parents had access to higher education", although that's correlated with wealth/class ...) ...
I'm the downvote-- I don't think we should encourage questions that are this far out of scope with answers.
@user176372 Since the issue of "privilege" is now a major point of discussion at academia in general, maybe it is not quite as out of scope as it otherwise would be. I haven't made up my mind yet on the relevance of the question.
I am trying to understand the perspective of the people making these negative comments, and if there is anything beneficial that I could take away from them?
People resent the fact that you had resources that others lack, whether the "others" are the people making the comments, or someone else.
As Ben Bolker points out, everyone is advantaged and disadvantaged in some way. In your case, your disadvantage is your neurodivergence, and your advantage is having parental resources that helped. For other people, it's the other way around: they are neurotypical, but may have little or no support from parents, or be actively harmed by their upbringing or family.
It is very hard to quantify different combinations of advantages/disadvantages, and to say which of the people in my previous paragraph is "more" advantaged or disadvantaged. But people will nevertheless get riled up over things like this. Especially online.
What you could usefully take away from this is an understanding that some people will resent the additional resources you have to cope with your neurodivergence. Depending on your personal level of neurodivergence, you may have a harder time understanding and getting to grips with this kind of emotion your situation arouses in other people, so it might be useful to actively remind yourself about this fact once in a while.
Unfortunately, there is little you can do here. I would definitely not share information about additional resources with just anyone out there. But people you work with over some time will learn about this at some point (I am not writing "will find out" on purpose, because that implies that you were actively hiding it before). Some will be surprised and will accuse you of hiding this. Finding the right point in time and the right circumstance to discuss a topic like this is tricky... and therefore even harder for people with neurodivergence.
One of the comments is regarding my independence. I would like to be more independent but don't know how, is there anything I can take away from these comments to improve myself in that regard?
This is actually already something very good you took away. People very rightly point out that you need to become independent. A Ph.D. essentially certifies that the holder is academically mature enough to conduct independent research. There is again a fine line here: some people definitely profit from a native language proofreader, others may get statistical consulting, or employ a research assistant for the literature search. What is still "independent" in a researcher, and what crosses over into "paying someone to do scientific work for me, without disclosing this later on" is sometimes not easy to disentangle. Plus, of course, most people collaborate with others, which is yet different. To advance academically, you will need to show that you can both collaborate with others and lead a collaboration.
It sounds like this is something you might be able to work on. More importantly, it looks like you have already recognized this, which is good.
It is usually better for an endeavor like this to use outside help and support. Since this is really about your academic maturity and independence, a "general" therapist/counselor may not be of all that much help, unless they have some understanding of how the academic world works. This is absolutely a point where an academic mentor would be indispensable. Your Ph.D. advisor's role already is to prepare you for academic independence, and ideally, they have been doing so from the very first day.
So I would recommend you have a good talk with your advisor, explain to them the issue, and ask for explicit guidance on how to move forward. After 4.5 years, they likely (hopefully) have some understanding of you and your strengths and weaknesses. They may also be able to suggest other people who you could talk to, and/or put you in contact.
This of course presupposes that you have a good rapport with your advisor, which is helpful in any case, but especially so in your situation. If this is an issue, we have lots of threads here on this. And if you feel that your advisor is not the best person to mentor you here, perhaps you have met other more senior academics out there (e.g., at conferences) that you feel more comfortable approaching.
These commenters saw your post and thought "I wish your life coach was taken away" instead of "I wish I also had access to a life coach."
This is not an attitude you will ever please.
However, it's not wrong to want more independence. If working with a life coach or copy editor helps you, you should absolutely continue to do so. In my book, deciding to hire a life coach or copy editor to help with a task, and then managing that hiring yourself is a valid form of independence. When a neurotypical person decides that they need a haircut and goes to a hair salon, no one calls that dependence. It sounds like you may have already made this transition- you talk about your parents hiring a life coach to help you apply for grad school, and talk about hiring a copyeditor yourself to help with quals.
|
STACK_EXCHANGE
|
how to handle this string in python
I am accepting some binary data from a websocket.
I am trying to do json.loads(data) however I get a ValueError thrown
Printing it I get the following result (which is all valid json):
{"session":"SeFKQ0SfYZqhh6FTCcKZGw==","authenticate":1,"id":1791}
but when I inspected the string further, the print was turning this monstrosity into the json above:
'{\x00"\x00s\x00e\x00s\x00s\x00i\x00o\x00n\x00"\x00:\x00"\x00S\x00e
\x00F\x00K\x00Q\x000\x00S\x00f\x00Y\x00Z\x00q\x00h\x00h\x006\x00F
\x00T\x00C\x00c\x00K\x00Z\x00G\x00w\x00=\x00=\x00"\x00,\x00"\x00a
\x00u\x00t\x00h\x00e\x00n\x00t\x00i\x00c\x00a\x00t\x00e\x00"\x00:
\x001\x00,\x00"\x00t\x00h\x00r\x00e\x00a\x00d\x00_\x00i\x00d\x00"
\x00:\x001\x007\x009\x001\x00}\x00'
What is this coming back and how can I do something meaningful (turning it into a native dictionary via json.loads) with it?
Your data appears to be UTF-16 encoded, little-endian with no BOM (byte-order mark).
I would try first decoding it with the utf-16-le decoder:
data = data.decode('utf-16le')
And then load it with json.loads(data).
data = '{\x00"\x00s\x00e\x00s\x00s\x00i\x00o\x00n\x00"\x00:\x00"\x00S\x00e\x00F\x00K\x00Q\x000\x00S\x00f\x00Y\x00Z\x00q\x00h\x00h\x006\x00F\x00T\x00C\x00c\x00K\x00Z\x00G\x00w\x00=\x00=\x00"\x00,\x00"\x00a\x00u\x00t\x00h\x00e\x00n\x00t\x00i\x00c\x00a\x00t\x00e\x00"\x00:\x001\x00,\x00"\x00t\x00h\x00r\x00e\x00a\x00d\x00_\x00i\x00d\x00"\x00:\x001\x007\x009\x001\x00}\x00'
data = data.decode('utf-16le')
print json.loads(data)
Output:
{u'thread_id': 1791, u'session': u'SeFKQ0SfYZqhh6FTCcKZGw==', u'authenticate': 1}
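In Python 3, where websocket frames arrive as `bytes`, the same decode-then-parse approach looks like this (a minimal sketch using a shortened stand-in for the payload in the question):

```python
import json

# Shortened stand-in payload: UTF-16 little-endian bytes with no BOM,
# so every ASCII character is followed by a 0x00 byte.
raw = b'{\x00"\x00i\x00d\x00"\x00:\x001\x00}\x00'

text = raw.decode('utf-16-le')  # -> '{"id":1}'
obj = json.loads(text)
print(obj)  # {'id': 1}
```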
How did you determine the encoding?
Please consider adding more details in response to tipu's question.
@tipu Experience, mostly. I noticed that every other byte, starting with the second byte in the stream was 00. That meant every character was encoded as two bytes, in little-endian (least significant byte first) order. I also noticed there was no BOM at the beginning. Then I consulted this answer to remind me which decoder was appropriate.
The better question is, Where did your data come from? Did they have some method of indicating that it would be UTF-16 encoded?
@JonathonReinhart I should have looked at what the JavaScript web socket is sending when a binary protocol is specified in its messaging.
|
STACK_EXCHANGE
|
$('#XLabel').parent().html(
'<tr>' +
'<td>' +
'<label for="XLabel" style="font-weight: bold;"> x:</label>' +
'<input style="width:50px;" id="XLabel" name="XLabel"> ' +
'</td>' +
'<td>' +
'<label for="YLabel" style="font-weight: bold;"> y:</label>' +
'<input style="width:50px;" id="YLabel" name="YLabel"> ' +
'</td>' +
'</tr>'
);
//turn inputs into spinners
$('#XLabel').spinner({ step: 10 }).change(function () {
setCursor($('#XLabel').spinner('value'), $('#YLabel').spinner('value'));
});
$('#YLabel').spinner({ step: 10 }).change(function () {
setCursor($('#XLabel').spinner('value'), $('#YLabel').spinner('value'));
});
//attach changed callback to textbox for up and down buttons
$('.ui-spinner-button').click(function () {
$(this).siblings('input').change();
});
var setX = function (x) {
CurX = x;
$("#XLabel").html("<span style='color:#0000FF;'>" + CurX + "</span>");
NewX = (x + 5800) / 20;
//update cursor spot
$("#MySpot").css({
left: NewX
});
};
var setY = function (y) {
CurY = y;
$("#YLabel").html("<span style='color:#0000FF;'>" + CurY + "</span>");
NewY = (y + (YOffset * 20)) / 20;
//update cursor spot
$("#MySpot").css({
top: NewY
});
};
var setCursor = function (x, y) {
setX(x);
setY(y);
};
function UpdateXY() {
CurX = (NewX * 20) - 5800;
CurY = (NewY * 20) - (YOffset * 20);
$("#XLabel").spinner('value', CurX);
$("#YLabel").spinner('value', CurY);
}
|
STACK_EDU
|
Dustin “Dusty” Crum, often referred to as “The Wildman”, is an American reality television star and renowned snake hunter, best known for appearing on the reality series “Swamp People” and “Guardian of the Glades”.
Dusty made his television debut in 2018, appearing in “Swamp Mysteries With Troy Landry”, and from there gained the interest of the producers behind a number of reality shows. As he continued to appear in popular series, Dusty eventually became a fan favourite, a beloved fixture among viewers of the shows.
Although Dusty has only recently embarked on a career in reality television, with years of experience and specialization in his favorite field of hunting, catching and tanning snakes, Crum has already earned a popular reception among viewers, earning him widespread fame.
Viewers and fans of “Swamp People” would be especially familiar with Dusty, as well as his superstitious approach to snake hunting preparation, and his keen familiarity with the Florida Everglades.
While not exactly known for his mischief or outrageous behavior, Dusty recently made headlines in the tabloid media, leaving fans and followers alike in shock and fearing the worst for his future. While Dusty may not have been guilty of his recent tabloid appearance, the unfortunate accident he experienced shocked many of his loyal supporters.
As such, many may wonder how he is doing and if he will still be featured in some of the popular reality shows.
Who is Dusty Crum?
Born May 31, 1980 in Sarasota County, Florida, Dusty spent much of his childhood pursuing the one thing he is good at, which is catching snakes.
While attending Sarasota High School in his hometown, Dusty spent his free time exploring the Everglades, often catching snakes, including pythons, a passion that he would later turn into a professional career.
Initially, catching snakes was not very profitable, which led Dusty to look for work in many other fields. Dusty’s most prominent position as a young, hard-working man was that of a construction worker. Unfortunately, as a general laborer, Dusty’s duties consisted only of mixing cement and most of the heavy lifting. As a result, the everyday life and work schedule of construction work eventually became far too boring and he started looking for a more adventurous lifestyle.
Snake train rolls to ft myers.. see you tomorrow! Woooooooo!
Posted by Wild man on Friday, November 4, 2022
Thus began Crum’s life as “The Wildman,” and he soon became a noted python catcher and conservationist in the Everglades. In addition to capturing and often relocating the snakes, Dusty also developed his skills as a craftsman and sold numerous products made from materials harvested from the pythons that were unlucky enough not to be relocated.
While some may think that snake hunting is a cruel profession, the efforts of Dusty and many other snake hunters technically have a positive impact on the Everglades ecosystem, as it maintains populations of native creatures and wildlife.
Often the prey of Dusty and the other snake hunters, the famous Burmese Python is an invasive species brought to Florida by owners and breeders of exotic pet animals. The species’ first appearance dates back to the 1970s, and as a direct result of 1992’s Category Five hurricane, Hurricane Andrew, which devastated Florida on Aug. 23, many of the pet pythons were released or escaped into the wild.
This caused a major spike in the python population in the Everglades, which, while sounding harmless, had a negative effect on the local ecosystem. When mature, Burmese pythons can consume native species such as deer, small mammals, and other rodents, and while they are still growing, these pythons tend to prey on local bird populations.
As a result of this natural disaster, the local area found itself in need of snake hunters to remove Burmese pythons through hunting or relocation. Usually these pythons are hunted and their remains are used for a variety of leather products.
Dusty quit his job and embarked on the adventurous and undoubtedly dangerous life of a snake hunter, focusing mainly on the Everglades. Over the course of his career, Crum eventually became a conservationist, trying to save the endangered ecosystem of the Everglades.
By doing so, Dusty continually raises awareness of the ongoing struggle to recapture natural balance in the Everglades, and apart from actively hunting pythons, he gains public support through the sale of natural products.
Then in 2018 came the nature documentary “Swamp Mysteries with Troy Landry”, a series that explores remote swamps, clearings and a host of other habitats across the US in search of mysterious and dangerous creatures, which called on Dusty to appear as one of the show’s resident experts.
Of course, with many years of experience as a snake hunter and catcher, Dusty left a lasting impression on the production company “Truly Original”, also responsible for creating “Swamp People”.
Although Dusty only appeared in two episodes of “Swamp Mysteries With Troy Landry”, he nevertheless embarked on a career in reality television: the following year Dusty earned a place in the cast of “Swamp People”, appearing for the first time in the ninth season, and later in 2019 he made his debut in the ongoing series “Guardian of the Glades”, which premiered on May 28 of that year.
The series focuses specifically on Dusty, and the calling he took on to try and save the Everglades, documenting his journey as he struggles with the invasive Burmese Pythons.
In addition to his usual appearances in “Swamp People” and his dedicated series, Dusty was also featured in the spin-off series “Swamp People: Serpent Invasion” which is currently airing its third season.
In addition, Dusty also joined the cast of the 2020 special “Python scale”, which actively involved the public in Dusty’s efforts to save the Everglades. The special followed 750 contestants as they competed to catch the biggest, heaviest and most pythons.
Throughout his career as a snake catcher and hunter, Dusty worked closely with the Florida Fish and Wildlife Conservation Commission, with whom Crum specifically works to rid the Everglades of the invasive pythons.
He also works for the South Florida Water Management District as a bounty hunter, continuously hunting snakes in the local water resources, and is paid based on the weight and size of each catch.
Over the weekend, Dusty Crum caught the largest python in the program to date at 16 feet, 10 inches. It carried 73 eggs. #SFWMDmtg pic.twitter.com/tmk3zJJF8V
– South Florida Water Management District (@SFWMD) May 11, 2017
Dusty’s unfortunate accident
On November 4, 2021, Dusty was involved in a serious car accident that almost left the reality star and snake hunter without one of his legs. The accident happened as Dusty was driving home as a passenger in a car on Florida’s I-77 freeway.
Dusty and the driver, a close friend of his, were hauling goods from one workshop to another when one of the truck’s tires blew out, causing the truck and its trailer to fishtail, then flip and roll three times before coming to a stop.
During the accident, Dusty’s passenger-side window broke, and as the truck rolled, Crum’s leg was somehow thrown out the window, though he managed to pull it back into the cab before the truck finally came to a stop.
As a result of the crash, Dusty suffered serious injuries to his leg and knee, but remained conscious enough to tie a tourniquet around the injured limb, ultimately saving his own life.
As Dusty would later relate, he believed he might die in the accident, but he and his many loyal followers were indeed thankful that his life was spared during the traumatic event. After the accident, Dusty was airlifted to a hospital in Fort Myers, where he spent several days as doctors and surgeons worked to save his leg.
Dusty underwent five surgeries and then underwent physical therapy rehabilitation to regain use of his injured leg.
Just days after the accident, graphic photos made their way into online gossip publications, and Dusty’s family opened a GoFundMe account to raise money for his treatment and the payment of his hospital bills. In total, Dusty’s expenses totaled $20,000, which the reality star was fortunate to be able to afford.
Dusty survived the accident and his leg was not amputated, and as such he was able to continue doing what he loves most, catching snakes.
After the accident, many feared that Dusty might not return to “Swamp People” and “Guardian of the Glades”, but to everyone’s delight and surprise, Dusty was soon back to his normal self. Fans of “The Wildman” can see him in the latest seasons of the popular History Channel shows; as usual, the Florida native will be in action, catching snakes barefoot and wearing shorts.
|
OPCFW_CODE
|
The customizations that can be done relatively easily are detailed in this section. Before attempting any customization, please read Modifying Template Files below. Basic Customizations are processes that you will likely need to do to make the template look and work as you need it. These are tasks for site administrators, and cannot be done in Contribute. Most of these require relatively little experience with Dreamweaver or Fireworks:
These customizations can be done with other software packages, but may not be as simple, depending on compatibility and features. Altering library items automatically updates the pages that use them if you work in Dreamweaver. Library items may lose their usefulness in other packages.
Before altering any files, you need to be aware of the purposes behind the included folders. Files in these folders are overwritten when you download updates to the template:
Personalizing the Navigation Bars
You can alter the Navigation Bar by editing the library items provided, editing a copy, creating a new library item from scratch, or detaching the library item in the document you are working with. These choices let you change the contents of the Navigation Bar. When altering this bar, keep the height constraint for this area in mind (18 pixels). You can use plain text links in this area; the P.NavigationBar style will be applied to this text, which helps ensure that the text fits the vertical constraints of the editable area. Visually, it is best not to mix text and image links.
To create new image links for the navigation bar, see Creating New Buttons.
The provided library items for navigation are Main Navigation.lbi, Sub Navigation.lbi, and Home Link.lbi. These items must be edited with care since they contain image rollovers.
Navigation Logos.lbi contains links back to larger groups. The version provided includes links for the University and College, but more can be added as needed. The area this library item appears in has a height constraint of 16 pixels. You can use plain text links in this area; the P.LogoBar style will be applied to this text, which helps ensure that the text fits the vertical constraints of the editable area. Visually, it is best not to mix text and image links, and the university is very particular about using the correct font (Universe Condensed Light) with the "NC State University" text.
Creating New Backgrounds
Before you alter the background, it is recommended that you move any files you intend to alter from the Interface folder to Media inside of Contribute and allow Dreamweaver to update links. You can find the original images for the background in:
The fireworks source file is kept in the Interface directory:
The background for the top of the page can be any image, but one that is approximately 250 pixels high and repeats horizontally is recommended. It should probably also fade to, or otherwise blend with, the background color of the "#main" style, which is white (#FFFFFF) by default.
For your convenience, you may want to save a copy of the file you used to create the background image in:
and export it into the folder
Before you alter the buttons it is recommended that you move any files you intend to alter from the Interface folder to Media inside of contribute and allow Dreamweaver to update links.
To create new buttons for your navigation bar and footer, you will need to open the original Fireworks source files. The main site links are in:
the sub navigation images are in:
You can alter the text of any of the existing buttons, resize the slice that defines that button, and rename the slice to create a new image link for your site. Once you have done this, you can right-click on the slice and export it to create your new button. Save the file in:
You should also save the altered source file to:
You can now add this image to your Navigation Bar. Be sure to provide alt-text for this image to maintain ADA compliance for your site.
Use the Rollover Image, in the Common Components Tab, in Dreamweaver to make your images rollovers. When working with library items, be sure to take the scripts and body tag that Dreamweaver generates out before saving. Dreamweaver will then automatically put the scripts and onload body tag properties in all files that use the library item.
There are still plans to provide a PHP script in a later version of the template that will allow image buttons to be generated automatically.
Changing the Style Sheet
Altering the style sheet will quickly allow you to redefine text colors, fonts, styles, and sizes. First, in Dreamweaver, move the file(s) in Interface/css to Media/css. Dreamweaver will prompt you to update files in the site. This step is best done before putting your content on the template since the changes will be applied to the templates only.
To edit these files in Dreamweaver, double click on the file or open any document in the site and find the Design Panel. Click on the Edit Styles radio button. Select which style sheet or style you want to edit from the list, and click on the Edit button. It looks like a pencil over text.
The engrbasic_css file contains the definitions for rendering all tags and uses CSS1 syntax. This document is used by browsers that are CSS1 and CSS2 aware.
The engrlayout_css file contains the definition for layout with the template and uses CSS2 features. This file is inaccessible by browsers that support CSS1 but not CSS2.
Finally, engrlayout_colortabs_css contains the definition for displaying the "dog-eared page" tables of the template in a variety of colors. It can be taken out of templates that do not need it to save load time. The library items for using these pages are in the Library folder. To use them, insert the item in a page and click the Detach From Original button. To change the color of the tab, select the table tag and change the class to one of the provided colors. You will need to manually change the image for the dog-eared corner to match, but all other changes are automatic. The tables will also automatically "stack" if one appears directly below another, giving the illusion of depth.
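As an illustration only (the selector names and values here are assumptions, not taken from the actual template files), a typical edit to the basic style sheet might look like:

```css
/* Hypothetical example: redefine the default body text and link color.
   Selectors and values are placeholders; check the real engrbasic_css file. */
body {
  font-family: Verdana, Arial, Helvetica, sans-serif;
  font-size: 12px;
  color: #333333;
}

a:link {
  color: #CC0000; /* assumed accent color */
}
```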
For more information on altering the style sheet see Advanced Customization
|
OPCFW_CODE
|
Content blocks allow marketers to reuse the same content across multiple campaigns. For example, a block might be a repeatedly used header, footer, or designed call-to-action button.
With content blocks, marketers can:
- Create consistent campaigns using content blocks as headers, footers, or any other asset
- Create pre-defined assets that can be used across multiple campaigns irrespective of the channel and campaign type
- Edit multiple campaigns at the same time by simply updating the content block
Steps to create a content block
1. Navigate to Content >> Content block in the sidebar
2. Click on Create content block in the top right corner.
3. On the create content block page, provide the content block name. This name will be used to add the content block to a campaign, template, or another content block. It can contain letters, numbers, and underscores.
4. Add a description that clarifies what the content block is and where it should be used.
5. Add tags (you can use your existing campaign tags also) to add more context to the content block.
6. Select the type as HTML or Text depending upon your use cases. For example- For a coupon code (without styling) you can use a text type, while for an email header/footer you can use an HTML type.
HTML content blocks are supported only in Emails, while text-based content blocks are supported in all campaign channels.
7. Click Publish or Save as a draft.
Using content blocks
You can use your content blocks in two ways -
1. Selecting the desired content block while creating a campaign
- While creating content, press "@".
- A pop-up will open that will have two tabs: Personalisation and Content blocks.
- Navigate to the Content Blocks tab.
Select the desired content block from the drop-down and click on Done to insert the Content Block.
You can also insert only the content of the content block using the following toggle.
This means that only the content will be inserted while detaching itself from the existing content block and any future updates will not be reflected.
2. Directly insert the block label of your content block
a. Copy the Block label from your Content Block page.
b. Paste the Block label into the campaign/template/another content block.
Content Blocks can be added to the campaigns sent using the following channels:
Managing content blocks
You can manage all your content blocks by navigating to Content >> Content blocks
View content block
To preview a content block, click on the three dots at the end of the row in the listing and click on View. You can also click anywhere in the content-block's search listing to view the information about the content block. The View gives a summary of the selected content block - the thumbnail, created by, tags, and the usage.
Update content block
Updating a content block updates all the campaigns, templates, and other content blocks where it has been used.
Nested content blocks
You can also nest content blocks. This means that you can use content block A in another content block B.
If there is a cyclic loop, you will not be able to create or update a content block.
Deleting content blocks
A content block can only be deleted if it is not used in any active campaigns.
|
OPCFW_CODE
|
GnuCash page Development has been changed by Jralls
jralls at ceridwen.us
Wed Jul 6 09:54:32 EDT 2016
> On Jul 6, 2016, at 12:31 AM, Chris Good <chris.good at ozemail.com.au> wrote:
>> -----Original Message-----
>> From: John Ralls [mailto:jralls at ceridwen.us]
>> Sent: Wednesday, 6 July 2016 1:06 PM
>> To: Chris Good <chris.good at ozemail.com.au>
>> Cc: gnucash-devel <gnucash-devel at gnucash.org>
>> Subject: Re: GnuCash page Development has been changed by Jralls
>>> On Jul 5, 2016, at 4:42 PM, Chris Good <chris.good at ozemail.com.au>
>>> Hi John,
>>> Re your last change to http://wiki.gnucash.org/wiki/Development :
>>> Github pull requests: This is the preferred method if the change is
>>> non-trivial and there isn't already a bug report on the matter.
>>> The above implies to me that if there IS already a bug report, you'd
>>> prefer a patch. Is that correct?
>> I'm trying to make contributing palatable to all comers.
>> In reality it depends on how complex the patch is. If it's a big change I'd
>> rather review it on Github than in plain text on a bug. OTOH if it's a
>> simple fix it makes more sense and is less work to just do a format-patch
>> and upload it to BZ, especially if the user doesn't have a github account
>> already. I don't want to make it seem like that sort of patch is a
>> second-class citizen, and I don't want to make that paragraph so laden
>> with if-this-then-that that it turns off casual bug-fixers.
>> Can you come up with better wording?
>> John Ralls
> Hi John,
> How about:
> Github pull requests: This is the preferred method if the change is
> non-trivial. Patches are also acceptable.
> Let me know if this is OK and I'll do it. I'm very happy for you to spend
> your time more productively.
> Congratulations on 2.6.13 BTW. I haven't seen any reports of problems -
> Thanks to all developers :-)
That would be redundant. That line is 1 in a numbered list, and 2 is "Attach a patch to a bug report."
I've reworded it to make the bugzilla statement a bullet under the item and changed it from "and there isn't..." to "If there is already a bug report on the matter be sure to include a link to the bug in the pull request and comment on the bug with a link to the pull request."
I think that conveys exactly what we want.
|
OPCFW_CODE
|
Depending on what type of traffic goes over the network, it is often not acceptable for an employee to bring in a wireless router and set it up on your network. Such devices are frequently unsecured, or poorly secured, and present a backdoor into the network. What can you do to prevent rogue wireless access points from being introduced into your network?
Lucas's answer above is a bit of a starting point. There are however two or three other things that must be considered. These end up being somewhat outside the scope of network engineering, but certainly have impacts for network engineering and security so here they go.
You probably want some way of preventing wireless cards in company laptops from being switched into ad hoc mode. Assuming the laptops are running Windows, you probably want to use a GPO to set to infrastructure mode only. For Linux, it is harder to fully restrict, but there are ways to do this too.
Enforcing IPSec is also a good idea, particularly with good key management and trusted enforcement. For example if you can go to X509 certs for key management this can keep unauthorized devices from communicating with the rest of your network directly. Consider key management as a core part of the infrastructure here. If you use a proxy server you may even be able to block unauthorized devices from accessing the internet.
Note the limitations of your efforts. None of these prevents a person from setting up an unsecured wireless access point connected to a USB NIC, for sole purposes of communicating with their computer, especially if the SSID is hidden (i.e. not broadcast).
Not sure how to further contain problems, or whether further paranoia is well past the point of diminishing returns.
First of all, you need to create a policy prohibiting the introduction of network equipment that is not owned or approved by the company IT department. Next, enforce port security so that unknown MAC addresses cannot connect to your network.
Third, set up a separate wireless network under your control, if possible and feasible, for accessing the internet with their (mobile) devices (if you give them what they want, they are less likely to introduce rogue APs). These access points should be secured with PEAP or similar and preferably run on a separate network.
Last, you can also run regular security scans using tools like NetStumbler to detect and track rogue access points in your network.
There is also the option to run IPsec over your network, so that if someone does set up a rogue AP, the exposed "waves" will not be readable in plain text by anyone sniffing the wireless network.
All of my experience so far has been with Cisco products so that is all I can really speak to.
The WCS controlled APs (lightweight and normal) have the ability to detect and report when non-trusted SSIDs pop up and how many clients are connected to it. If you have heatmaps set up and a decent number of access points you stand a pretty good chance of being able to figure out where the access point is in proximity to your APs. The only down side to this is that if you are in close proximity to any bars/coffee shops/college dorms/neighborhoods expect to see pages worth of "rogue" SSIDs that change as frequently as people move.
The WCS also has the ability to do some switchport tracing and alert you if rogues are plugged into your network. I have yet to have much luck getting this to work. I honestly haven't had a whole lot of time to play with it. By default, at least on my network, there seem to be quite a few false-positives with the way the trace works. Without looking for sure, I believe it only looks at OUI of the MAC and if it matches then you get an alert about a rogue on the network.
Lastly, the WCS also has the ability to contain rogue APs/SSIDs. It does this by sending deauthentication and disassociation messages to any clients that are connected to that AP.
From a monitoring standpoint, you could run a tool like NetDisco to find switchports with more MAC addresses connected than you would expect. It wouldn't automatically prevent a rogue WAP from being introduced to the network, but it would let you find one after the fact.
If the equipment connected to your switchports can be expected to remain static, MAC address limiting (with violations configured to administratively down the switchport) could prevent any rogue device (not just WAPs) from being connected.
Only if the AP is in bridging mode, can you catch it with port security.
Limiting the number of MAC addresses will not help, in the event the rogue AP is also configured as a "Wireless Router".
DHCP snooping is helpful, in that it will catch the Wireless AP, connected in backward, i.e., if the LAN port of the rogue devices that has DHCP enabled is connected to your network, DHCP snooping will drop the traffic.
With minimal budget, DHCP snooping is my only option, I just wait until a user is dumb enough to plug their AP in backwards ... then I go hunting :)
Personally, if the network is for the most part an all Cisco shop, meaning at least your access layer is setup with Cisco switches; I would look at port security and DHCP Snooping as a way to guard against this type of issue. Setting a max of 1 MAC address on all Access ports would be extreme but would ensure that only 1 device could show up on a switchport at a time. I would also set the port to shutdown if more than 1 MAC shows up. If you decide to allow more than 1 MAC, DHCP snooping would help as most consumer grade Wireless Routers introduce DHCP in the local subnet when the end user plugs the device into the switchport. At that point port security would shut the switchport down once DHCP snooping detects that the Access Point is offering DHCP.
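As a rough sketch of the Cisco IOS commands involved (interface names and the VLAN number are placeholders, and exact syntax varies by platform and IOS version):

```
! Access ports: allow a single MAC address and shut the port on violation
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation shutdown
!
! Enable DHCP snooping so DHCP offers from a rogue wireless router are dropped
ip dhcp snooping
ip dhcp snooping vlan 10
!
! Only the uplink toward the legitimate DHCP server is trusted
interface GigabitEthernet0/24
 ip dhcp snooping trust
```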
Don't forget that you can also run 802.1x on wired ports. 802.1x can prevent unauthorized devices and port-security helps prevent someone from hooking up a switch and tailgating the port. Remember that even with the best network controls in place, you must take measures at the PC level or users will simply be able to run NAT on their PC and bypass your network security measures.
As noted, first and foremost, policies matter. This may seem like an odd starting point, but, from a corporate and legal standpoint, unless you define and distribute the policy, if someone breaks it, there's little you can do. No point in securing the door if, when someone breaks in, you can't do anything to stop them.
What about 802.1X? You don't really care what the access point is, legal or not, as long as no one gets access to the resources behind it. If you can get the access point, or the user behind it, to support 802.1X, then without approval they get a connection, but they can't do anything.
We actually find this useful, as we assign different VLANs based on it. If you are approved, you get access to the corporate VLAN; otherwise, it's the built-in ad network. Want to watch our promo videos all day? We're OK with that.
Nessus has a plugin for detecting rogue APs - you could script a scan to look periodically.
Prevention is hard.
You could replace wired Ethernet ports by using WiFi for all devices, eliminating the need for people to set up their own APs, with 802.1X to authenticate and IPsec to secure the connection.
Detection may be the only reliable way:
Wireless links have high packet loss and probably significant delay variation. By monitoring packet loss and delay, you can detect connections made over rogue access points.
Have you give any thought to overlay wireless intrusion prevention systems (WIPS)?
Rogue APs come in many shapes and sizes (ranging from USB/soft APs to actual physical rogue APs). You need a system that monitors both the air and wired sides and correlates the information from both sides of the network to deduce whether a threat is actually present. It should be able to comb through hundreds of APs and find the one that is plugged into your network.
A rogue AP is just one kind of WiFi threat; what about your WiFi clients connecting to external APs visible from your office? A wired IDS/IPS system cannot fully protect against these kinds of threats.
|
OPCFW_CODE
|
Love it. I would like to see upgrades for the weapons and upgrades for the character. Also, about the reloading: since you made all the weapons reload bullets at the same rate (I'm guessing to simulate putting shells into the clip), I would like to be able to stop reloading mid-reload and shoot. I'm a huge fan of gore, and when I think of zombie games that's a big part of it for me, so I would like to see a lot more gore. Love the style of graphics you chose, and overall I like it. Please make another (doesn't have to be zombies XD)
Good job man your games are getting better :)
Sometimes it takes less art fancy and a simple approach :) great game mate
Pretty good, but nothing new
You did a fairly good job on this game; the gameplay is solid and I did not encounter any bugs or glitches (not even the locked movement bug that can occur in flash games), and I played up through wave 23.
The graphics are decent, and the weapon selection is nice. What truly detracts from this game however, is the complete lack of originality. There are countless other games of this type-- viz., isometric shooters in which the player has to fend off increasingly numerous waves of enemies, and in which the best strategy is simply to lead the enemy in circles whilst pouring ordnance into the crowds that gather.
There isn't really anything that makes your game stand out from all the other similar games, besides the fluid, lag-free gameplay and well-balanced weapons (and these things should be standard, not exceptional to all such games).
The only other negative point about your game was the speed of the enemies. Whilst the player can deal with this by simply running backward in a more or less circular course around the level, it doesn't make sense for the enemies to all be that fast, given that you called your game 'Epic Zombie Killer.' Your game doesn't capture the feeling of fending off hordes of zombies (as portrayed in countless other games and many films). Zombies are disturbing not because of their speed, but because of their resistance to damage, persistence, and numbers.
In short, it would make more sense to have slower, but tougher enemies that appear in larger hordes, perhaps with an occasional fast one mixed in. As you currently have it, the enemies aren't exactly what comes to mind for most people when they think of the word 'zombie.' Overall a good game, but rather lacking in replay value.
Good game, but the spawn points can be way too close to the character; the boss spawned right on top of me and I died instantly.
thats all I could say without breaking my concentration xD got all guns
kill all zombies in my sight and wasted about 20-30 minutes
Good....no GREAT game
but agreed, it's a little challenging; I almost died about 10 times
and it could use more weapons, but the selection was great
maybe a more challenging mode, say melee weapons?
|
OPCFW_CODE
|
Why is AWS Athena Binary Format for Geospatial Data Different From PostGIS?
I am trying to convert some code from Postgres/PostGIS to AWS Athena. The existing code uses WKB for representing geospatial data, so I want to keep using that. However, AWS Athena appears to have a different binary format than Postgres/PostGIS:
Postgres/PostGIS: SELECT ENCODE(ST_POINT(-82.9988, 39.9612), 'hex') returns 0101000000abcfd556ecbf54c02575029a08fb4340, which is the expected WKB.
AWS Athena: SELECT TO_HEX(ST_POINT(-82.9988, 39.9612)) returns 000000000101000000ABCFD556ECBF54C02575029A08FB4340, which is identical except for four leading zero bytes.
What are these four leading zero bytes? Is AWS Athena's binary representation of geospatial data just different, i.e., should I simply prepend four zero bytes before insertion? Or is there something that I'm missing?
It seems that the leading four bytes are the SRID. I now need some mechanism to convert my data to include this I suppose.
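The byte layout can be checked with a short parser. This is a sketch using only the standard `struct` module; it assumes little-endian WKB and treats the 4-byte prefix as the SRID (the prefix's own endianness is an assumption here, since a zero SRID reads the same either way).

```python
import struct

def parse_point(hex_str, srid_prefix=False):
    """Decode a hex-encoded WKB point; optionally strip a leading 4-byte
    SRID prefix (the layout observed in Athena's output above)."""
    raw = bytes.fromhex(hex_str)
    srid = None
    if srid_prefix:
        srid, raw = int.from_bytes(raw[:4], "big"), raw[4:]
    # Standard WKB header: 1 byte order flag, 4-byte geometry type.
    byte_order, geom_type = raw[0], int.from_bytes(raw[1:5], "little")
    assert byte_order == 1 and geom_type == 1  # little-endian, Point
    x, y = struct.unpack("<2d", raw[5:21])     # two IEEE 754 doubles
    return srid, x, y

# Plain WKB, as PostGIS emits it:
print(parse_point("0101000000ABCFD556ECBF54C02575029A08FB4340"))
# (None, -82.9988, 39.9612)
```

Passing `srid_prefix=True` on the Athena-style hex yields the same coordinates with SRID 0, which is consistent with the SRID interpretation above.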
FWIW, this no longer seems to be the case:
SELECT TO_HEX(ST_POINT(-82.9988, 39.9612)) no longer works at all, and
SELECT TO_HEX(ST_ASBINARY(ST_POINT(-82.9988, 39.9612))) returns
0101000000ABCFD556ECBF54C02575029A08FB4340
I would go the "safe" way and convert the column to WKT via ST_AsText() in Postgres, then transfer it to Athena and then convert it back to geometry via ST_GEOMETRY_FROM_TEXT.
Not technically an answer to the question, but that's what I ended up doing.
I've had issues in the past with invalid geometries arising out of a round trip from WKB -> WKT -> WKB
Have you tried ST_AsHexEWKB()?
SELECT ST_AsHexEWKB(ST_SetSRID(ST_POINT(-82.9988, 39.9612), 4326));
st_ashexewkb
----------------------------------------------------
0101000020E6100000ABCFD556ECBF54C02575029A08FB4340
Unfortunately, ST_AsHexEWKB() is not currently a registered function on AWS Athena.
|
STACK_EXCHANGE
|
- Room for expansion.
- Quality TV tuner.
- Desktop form factor.
- Lacks wireless keyboard/mouse.
- Subpar speakers, sound, and graphics.
The new Gateway 832GM Media Center PC ($999.99 list) provides an affordable starting point for consumers looking to update their home entertainment center. Though the midsize tower case may not blend in with existing A/V components, the use of BTX technology keeps the 832GM running cool and quiet, a must for any Media Center system.
The 832GM is housed in the same black and silver chassis as the Gateway 9310 series of high-end desktop PCs, and it uses the new BTX (Balanced Technology Extended) design for near-silent operation. Unlike the
The system has two FireWire ports and a USB port on the front of the system, and there are another six USB ports around back, along with a Gigabit Ethernet port and jacks for Intel's HD sound controller. Although the integrated audio solution is adequate for smaller rooms, a more robust sound card, such as Creative's Audigy2 ZS, may be a better choice if you're serious about the quality of your audio output. The system ships with a pair of desktop speakers that are woefully underpowered for a Media Center, so unless you're connecting to an existing sound system, you'll want to invest in a set of multichannel speakers.
The 832GM is offered as is—meaning you're on your own if you want a beefier configuration—but the tool-free chassis makes it easy to replace or upgrade internal components. The possibilities for upgrading the system are extensive, with two available PCI slots (one is 16X PCIe), two open hard drive bays, and two vacant memory slots. With a 3.0-GHz Pentium 4 630 processor running on an Intel 915GSE motherboard, the 832GM has plenty of muscle to run Media Center and home productivity applications, and it's capable of 64-bit computing, thanks to Intel's EM64T technology. Video comes by way of Intel's GMA 900 controller, which is fine for regular video, but garnered unimpressive 3D benchmark-test results. Gamers may want to use the open 16X PCIe slot to add a high-end 3D card.
The system comes with 1GB of DDR memory (expandable to 4GB) and a good-size 250GB SATA hard drive (7,200 rpm) for storing recorded TV programs, music, and other digital content. The multicard memory card reader and dual optical drives (a dual-layer/multiformat DVD burner and a CD-ROM drive) cover the gamut of removable and recordable media.
We were pleased by the clarity of the Hauppauge WinTV TV-tuner card, which also has an FM tuner and an IR blaster with a remote control. On the other hand, we were disappointed with the PS/2 keyboard and ball mouse; an optical mouse would be nice, but a wireless setup would be even better. In addition to the TV recording capabilities and other multimedia features included with Microsoft Windows XP Media Center 2005, the 832GM comes with Microsoft Works 8 and Money 2005, Cyberlink PowerDVD, and Nero 6 burning software.
The Gateway 832GM is worth a look if you're seeking an affordable, entry-level Media Center system, as long as you don't mind it coming in the shape of a desktop. More advanced users may be satisfied if they take advantage of the system's upgrade potential, or look to the more sophisticated, albeit expensive,
View the Gateway 832GM, WinBook PowerSpec MCE410, and HP Digital Entertainment Center z545
PC Magazine uses the same tests and the same scale when rating the multimedia—Music, Photos, Video, Gaming—on desktops and notebooks. We do this so it is easy to compare consumer notebooks against consumer desktops in addition to comparisons within each category. As a guide: the best desktops will score above a 90 on a given subject, and the best notebooks will score above a 70. The reason for this is even the most advanced notebook will have to give up some capabilities, when compared to a desktop, for portability: notably size and weight compromises that affect hard drive space and screen size, as well as power compromises that affect CPU and graphics performance.
- Subratings (out of 100):
More Media Center reviews:
|
OPCFW_CODE
|
Need to write test cases which already has template
16 freelancers are bidding on average $155 for this job
Hello! We're a full-cycle team of 30+ web developers. Having the required skills, we will be glad to help you with your project. We have some questions for you to clear up before we start. Please message us, so we More
Hi There I've excellent programming and development skills and knowledge. I can provide an efficient, perfect, well documented development of your Programming and Computer project according to 100% accuracy and requ More
✮✮✮ CLEANLINESS IS NEXT TO GODLINESS ✮✮✮ Dear, Nice to meet you. Reading your job description, I understood what you want in your project. I am proud of my Top skill, 8+ years of experience, 100% satisfaction rate i More
Hi Dear I read your requirements carefully. I have rich experience with Python Django, Flask, selenium, Web Scrapy, software architecture and so on. You can see my good reviews for python projects. Please touch me. I w More
Hi I am software engineer and have done many java and programming projects. You can share more details with me so that we can negotiate the price accordingly. Thank you
Hi, I'm an expert in writing test cases. I'm sure that I can easily do this project for you. We can have a chat about it. Thanks.
Hello dear! I am a full-stack developer with excellent expertise in both front-end (AngularJS/ReactJS/Vue) and back-end (Laravel/Node.js/CodeIgniter,YII,CakePhp/WORDPRESS) programming. I pay special attention to the st More
Hi, My name is Alexander. Thanks for your job posting. I have read your descriptions carefully and understood clearly what you want. I have 5+ years of experience in Web Development. Wish you could not lose an amazing More
If I get this job, I will do it as soon as possible and will deliver it to the employer by the given deadline.
Hi Dear. I am a sql script expert. I am ready to start now. Lets discuss more details in chat. looking forward to hearing from you. Best regards.
I have 9 years of experience in testing. I have expertise in writing test cases and test scenarios, and strong SQL skills. I have strong analytical and problem-solving skills.
I'm an IT engineer with work experience in SQL. The SQL scripts will be tested and working! After every test case, it will be showcased to you for improvement!
I have professional Software Testing experience in a multi-national company with clients from the US. Please read my profile for more details and you will definitely find me an appropriate candidate for this job.
|
OPCFW_CODE
|
Introduce DOM element finders
Summary
Introduce findOne and findMany for searching plain DOM elements by JQuery selector.
Deprecate existing findElement/findElementWithAssert in v1.
Motivation
Today we have the following methods to find in the DOM:
function findElement(pageObject: PageObjectNode, scope?: string, options?: FindOptions): JQuery;
function findElementWithAssert(pageObject: PageObjectNode, scope?: string, options?: FindOptions): JQuery;
both of them return jQuery element collections, which should be eliminated to reduce leakage of jQuery to the end user.
Also, though they return a collection, the collection contains a single element by default. You can disable this behavior by passing an additional multiple option set to true. But here it starts conflicting with the finder name, because with multiple we look for many elements with findElement.
Detailed design
/**
* Looks for a single element by JQuery selector
*
* @throws If no elements found
* @throws If more than one element found
*/
function findOne(pageObject: PageObjectNode, scope?: string, options?: FindOptions): Element;
/**
* Looks for elements matching by JQuery selector
*/
function findMany(pageObject: PageObjectNode, scope?: string, options?: FindOptions): Array<Element>;
Neither of these methods needs a multiple flag, because it's pretty straightforward which method to use when you need a list of elements. Also, if you just need to check whether the element exists without throwing, you can check the element count returned by findMany.
And probably most important, we don't return a jQuery collection anymore. Note that we still allow jQuery-like selectors; this is necessary to make collections work. Anyway, with the new finders we should be able to switch to a different search implementation compatible with jQuery-like selectors.
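The relationship between the two proposed finders can be sketched in plain JavaScript. This is an illustrative sketch, not part of the proposal: `makeFindOne` and the injected `findMany` stub are hypothetical names, and the real finders would query the DOM rather than take an injected function.

```javascript
// Sketch: findOne as a thin wrapper over findMany, throwing on 0 or >1
// matches, matching the @throws contract in the doc comments above.
function makeFindOne(findMany) {
  return function findOne(pageObject, scope, options) {
    const elements = findMany(pageObject, scope, options);
    if (elements.length === 0) {
      throw new Error(`No elements found for "${scope}"`);
    }
    if (elements.length > 1) {
      throw new Error(`Expected one element for "${scope}", got ${elements.length}`);
    }
    return elements[0];
  };
}

// With a stubbed findMany, an existence check needs no `multiple` flag:
const findOne = makeFindOne(() => ['only-match']);
console.log(findOne({}, '.item')); // only-match
```

This also shows why the multiple flag becomes unnecessary: the "zero or many" cases live entirely in findMany, and findOne's strictness is just a wrapper policy.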
How we teach this
update docs
add a codemod to ease migration - ember-page-object-codemod
Drawbacks
It's a breaking change. However if we ever want to get rid of JQuery dependency we should avoid exposing its instances to the user, and that would be a breaking change anyways.
Alternatives
open to ideas about better names for the new finder methods
can we reduce the number of args by putting scope into the options, so the new interface becomes:
// when no options passed
function findOne(pageObject: PageObjectNode, scope?: string): Element;
// scope is passed as a part of `options`
function findOne(pageObject: PageObjectNode, options?: FindOptions): Element;
@ro0gr in which cases will users need to use this function? I'm using ember-cli-page-object in my project and haven't faced it yet, though I used the executionContext.findWithAssert method in my custom properties:
export function powerSelect(selector) {
return {
isDescriptor: true,
get() {
return function(text) {
const executionContext = getExecutionContext(this);
return executionContext.runAsync(() => {
const element = executionContext.findWithAssert(selector, {}).get(0);
return selectChoose(element, text);
});
};
}
};
}
findOne might be confusing, since it's the same name as in the new collection function, but with different args.
in which cases will users need to use this function?
My use cases are the following:
export default {
...
// inline getter
opacity: getter(function(this: any) {
const el = findElementWithAssert(this).get(0);
return Number(getComputedStyle(el).opacity);
}),
// kind of action
scrollToBottom() {
const el = findElementWithAssert(this).get(0);
el.scrollTop = el.scrollHeight;
return this._triggerScroll();
},
_triggerScroll: triggerable('wheel')
}
or define custom props:
export default function hasFocus(scope?: string, options: object = {}) {
return {
isDescriptor: true,
get(key: string) {
const opts = Object.assign({ pageObjectKey: key }, options)
const el = findElementWithAssert(this, scope, opts).get(0);
return document.activeElement === el;
}
}
}
Though I used executionContext.findWithAssert
executionContext is private, and for a good reason. There are many unnecessary internals exposed by this class. Using it in user land will complicate the migration path for such apps. Currently, I'm working on extracting only the necessary bits from the current execution contexts into a new thing called adapters; hopefully I'll come up with an issue describing the plan for that this weekend.
Talking specifically about executionContext.findWithAssert..
In fact, findWithAssert and find duplicate most of the logic between execution contexts. As far as I remember, the only thing that differs between them is detecting the contextElement (second argument) of the $ query:
https://github.com/san650/ember-cli-page-object/blob/c37d364d7d0a6a075fa4f6b384011fe7a4946717/addon-test-support/-private/execution_context/rfc268.js#L109
https://github.com/san650/ember-cli-page-object/blob/c37d364d7d0a6a075fa4f6b384011fe7a4946717/addon-test-support/-private/execution_context/integration.js#L60
https://github.com/san650/ember-cli-page-object/blob/c37d364d7d0a6a075fa4f6b384011fe7a4946717/addon-test-support/-private/execution_context/acceptance.js#L120
In my opinion this should be changed, so that only contextElement is exposed from the execution context/adapter and the rest of the find logic is extracted into the finder method.
findOne might be confusing, since it's the same name as in the new collection function, but with different args.
yeah, I know what you mean. That's why I'm asking about ideas for better names in the alternatives sections.
By the way, it's not only about different args. They are used in different contexts. The newly proposed findOne and findMany are standalone functions which have nothing to do with collections, and accept a page object instance as the first argument. I mean, this naming collision is probably not such a bad thing; they are even used in different layers of the test suite:
Collection.findOne mostly used in tests itself
findOne(pageObjectInstance, ...) mostly in definitions
@ro0gr got it. I use executionContext because runAsync is not exposed.
So this task will start 2.0 branch?
@yratanov I'd like to start v2 with only removal of deprecated apis.
The idea here is to:
Provide new finders
Switch to them internally in a backward compatible way, so consumers don't feel the difference. Seems like this should come with a minor version bump.
Suggest new finders in docs + deprecate findElement(WithAssert). minor version bump again, this would probably be the last release of 1.x.
a bit out of scope here, but a similar strategy I'd like to apply for adapters, so when v2 starts we just remove all the legacy finders and all the execution contexts.
does it make sense for you?
@ro0gr in some cases you need to find an element without asserting. For example, in the isHidden property you want to count "element not found" as a valid result. Consumers would also like to have this ability in their custom properties. So I think asserting should be explicit, maybe as part of the options hash, like findOne(page, selector, { assert: true }).
PS. It also made me think that we shouldn't have added asserts in #461, as there might also be reasonable use cases, like:
assert.equal(collection.findOne('text', 'Lorem'), null, 'No Lorem in the list')
What do you think?
To check whether an element exists, findMany is meant to be used:
const elementExists = findMany(this, scope, options).length === 1;
same with the collections case; you can use filter there if you don't need a strict find.
|
GITHUB_ARCHIVE
|
WIP: Harmonize calls to EVP_Digest*{Init,Update,Final}()
Commit 2 was motivated by @Bugcheckers remark https://github.com/openssl/openssl/pull/6819#issuecomment-410296964. The other two commits are by-products.
Commit 1
EVP_DigestInit.pod: add missing entries to RETURN VALUES section
Commit 2
Harmonize calls to EVP_Digest*Update()
The majority of callers do not check for an empty buffer (buffer_length == 0) before calling
EVP_Digest*Update(), because all implementations handle this case correctly as no-op.
(See discussions in #6800, #6817, and #6838.)
Commit 3
Noticed while working on commit 2. Can be squashed with commit 2 if desired, or discarded if you think it is consistency overkill.
Harmonize calls to EVP_Digest*{Init,Update,Final}()
According to the manual pages, the functions return a boolean value,
so checking for `EVP_Digest*() <= 0` is misleading. Also the
statistics show a clear tendency towards using `!EVP_*()`
~/src/openssl$ grep -rn 'EVP_Digest\(Sign\)\?\(Init\|Update\|Final\).*<=' | wc -l
77
~/src/openssl$ grep -rn '!EVP_Digest\(Sign\)\?\(Init\|Update\|Final\)' | wc -l
245
At least this test fails:
not ok 1 - ../../../_srcdist/test/recipes/30-test_evp_data/evppkey.txt
not ok 1 - run_file_tests
../../util/shlib_wrap.sh ../../test/evp_test ../../../_srcdist/test/recipes/30-test_evp_data/evppkey.txt => 1
not ok 7 - running evp_test evppkey.txt
Thanks Richard! The latest fixup seems to heal most of the evp_test failures, except for the SM2 tests, which I will have to look at later. Strangely, the test outcome is 'ok' although the SM2 tests report a mismatch. Is this a bug in the test?
# Starting "SM2 tests" tests at line 18402
# ERROR: (memory) 'expected->output == got' failed @ test/evp_test.c:1111
# --- expected->output
# +++ got
# 0000:-3045022100f11bf3 6e75bb304f094fb4 2a4ca22377d0cc76 8637c5011cd59fb9
# 0000:+3045022061fa912d d72fff1aebf4857a 77189bdf25c2ae63 fe6d530e7878d89d
# ^^^^^^^^^^ ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^
# 0020:-ed4b130c98022035 545ffe2c2efb3abe e4fee661468946d8 86004fae8ea53115
# 0020:+570505b602210099 be0398f934a02cc6 d47cf774e44294ad b35c6741ad81693c
# ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^
# 0040:-93e48f7fe21b91
# 0040:+a14caef35d5af2
# ^^^^^^^^^^^^^^
#
# INFO: @ test/testutil/stanza.c:121
# Starting "Chosen Wycheproof vectors" tests at line 18440
# INFO: @ test/testutil/stanza.c:33
# Completed 1442 tests with 0 errors and 0 skipped
ok 1 - test/recipes/30-test_evp_data/evppkey.txt
ok 1 - run_file_tests
I created issue #6857 for the test problem.
Since @romen and @kroeckx pointed out to me that the SM2 test was not really failing, I think this pull request is ready for review now. I rebased, squashed the fixup and reordered the commits but did not make any additional source code changes. Only a little rewording of the commit messages.
ping
ping @openssl/omc for a second review.
Speaking of harmonization, have you read doc/man3/EVP_DigestSignInit.pod? Take an extra look at the "RETURN VALUES" section. That would need an update, no?
Hmmm, ok... thanks for pointing out that inconsistency with the documentation. I will have to check the sources first to see whether the documentation is correct and EVP_DigestSign*() and EVP_Digest*() indeed return different kinds of values. The EVP_DigestSign*() call statistics are undecided; either way, approximately half of the calls could be harmonized.
EVP_DigestSign
~/src/openssl$ grep -rn 'EVP_DigestSign\(Init\|Update\|Final\).*<=' | wc -l
26
~/src/openssl$ grep -rn '!EVP_DigestSign\(Init\|Update\|Final\)' | wc -l
20
EVP_Digest
~/src/openssl$ grep -rn 'EVP_Digest\(Init\|Update\|Final\).*<=' | wc -l
51
~/src/openssl$ grep -rn '!EVP_Digest\(Init\|Update\|Final\)' | wc -l
225
This pull request has aged and I currently have no plans to resume it.
|
GITHUB_ARCHIVE
|
Thanks, I got it fixed by deleting the config file, but I got another issue. I set up a SMB share but it only detects my "SharedDocs" folder, not the one I set up for network sharing. The folder I did set up for sharing works fine on my laptop, but not SMS. Any suggestions?
In SMS 2.8rev2, when we start to play an AVI file that has already been played before, it asks if we want to continue from the point where it stopped, right?
But if we use a remote control, the triangle button does not work; the cross works. The triangle works only with the PS2 controller.
Other problems with Samba Server
Last edited by ps2novice; 06-07-2008 at 08:47 PM.
Reason: add attachment
DVD reading mechanism?
Since my PS2 is not modded I made my own SMS boot disc combined with the memento patch for both booting straight to SMS and the PS2 reading the disc.
The problem is that SMS only shows the VIDEO_TS and AUDIO_TS folders implanted by the memento patch to trick the PS2, meaning it's reading the UDF filesystem instead of the ISO9660 filesystem with the ELF and the video files!
Is there any option to load the ISO filesystem instead of the dummy UDF one?
It seems like a trivial fix but since I'm not the author I suppose I can't judge that.
Why don't you just start SMS from the MemCard and use this tutorial to create a readable DVD?
best regards Jones23
I'm very new to SMS (Just installed on the the PS2 on the weekend) and wanted to say: Niiiiiiiice ;-)
Thank you to the developer(s) for such a nice, polished product. I especially like being able to run it in 1080i on my big HD telly.
I have three issues, which may already have been raised in here, but with 50 pages of thread going back over two years I can't search through it and my forum searches have come up blank.
1) When I use the SMB connection to access files on my Linux server, and I click on a file it doesn't know (e.g. a WMV) SMS just hangs on "Detecting File Format" - I have to Power off & reboot
2) When opening an HD Clip (DIVX5 1280x720 3226kbps, 6Ch AC3 48000Hz 384kbps) It hangs at the end of "Loading indices.." - Again I have to power off & reboot.
3) The SMB connections don't work if you have autostart network enabled.
Other than this everything seems to work great and I'm very happy with it.
Many thanks in advance for any help you can provide.
DMS 4.0 Pro
I have, and I'm now happily on my third wasted DVD (they all have issues no matter how many variants of the procedure I try), while I have the feeling that the one I memento-patched might work if SMS would just skip the UDF partition and go to the ISO one (the disc DOES boot, after all, while I've made one unreadable disc and one with filenames screwed up... I'd rather just make a data ISO and memento-patch it...)
Originally Posted by Jones23
EDIT: Actually I just made one and it works, but the PC saw messed-up filenames while SMS sees them correctly, although I still insist that skipping the UDF partition should be made a button or option or something...
I downloaded a CD image that already had uLaunchELF on it.
Then I used uLaunchELF to put SMS on my memory card. So easy!
Depends on how your PS2 chip can boot stuff, though.
The Maximum Resolution is 1024
|
OPCFW_CODE
|
Get live news, updates, releases, trends, social networks about the cryptocurrency Dentacoin (DCN).
Dentistry Enters Blockchain via Dentacoin | Blockchain
More than 28 million people use GitHub to discover, fork, and contribute to over 85 million projects.
Dentacoin (DCN) Trading 13.3% Higher Over Last Week
Dentacoin (DCN) - Crypto Asset - CryptoScreener.com
Dentacoin: ICO - Most important facts of this Initial Coin
ICOrating Dentacoin basic review (https://dentacoin.com). DentaCoin is the first blockchain platform designed for the global.
Dentacoin(DCN)'s ICO details - CoinJinja, All About
Dentacoin Price Prediction 2018 to 2020 & Future Value. ICO review, whitepaper, token price, start and end dates, exchanges, team, and financial data. Dentacoin released partner clinics but they could be backtracked to.
THE CATALYST PROJECT REVIEW. About project - Name: Dentacoin - Ticker symbol: DCN - Project type: an Ethereum-based token. The Dentacoin token is already accepted as a means of payment in our.
Dentacoin - All information about ico Dentacoin ICO (Token
Creating a Dental Industry Community by Rewarding People
ERC20-token/DCN TimeLock addresses.pdf at masterTrusted reviews are rewarded with a higher amount of Dentacoin (DCN).
The Dentacoin (DCN) and Bitclave (CAT) tokens are now available in Jaxx.Unbiased and live Dentacoin (DCN) token and coin information side by side.
Upcoming Event Listing on Coinbit for Dentacoin (DCN
DCN Dentacoin details about team, social media, community
Interview with Dentacoin: The Blockchain SolutionDentacoin is the first Blockchain technology concept specifically designed for the.The Dentacoin ERC20 token is configured to be used globally by all individuals.
Dentacoin Weekly Updates: 1-8 Dec 2017. Dear supporters, driven by the willingness to engage our community even more, from today on we will keep you updated with our. Through these numerous blockchain-based tools, patients and dentists will be rewarded with Dentacoin (DCN). reddit. Dentacoin (DCN) is a new Ethereum-based token, customized for the global dental industry. Some cryptocurrencies have aimed at much higher prospects than walking in the footsteps of Bitcoin.
Dentacoin - Price, Wallets & Where To Buy in 2018
DentaCoin: blockchain platform for dental health will have its ICO on. Dentacoin (DCN). Dentacoin (DCN) was introduced to the cryptocurrency market in August 2017 and has since then climbed to the 57th position on the crypto. Dentacoin is the first blockchain concept designed for the global dental industry.
GitHub - Dentacoin/dentacoin.github.io
Dentacoin (DCN) - CoinMath
A custom-made ERC-20 standard Ethereum token called Dentacoin (DCN).
Dentacoin (DCN) Market Capitalization Achieves $123.61
Dentacoin [DCN] - Altcoin.io ExchangeDentaCoin has created a payment system with its new coin DCN token,. reddit.
Dentacoin Foundation says that its Dentacoin platform is focused on. paid in DCN while simultaneously guaranteeing stable.Dentacoin is the New Ethereum-Based Token, specifically designed for the global Dental Industry.
|
OPCFW_CODE
|
Showdown: Office 365 E3 vs. Microsoft 365 Business
Recently I wrote a few articles on the various subscriptions out there, and how confusing everything is getting as the 365 universe expands and morphs. Since that time, I have also been playing more and more with the Microsoft 365 Business subscription (as opposed to Office 365 Business). As previously explained, Microsoft 365 subscriptions are a relatively new beast, and they are essentially a “bundle of bundles”–they include Office 365 as well as some additional products for extra security, and even Windows 10 “subscription” licensing.
Generally, when you are comparing these SKU’s, you would look at “apples to apples” as much as possible–e.g. compare Office 365 E3 to E5, or to Office 365 Business Premium. That way you can see what is being “added” by being on the Enterprise vs. the Business track, or, by moving from E3 to E5 for example. But today I want to make a detailed comparison between two SKU’s from different universes.
The new Microsoft 365 Business is a compelling competitor, in some ways, to Office 365 E3, even though one is from the “Enterprise” track while the other is from the “Business” side (Microsoft targets the Business product line to small and mid-sized organizations with less than 300 total users). Further, the one product is strictly an Office 365 product, while the other is Microsoft 365, including Office 365, some security/compliance enhancements, and Windows 10 licensing.
But it is hard not to invite the comparison, because both of these SKU’s are priced at $20.00/user/month (USD), and contain so many similar features. Both include Exchange archiving, email encryption via Azure Information Protection, Data Loss Prevention (DLP) and more–how the two stack up against each other is actually pretty crazy.
Intune and Advanced Threat Protection (ATP) are included with the Microsoft 365 Business subscription but not with Office 365 E3. Just quickly: ATP gives us the ability to enable some advanced anti-phishing protections, safe links (all links are wrapped in a Microsoft URL and are scanned at time of the click), as well as safe attachments (attachments are detonated in a sandbox before delivery). Intune does a lot for managing devices of all types (not just mobile devices), but most notably allows us to remotely wipe company data and do factory resets for Windows 10. These features are $2.00/user/month for ATP, and $6.00/user/month for Intune, if purchased separately–but they are INCLUDED in Microsoft 365 Business, making this SKU a killer deal, in my opinion.
On the other hand, E3 does boast “Plan 2” for both Exchange Online and SharePoint Online. So for instance, Exchange plan 1 includes a 50 GB mailbox, whereas plan 2 includes 100 GB. But of course, both of these bundles also add archiving, which makes your mail storage essentially infinite. When it comes to the SMB, probably the main feature that is missing in Microsoft 365 Business, is the ability to use Office applications on a shared computer or terminal/RDS/Citrix server. So if that is a requirement you might be looking toward E3 anyways, OR, consider meeting that requirement another way, with old-fashioned Open licensing for example.
Now in the past, I have always recommended my clients toward Office 365 E3 because of the security features it included, such as DLP, encryption, etc. I also started recommending ATP as an add-on to it (again, $2.00/user/month). Although ATP is included with E5, not many of my clients want/need the E5 features (or care to pay its hefty $35.00/user/month fee). But now, since the Microsoft 365 Business subscription includes all the security goodness that I was already selling/implementing, AND it bundles some new add-ons that I would like my clients to have anyway, I may have to reconsider my “top” recommendation for a small business subscription…
But isn’t Microsoft 365 for “Cloud-only” businesses?
In the past, I may have misrepresented this a bit (and I know that others have also). Actually, I was careful to point out in my recent series on licensing that you can also have this subscription along with a domain environment, but I did emphasize again how it is not necessary to do so. This is because you can get a lot of device management capability from Intune and the built-in Device Management features of the subscription, just by joining your client computers to Azure AD, and therefore joining a local AD is completely optional. However, note that it is indeed possible to get the benefits of this subscription via a “Hybrid Join”–where the client computer is joined to a local Active Directory, as well as Azure AD. Many small organizations still have other applications hosted on-premises, and so have a local Active Directory. No worries, you can still use this subscription as a hybrid organization.
So in short, the more I explore this subscription, the more I love it. I think there are cases where we’ll still be stacking Office 365 E3 and EMS E3 together, where it fits the bill better, but this Microsoft 365 Business SKU solves a lot of small business problems in a single SKU, and for the same price as E3. Plus, it does include a subscription-based copy of Windows 10 Pro (called “Windows 10 Business”), and I can only assume that Microsoft is going to continue offering more carrots (or handcuffs, depending on how you look at it) to get you into a subscription model for the Windows 10 OS, anyway.
Of course, we must keep in mind that there are a couple of gotchas when moving down from Pro Plus to Business versions of the Office apps, but that divide has been shrinking, and I think overall it isn't uncommon for small businesses to be perfectly happy with the "Business" track. I mean, Access is even included at that level now (previously only available via Pro Plus). Nevertheless, if you are non-technical, you should not attempt to make this decision without very careful consideration and maybe even some consulting. So yeah, things keep changing. We should always remain flexible.
|
OPCFW_CODE
|
µVision User's Guide
The LOAD command instructs the µVision debugger to load an object file. You can load the object file of the current project when starting the µVision debugger by enabling Options for Target – Debug - Load Application at Startup.
µVision analyzes the contents of the file to determine the file type (if the file type cannot be determined, then an error message is displayed). The following file types are supported:
The LOAD command has several options that depend on the target in use:
The LOAD command supports the specification of an address offset for Cortex-M targets. This makes it possible to adjust the effective addresses of the loaded debug information and symbols for position-independent code. The offset specifies the base address to which the bootloader or overlay manager copies the code at runtime.
LOAD MYPROG.HEX
LOAD %L CLEAR INCREMENTAL 0x2000 // Clear previously loaded program information, and load linker output file with an address adjustment of 0x2000
LOAD MyAxf.axf 0x4000 // Clear previously loaded program information, reset target, and load application MyAxf.axf with an address offset of 0x4000
The first command loads myprog.hex.
Support for Key Sequences
A limited number of key sequences can be used in the LOAD command. This enhancement allows using a generic Debugger initialization file across multiple projects. Key sequences supported by the load command are:
The following examples assume that the µVision project file is available in the directory C:\Projects\Blinky and that the output directory is .\Output.
LOAD $L@L.axf // C:\Projects\Blinky\Output\Blinky.axf, loads linker output file
LOAD $L@L.hex // C:\Projects\Blinky\Output\Blinky.hex, loads Intel hex file
LOAD $L%L // C:\Projects\Blinky\Output\Blinky.axf
LOAD %L // C:\Projects\Blinky\Output\Blinky.axf
LOAD .\Output\Blinky.axf // C:\Projects\Blinky\Output\Blinky.axf
|
OPCFW_CODE
|
Docs: Nits for the cheat sheets
const inputLength =
(typeof input === "string" && input.length) || input
Nit: if input is "", inputLength will be "", not 0
Nit: CFA doesn't have keyword highlighting on anything but the first codeblock under "Assignment"
Nit: const is highlighted in blue on Class but in red on other pages
Super-Nit: in Interface, new is highlighted in red and followed immediately by the paren; in Type, new is plain black and followed by a space
Nit: const is plain black in Interface
(
What does "everything but 'static' matches" mean on the Types cheat sheet?
Syntax highlighting nits:
const:
Red in CFA > Assignment, Type
Black in CFA, Interface
Blue in Class
new:
Red in Interface, Class
Black in Type (+ followed by space)
get/set:
Red in Class
Black in Interface
Identifiers
Inconsistent blue / black
Types
Inconsistent green / black
Comments
Some comments in Class are a light green, difficult to read due to low-contrast
Property endings use mixed newline / semicolon (see e.g. Interface > Overloads)
CFA -> Assertion Functions uses slanted quotations
CFA > Expressions:
const inputLength =
(typeof input === "string" && input.length) || input
As mentioned above, if input is "", inputLength will be "". This could be replaced by a ternary, but if you wanted to specifically showcase the interaction with &&, consider:
if(typeof input === "string" && input.length > 5){
...
}
(in fact, I'd say have both an inputLength with a ternary and something like ^ to show that it works with both)
https://twitter.com/NoriSte/status/1483107658861359112
Think that's all existing feedback handled (and added my own now I've been away from it enough to have fresh eyes) - https://www.figma.com/file/x8FJrNqj6oupqWn1s3uMg4/TypeScript-Website-Design?node-id=3414%3A9
Will do the asset update on friday
Some remaining syntax highlighting inconsistencies:
Black const under Interface > Get & Set, CFA > * > Usage
Blue const under Class > Common Syntax
Black type under CFA > Discriminated Unions
Blue readonly under Class > Common Syntax
Black get under Class > Decorators and Attributes
Black this under Class > Abstract Classes, Class > Generics (v.s. purple (?) in Class > Common Syntax) (consider red to match other keywords?)
Potential inconsistencies:
Consistently black keywords:
new under Class, CFA > Assertion Functions
if, instanceof, throw under CFA
asserts, is under CFA > Assertion Functions (debatable)
super under Class > Common Syntax
import under Type > Type from Module
Green types under Class > Common Syntax (in implements) (same green as adjacent comments; somewhat confusing)
Identifiers inconsistently blue/black (maybe intentional for emphasis?)
Blue ! (in !:) under Class > Common Syntax (compare all-black ?: on the line above)
Alright, I think the inconsistencies I've left in now are generally all ones I want 👍🏻 (I opted for only field-name identifiers to have coloring)
|
GITHUB_ARCHIVE
|
I am trying to modify the table found in the following link (section 4, Post Event Calculated Tonnes) to recreate an identical table that I require: Special Event Follow Up. Someone prior to me created this sheet, and I would like to modify the formula and the headings in that specific table only. How can I recreate an identical table with different headings to represent the items I require?
I'm trying to create a 4 x 4 array filled with numbers input by the user. I then need to rotate the grid clockwise like I've shown below.
01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
13 09 05 01 14 10 06 02 15 11 07 03 16 12 08 04
I made all the numbers double-digit just for show in the example, so the table is easier to read. So far the code I have just tries to create the array and fill it with numbers from the user. It asks for a number once, then doesn't display anything at all.
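One way to get the clockwise rotation shown above (sketched here in Python for brevity; the question itself is about Java, but the index arithmetic carries over directly): element (i, j) of the rotated grid comes from row n-1-j, column i of the original.

```python
def rotate_clockwise(grid):
    """Rotate a square grid 90 degrees clockwise.

    The element at row i, column j of the result comes from
    row (n - 1 - j), column i of the original grid.
    """
    n = len(grid)
    return [[grid[n - 1 - j][i] for j in range(n)] for i in range(n)]

# 4 x 4 grid filled with 1..16 in row-major order,
# like the example in the question
grid = [[r * 4 + c + 1 for c in range(4)] for r in range(4)]
rotated = rotate_clockwise(grid)
# First row of the result is the first column read bottom-up: 13 9 5 1
```

In Java the same formula works with a second `int[4][4]` array and two nested loops: `result[i][j] = a[n - 1 - j][i]`.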
I'm trying to output an HTML page from a simple XML file, but I need to offer this as a service from a website, so users can browse and find the XML file on their local hard drives, and then generate an HTML page.
I think there are some security issues with HTTP not being able to access local files (i.e. C:\temp.xml), because it doesn't work when I browse to a local file from the website (files on servers work fine). Is there any other way to pass a local XML file to a website?
var xmlDoc=new ActiveXObject("Microsoft.XMLDOM") xmlDoc.async="false" xmlDoc.load(filepath)
I'm just starting up in web design and I have an interesting challenge that I'm hoping can be solved w/java script. I have a site with an application form. When the form is to be submitted, the form data needs to be emailed to the correct person to handle that particular application. However, that can't be determined by anything specific in the form. It can only be determined by the link that they clicked on to get to the form. I really don't want to have 22 identical forms with just a different EmailTo addie, which is what the previous site developer did. Someone please tell me this can be done w/java script? If not w/java script
I'm currently working on an HTML page to add additional input boxes. I have the following code. Does anyone know how I would go about adding the required values to each of the input boxes, for use later on within PHP?
I want the user to be able to move html elements around and even edit it like add effects like fade in and out etc.
Then after all the changes I want to overwrite the existing php file that does this for that user. how can you make such changes and then save it to a file?
I have to come up with a user authentication page that logs the user in and also gives them access to do the right things. I have attached the code and the Access file and have got started on a few things. I first need to create a login page with username and password fields, have that check the Access database, and then proceed to a page to do the following, depending on the user's access. For the Login button to even be enabled, the username and password must both have a value. I have no idea how to do that.
Add A User [No duplicate Users] Modify A User Delete A User
On a single form, I need to capture the users input on this input box
HTML Code: <p> <label for="Student ID">Student ID </label> <input type="text" name="sesStudID" /> </p>
to this input box HTML Code: <p><label for="User Name">User Name</label><input type="text" name="sesUserName" disabled="disabled" /></p> so as the user inputs to the first input box it will at the same time appear on the second input box.
Create a function that prompts user for a number. Develop the program so that it continues to prompt until it receives valid information. Then create a multiplication table that displays the number multiplied by 1 through prompted number.
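A minimal sketch of that exercise in Python (names are hypothetical; the `read` parameter exists only so the prompt loop can be exercised without a live keyboard):

```python
def prompt_for_number(read=input):
    """Keep prompting until the user enters a positive whole number."""
    while True:
        raw = read("Enter a positive whole number: ")
        if raw.strip().isdigit() and int(raw) > 0:
            return int(raw)
        print("Invalid input, please try again.")

def multiplication_table(n):
    """Return the lines of a table of n multiplied by 1 through n."""
    return ["%d x %d = %d" % (n, i, n * i) for i in range(1, n + 1)]

def main():
    n = prompt_for_number()
    for line in multiplication_table(n):
        print(line)
```

Splitting validation and table-building into separate functions makes each piece easy to test on its own.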
|
OPCFW_CODE
|
- May 15, 2020
You will write a memory management package for storing variable-length records in a large memory
space. For background on this project, view the tutorial on sequential fit memory managers available
at OpenDSA Chapter 11.
Your memory pool will consist of a large array of bytes. You will use a doubly linked list to keep
track of the free blocks in the memory pool. This list will be referred to as the freeblock list. You
will use the circular first fit rule for selecting which free block to use for a memory request. That
is, you keep track of the "current" free block, and the first free block in the linked list that is large
enough to store the requested space will be used to service the request (if any such block exists).
If not all space of this block is needed, then the remaining space will make up a new free block and
be returned to the free list.
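As an illustration only, the circular first-fit rule described above can be sketched as follows. This is a toy in Python (the assignment itself requires Java and a doubly linked list); a plain list of (start, size) pairs stands in for the freeblock list:

```python
class FreeList:
    """Toy circular first-fit free list.

    The assignment calls for a doubly linked list; a Python list of
    (start, size) pairs keeps this sketch short.
    """
    def __init__(self, pool_size):
        self.blocks = [(0, pool_size)]   # one big free block initially
        self.current = 0                 # index of the "current" free block

    def allocate(self, size):
        """Circular first fit: scan from the current block onward, wrapping."""
        n = len(self.blocks)
        for step in range(n):
            i = (self.current + step) % n
            start, block_size = self.blocks[i]
            if block_size >= size:
                if block_size > size:
                    # remaining space becomes a new free block
                    self.blocks[i] = (start + size, block_size - size)
                    self.current = i
                else:
                    # exact fit: the block leaves the free list entirely
                    del self.blocks[i]
                    self.current = i % max(len(self.blocks), 1)
                return start
        return None  # no free block is large enough

fl = FreeList(100)
a = fl.allocate(30)   # served from the front of the pool
b = fl.allocate(30)   # served from the shrunken remaining block
```

Merging adjacent blocks on release (required by the assignment) is deliberately omitted here to keep the scan logic visible.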
Be sure to merge adjacent free blocks whenever a block is released. To do the merge, it will be
necessary to search through the freeblock list, looking for blocks that are adjacent to either the
beginning or the end of the block being returned. Do not consider the first and last memory
positions of the memory pool to be adjacent. That is, the memory pool itself is not considered to be circular.
The records that you will store will contain the xy-coordinates and name for a city. Aside from the
memory manager’s memory pool and freeblock list, the other major data structure for your project
will be a hash table that supports search on the city names. The hash function will work on the city
name, but the table will actually store the ”handles” to the data records that are currently stored in
the memory pool. A handle is the value returned by the memory manager when a request is made
to insert a new record into the memory pool. This handle is used to recover the record. For a review
on hashing, please read chapter 10 from OpenDSA. You should adopt the character summation
hashing function. You will use closed hashing with linear probing as a collision resolution policy.
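A hypothetical Python sketch of character-summation hashing with linear probing (the assignment itself requires Java; the class name `ProbingTable` is mine, and a real implementation that supports deletion would also need tombstone slots):

```python
def char_sum_hash(name, table_size):
    """Character-summation hash: sum the character codes, mod the table size."""
    return sum(ord(c) for c in name) % table_size

class ProbingTable:
    """Closed hashing with linear probing, storing (name, handle) pairs."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size

    def insert(self, name, handle):
        i = char_sum_hash(name, self.size)
        for _ in range(self.size):
            if self.slots[i] is None:
                self.slots[i] = (name, handle)
                return i
            i = (i + 1) % self.size      # linear probing: step to next slot
        raise RuntimeError("table full")

    def search(self, name):
        i = char_sum_hash(name, self.size)
        for _ in range(self.size):
            if self.slots[i] is None:    # empty slot ends the probe sequence
                return None
            if self.slots[i][0] == name:
                return self.slots[i][1]
            i = (i + 1) % self.size
        return None
```

Note that "ab" and "ba" have the same character sum, so the second insert probes forward one slot; search follows the same probe sequence.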
Invocation and I/O Files
The program will be invoked from the command-line as:
The name of the program is Memman. The first parameter is the initial size of the
memory pool (in bytes) that is to be allocated. (Note that if the memory pool is ever not large
enough, then you will increase the size of the array by an increment of initial-pool-size.)
The second parameter is the size of the hash table that holds the handles to the records stored in
the memory pool. (You will not be expected to increase the size of the hash table.) Your program
will read a series of commands from a text file, with one command per line. The
program should terminate after reading the end of the file. No command line will require more than
80 characters. The formats for the commands are as follows. The commands are free-format in that
any number of spaces may come before, between, or after the command name and its parameters.
All coordinates will be signed values small enough to fit in an integer variable. All commands
should generate a suitable output message (some have specific requirements defined below). All
output should be written to standard output.
insert x y name
Insert a new city record into the hash table and the memory manager. Parameters x and y are the
xy-coordinates for the record, and name is the name of the city for this record. Name may consist of
upper and lower case letters and the underscore symbol. If there is no room in the memory pool to
handle the request, then grow the array size by creating a new array that is larger by an increment of initial-pool-size.
remove name
Remove the record with this name from the hash table. If there is no such record, print a suitable
message. Make sure to also remove the record from the memory manager.
search name
Print out all records (coordinates and name) that match name. If there is no such record,
then print a suitable message.
Dump out a complete listing of the contents of the database. This listing should contain two
parts. The first part is a listing of the hash table's contents, one record per line. If a given slot in
the table has no record, then print [EMPTY]. Print the value of the position handle along with the
record. The second part is a listing of the free blocks, in order of their occurrence in the freeblock list.
Your main design concern for this project will be how to construct the interface for the memory
manager class. While you are not required to do it exactly this way, we recommend that your
memory manager class include something equivalent to the following methods:
// Constructor. poolSize is the size of the memory pool in bytes
MemManager(int poolSize)
// Insert a record and return its position handle.
// space contains the record to be inserted, of length size
Handle insert(byte[] space, int size)
// Free a block at the position specified by theHandle.
// Merge adjacent free blocks
void remove(Handle theHandle)
// Return the number of bytes actually copied into space
int get(byte[] space, Handle theHandle, int size)
// Dump a printout of the freeblock list
void dump()
Another design consideration is how to deal with the fact that the records are variable length. One
option is to store the record’s handle and length in the record array. An alternative is to store the
record’s length in the memory pool along with the record. Both implementations have advantages
and disadvantages. We will adopt the second approach.
The records stored in the memory pool must have the following format. The first byte will be the
length of the record, in bytes. Thus, the total length of a record may not be more than 255 bytes.
The next four bytes will be the x-coordinate. The following four bytes will be the y-coordinate.
Note that the coordinates are stored in binary format, not ASCII. The city name then follows the
coordinates. You should not store a NULL terminator byte for the string in the memory pool.
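As an illustration of that record layout, here is a Python sketch. Two details are assumptions of mine, not stated above: the length byte is taken to count the whole record including itself, and the coordinates are packed big-endian (the spec does not fix a byte order):

```python
import struct

def encode_record(x, y, name):
    """Pack a city record: 1 length byte, two 4-byte signed ints, the name.

    Coordinates are stored in binary (not ASCII) and the name carries no
    NUL terminator, matching the format described above. Big-endian byte
    order and a self-inclusive length byte are assumptions here.
    """
    body = struct.pack(">ii", x, y) + name.encode("ascii")
    total = 1 + len(body)
    if total > 255:
        raise ValueError("record longer than 255 bytes")
    return bytes([total]) + body

def decode_record(record):
    """Inverse of encode_record: recover (x, y, name)."""
    x, y = struct.unpack(">ii", record[1:9])
    return x, y, record[9:record[0]].decode("ascii")
```

The byte-for-byte round trip makes it easy to unit-test the memory manager without involving the hash table.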
For this project, you may only use standard Java classes, Java code that you have written yourself,
and Java code supplied by the CS3114 instructor. You may not use other thirdparty Java code.
You may not use any builtin Java list classes for this assignment.
Programming Standards
You must conform to good programming/documentation standards. Web-CAT will provide feedback on its evaluation of your coding style and will be used for style grading. Beyond meeting Web-CAT's checkstyle requirements, here are some additional requirements
regarding programming standards:
• You should include a comment explaining the purpose of every instance variable or named
constant you use in your program.
• You should use meaningful identifier names that suggest the meaning or purpose of the
constant, variable, function, etc. Use a consistent convention for how identifier names appear,
such as “camelCasing”.
• Always use named constants or enumerated types instead of literal constants in the code.
• Source files should be under 600 lines.
• There should be a single class in each source file. You can make an exception for small inner
classes (less than 100 lines including comments) if the total file length is less than 600 lines.
We can't help you with your code unless we can understand it. Therefore, you should not bring
your code to the GTAs or the instructors for debugging help unless it is properly documented and
exhibits good programming style. Be sure to begin your internal documentation right from the start.
You may only use code you have written, either specifically for this project or for earlier programs,
or code provided by the instructor. Note that the OpenDSA code is not designed for the specific
purpose of this assignment, and is therefore likely to require modification. It may, however, provide
a useful starting point.
You will implement your project using Eclipse, and you will submit your project using the Eclipse
plugin to WebCAT. Links to the WebCAT client are posted at the class website. If you make
multiple submissions, only your last submission will be evaluated. There is no limit to the number
of submissions that you may make. You will submit the zip file of your project on Canvas as well.
You are required to submit your own test cases with your program, and part of your grade will be
determined by how well your test cases test your program, as defined by Web-CAT’s evaluation
of code coverage. The names of your test files should include the word “Test”, so that Web-CAT
knows what they are. Of course, your program must pass your own test cases. Part of your grade
will also be determined by test cases that are provided by the graders. Web-CAT will report to you
which test files have passed correctly, and which have not. Note that you will not be given a copy
of these test files, only a brief description of what each accomplishes, in order to guide your own
testing process in case you did not pass one of our tests.
When structuring the source files of your project, use a flat directory structure; that is, your source
files will all be contained in the project “src” directory. Any subdirectories in the project will be
ignored. You are permitted to work with a partner on this project. When you work with a partner,
then only one member of the pair will make a submission. Be sure both names are included in the
documentation. Whatever is the final submission from either of the pair members is what we will
grade unless you arrange otherwise with the GTA.
Your project submission must include a statement, pledging your conformance to the Honor Code
requirements for this course. Specifically, you must include the following pledge statement near the
beginning of the file containing the function main() in your program. The text of the pledge will
also be posted online.
// On my honor:
// I have not used source code obtained from another student,
// or any other unauthorized source, either modified or unmodified.
// All source code and documentation used in my program is
// either my original work, or was derived by me from the
// source code published in the textbook for this course.
// I have not discussed coding details about this project with
// anyone other than my partner (in the case of a joint
// submission), instructor, ACM/UPE tutors or the TAs assigned
// to this course. I understand that I may discuss the concepts
// of this program with other students, and that another student
// may help me debug my program so long as neither of us writes
// anything during the discussion or modifies any computer file
// during the discussion. I have violated neither the spirit nor
// letter of this restriction.
Programs that do not contain this pledge will not be graded.
|
OPCFW_CODE
|
How do I process photos on a Mac so they look right on Windows computers?
Apple computers use a different default color profile than Windows. Therefore, images are displayed differently on a Mac than on a Windows machine.
Processing a picture in Photoshop on a Mac for display on the Internet may not be the best idea, right? The resulting image on your screen will not match what many users see on the website where you show it.
Which would be the best way to deal with this problem?
I notice a big difference in the blacks and whites. Mac computers usually display images with a bigger contrast.
Actually, images are displayed differently on every non-calibrated display. If they are not, it is purely coincidental. The best way is to solve the problem for the general case:
Make sure your images look the way you think they look by using a calibrated display.
Embed the image profile in images so that other computers know how to interpret them.
For sharing, you really should use sRGB, which is the de facto standard. Applications that are not aware of color management then have the best chance of showing the image close to how it should look.
So, saving them as JPG (not as web) and checking the 'ICC Profile: sRGB IEC61966-2.1' ? Would that include the profile used?
That should do it. Also if you have a display calibrated to something else, remember to check Proof Colors so that you see what is being saved.
There is no definitive way to make images appear exactly the same on any two computers running the same OS, let alone two different operating systems.
Windows and Mac use different default gamma values, and that causes most of the difference in contrast appearance between the two operating systems; but you will notice a difference between any two screens on the same OS, as no two screens can be identically colour-managed, due to physical differences at manufacture and the ambient light hitting the display.
The quick and cheap way to get closest to a solution for your problem is to choose a colour profile from within Photoshop that suits both OS's such as Adobe RGB or sRGB.
You can spend a bit of money and get calibration hardware and software that will get you close, but it will never be perfect.
Hope this helps.
Images will always look different on different computers. The best you can do is to make sure that your own screen is calibrated so that you have a consistent baseline.
Even calibrated screens will not show images exactly the same. Different screens have different limitations, and calibration tools can not measure the light in the exact same way that eyes see it.
Also the viewing angle can affect how you see an image. If you display the same image at the top, bottom, right and left on a screen, you will see quite a big difference between them on some screens.
For publishing an image on the web, the sRGB color space is recommended, as it is supposed to be the native color space of an average generic uncalibrated monitor. Even if the target monitor is calibrated, it makes good use of the color space.
|
STACK_EXCHANGE
|
Numerical inaccuracy calculating intersection
I want to calculate intersections between a ray and a segment. For this I form the linear equation and look for an intersection. Now I encounter a numerical problem for one example. An abbreviation of my code:
public class Test {
public static void main(String[] args) {
double rayAX =<PHONE_NUMBER>3858895d;
double rayAY =<PHONE_NUMBER>845833d;
double rayBX =<PHONE_NUMBER>79195d;
double rayBY =<PHONE_NUMBER>4924565d;
double segAX = 450.0d;
double segAY =<PHONE_NUMBER>2127828d;
double segBX =<PHONE_NUMBER>79195d;
double segBY =<PHONE_NUMBER>4924565d;
double a1 = (rayBY - rayAY) / (rayBX - rayAX);
double t1 = rayAY - rayAX * a1;
double a2 = (segBY - segAY) / (segBX - segAX);
double t2 = segAY - segAX * a2;
double x = (t2 - t1) / (a1 - a2);
double y = a1 * x + t1;
System.out.println(x);
System.out.println(y);
}
}
Obviously the return should be<PHONE_NUMBER>79195,<PHONE_NUMBER>4924565) as this point is the same on both the ray and the segment.
But the actual return is in my case<PHONE_NUMBER>7919506,<PHONE_NUMBER>4284058)
In the second number there is an error already in the sixth decimal place.
I guess the error is because the values rayAX and rayBX are very close to each other. My question is: Can I get a more precise result when calculating the intersection?
Numerical problems are always likely when your system of equations is close to singular.
Here's a more numerically stable way of getting the intersection (note that it's actually the intersection of two lines... it seems like your original code didn't check if the intersection was within the segment either):
double rX = rayBX - rayAX;
double rY = rayBY - rayAY;
double sAX = segAX - rayAX;
double sAY = segAY - rayAY;
double areaA = sAX * rY - sAY * rX;
double sBX = segBX - rayAX;
double sBY = segBY - rayAY;
double areaB = sBX * rY - sBY * rX;
double t = areaA / (areaA - areaB);
// if t is not between 0 and 1, intersection is not in segment
double x = (1 - t) * segAX + t * segBX;
double y = (1 - t) * segAY + t * segBY;
Rough explanation: Let A and B be the endpoints of the ray, and let X and Y be the endpoints of the segment. Let P be the intersection point we're looking for. Then, the ratio of PX to PY is equal to the ratio of the area of ABX to the area of ABY. You can calculate the area using cross products, which is what the code above is doing. Note how this procedure only uses one division, which helps to minimize the numerical instability.
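The same area-ratio method can be written out as a runnable sketch; here in Python rather than Java, with hypothetical coordinates (the original values were redacted above), and with the segment-bounds check included:

```python
def segment_ray_intersection(ray_a, ray_b, seg_a, seg_b):
    """Intersect the line through ray_a/ray_b with the segment seg_a-seg_b.

    area_a and area_b are the signed (doubled) triangle areas of the
    segment endpoints against the ray direction; their ratio gives the
    parameter t along the segment. Only a single division is performed,
    which helps keep the computation numerically stable.
    """
    rx = ray_b[0] - ray_a[0]
    ry = ray_b[1] - ray_a[1]
    area_a = (seg_a[0] - ray_a[0]) * ry - (seg_a[1] - ray_a[1]) * rx
    area_b = (seg_b[0] - ray_a[0]) * ry - (seg_b[1] - ray_a[1]) * rx
    if area_a == area_b:              # segment parallel to the ray direction
        return None
    t = area_a / (area_a - area_b)
    if not 0.0 <= t <= 1.0:           # intersection lies outside the segment
        return None
    return ((1 - t) * seg_a[0] + t * seg_b[0],
            (1 - t) * seg_a[1] + t * seg_b[1])

# Hypothetical example: ray along the x-axis against a vertical segment at x = 2
p = segment_ray_intersection((0, 0), (1, 0), (2, -1), (2, 1))
```

For segment-segment intersection, as discussed in the comments, the same routine can be called a second time with the roles of "ray" and "segment" swapped, checking that both t values lie in [0, 1].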
This is surely more accurate! Do you have a link where I can get more information about this?
If I want to intersect two segments I guess I can find out another t if I calculate a new areaA and a new areaB by just switching "seg" and "ray" during the calculation? For this new t I can also check if it is between 0 and 1.
Do you have any link to a paper or where this approach is generally described? Or did you come up with this method by yourself?
Sorry, I don't have a reference. It's just something I came up with myself based on general geometry knowledge. This looks like a good resource though: https://www8.cs.umu.se/kurser/TDBA77/VT06/algorithms/BOOK/BOOK4/NODE184.HTM.
|
STACK_EXCHANGE
|
How to average to 1 minute
I am trying to average to 1 min. In my data I have some data points that were taken within the same minute and this is not letting me plot my data. For example my data time stamps might look like:
2019-12-04 16:59:27
2019-12-04 16:59:27
2019-12-04 16:59:28
2019-12-04 16:59:29
2019-12-04 16:59:29
2019-12-04 16:59:30
How do I average this so that it consolidates those duplicate data points into a 1 min average?
You can remove the seconds; if two stamps have the same time in minutes (16:59), they fall in the same minute. Then all you need is to collapse the duplicates.
It may be worth showing how you are trying to plot this as there is no reason why you cannot plot by the second
@PanwenWang That won't work as I have 59 data points for each minute, I'm trying to eliminate just those duplicated seconds because I need the data from every second of every minute.
Please share a line of your desired output, I'm not clear on what you are looking for.
For anything dates/times, the lubridate package is the way to go. In this case, you want round_date()
library(lubridate)
library(dplyr)
#First, create your dataset (at least, what I think it might look like)
df <- tibble(
time = ymd_hms(c(
"2019-12-04 16:59:27" ,
"2019-12-04 16:59:27" ,
"2019-12-04 16:59:28",
"2019-12-04 16:59:29",
"2019-12-04 16:59:29",
"2019-12-04 16:59:30"
))
) %>%
mutate(time = round_date(time, unit = "minutes")) %>% #Round the time variable to the nearest minute.
distinct() #remove duplicate rows.
The output:
# A tibble: 2 x 1
time
<dttm>
1 2019-12-04 16:59:00
2 2019-12-04 17:00:00
UPDATE: Looks like you're just looking for distinct rows, in which case just the distinct() function will do.
library(lubridate)
library(dplyr)
#First, create your dataset
df <- tibble(
time = ymd_hms(c(
"2019-12-04 16:59:27" ,
"2019-12-04 16:59:27" ,
"2019-12-04 16:59:28",
"2019-12-04 16:59:29",
"2019-12-04 16:59:29",
"2019-12-04 16:59:30"
))
) %>%
distinct() #remove duplicate rows.
Output 2:
time
<dttm>
1 2019-12-04 16:59:27
2 2019-12-04 16:59:28
3 2019-12-04 16:59:29
4 2019-12-04 16:59:30
But I need to just eliminate the duplicated seconds as I need the data for every second of every minute
Okay, I'll try to update accordingly (P.S. It's good practice to share your desired output)
If this answers your question and you could mark it as the answer, it would be appreciated!
|
STACK_EXCHANGE
|
This may be a stretch -- a great big stretch -- but it's a "what if" that I'd like to explore: What if a game company offered custom content for your campaign? What if this content included all the usual features of standard RPG game supplements? What if it was awesome: better than anything you could get from WotC, Paizo, or dare I say White Wolf... just for you. What would it be worth to you?
RPG companies follow a model where a group of "experts" sit around and think of really cool ideas, then put them into action. They create what they want, or what they believe will sell well (read: what their customers want), generally; they may get feedback from the community, tweak things a bit, and eventually publish their product(s). The end result is that you, as a gamer, are faced with either taking what they produce and ADAPTING IT to fit your own home-brew or variant campaign, or buying nothing at all.
More often than not the adapting part happens just about every time. Even with new campaigns. This has happened to me countless times: I buy some cool book; try to plug it into my game and then.. ding; I end up tweaking things to make it work. Endlessly. Not that I hate this part that much... sometimes this can be the most fun of a campaign build, but...
Do I have the same amount of time for campaign development as I did when I was 20? or even 30?
What if the tables were turned and you, the gamer, could go to an RPG company and ask them to produce something for you? Custom content is created all the time in other industries for very small venues. Heck, even in my industry (biotech), companies thrive on being one stop shops for nothing but custom solutions for their customers.
Imagine for a moment the following scenarios:
Scenario 1 - The Store Bought Campaign Became Your Home Brew Campaign
Mike & Jan co-DM a campaign that has been running for several years. It started out as a Forgotten Realms campaign, but over time it morphed into their own thing. Sure, the world map is the same and there are lots of things that stayed true to the FRCS canon, but there's also tons of new, custom content that makes it their own. Now, Mike and Jan are busy people. In college they had tons of time, but now with Real Life knocking on the door - time is short, but they still want to game. They have tried shoehorning store bought adventures - but they really don't fit well. Plus, who has time to slog through the hundreds of published adventures out there?
Scenario 2 - The Genre MashUp
Bill has a campaign he's recently started with some friends that mashes up Cthulhu with SciFi and Western genres. Barely anything is available to purchase anywhere that fits this bill. Time is short, and adapting store bought material to this unique setting takes time. What can he do?
Scenario 3 - Your World vs. Your Player's World
You've built a great new campaign setting from scratch. You've mapped it out, designed a few cities, jotted notes about a few important groups or factions, set up various story lines, plot hooks or HEX nooks (more about those another day). You're ready to play, and play you do.. and then your players head in the opposite direction you were hoping.
OMG... your rigid campaign has suddenly turned all sandboxy on you!
Fast on your feet, you adapt your world to meet the needs of the game. Your players don't know the difference, but between game sessions you're spending hours pushing out the envelope and rolling out the campaign carpet so that no matter where they go.. there you are. Isn't this exhausting though? What if you could phone or send an email to a team of game developers and say "Hey... I need your help! I've got this game going great, but I need an adventure for next week's session and I don't have the time to put it together!"? Would you?
I know this all sounds ridiculous... but with today's super-social-networked-crazy-crowd-sourcing-meta-mind it should be possible. Right? Why hasn't any company TRIED this?
SUPER GAMES, Inc.
We outsource your needs to build the campaign you want for the game system you want.
How valuable would a custom RPG gaming product be for your game? How much time do you spend prepping for gaming each week? What if you could reduce that time to ... oh ... an hour? Less?
I'm not asking 'how much would you pay' -- that's a different question. This is not about price.. because this is a ridiculous idea anyway.. it's about VALUE. What would a service like this be worth to you?
OK. You have a number in your head. Let me know in the comments.
Of course... you could also always take this approach, which I'm sure is what we are all basically doing now anyway.
Let's start with a caveat: I haven't seen or handled this device yet. Indeed many hours after its launch it wasn't even on Apple Australia's website, though it is now. Rather I'm basing my response on Apple's published material.
Firstly, this thing will sell, and sell lots. Apple is a good marketing company and this will sell. I've no doubt that it will change and morph over time and that will make it sell more. But right now here's my scoreboard.
iPad = iPhone - voice + screen size - portability
The key issue here is that this thing feels like a first edition. We don't see any startling new developments in multi-touch; we don't see any startling new capability; it seems like a larger, less portable iPhone. I think the screen real estate will provide an awesome browsing, reading game-playing experience. The problem is that iPhone "works" because it's in your pocket. Not because it's the best and brightest screen to work on or the simplest interface, rather it works because it is the best compromise and above all it's with you.
Not so the iPad, it's too big to be "always-with-you" and I'm sorry Steve but I don't think it's small enough (or sexy enough) to be "intimate".
For me though the killer is that it isn't a productivity machine. If this was to work for me I need to be able to leave a gadget at home when I travel. At the moment I travel with iPhone and MacBook. I need iPhone for voice communication and I very much like its app ecosystem (though not its closed nature) and its pocketability. I use the MacBook for real work though, composing lengthy emails, working on documents, spreadsheets, databases. In addition my MacBook has two Java apps that I cannot do without. One of them is an XML editor and the other is the local client for our Component Content Management System. I can't travel without them. Simple, end of story.
In addition every bit of productivity work I do sees me switching between applications - often the email app and the browser with a word processor interspersed from time to time. Or alternatively the local client and the XML editor. The iPad doesn't support multi-tasking and even though I'm a bloke and I'm not supposed to be able to, I do multi-task.
That means that when I travel I still need my iPhone - the iPad has no voice comms capability (yes I know about VOIP over 3G but I need real ubiquitous calling capability); I also need my MacBook because the iPad won't run my workday applications. The question then becomes "Does the iPad add sufficient value that I can add its 600g or so to my carry-on baggage?" Despite the welcome addition of iWork the answer is no.
There is one "maybe" though. It does make me wonder whether you might ditch the iPhone and revert to a $100 simple mobile phone and then the iPad might have a place alongside the MacBook. In the end I think that it just means that your bag got heavier for not enough reason really.
So the iPad doesn't create a place for itself in my bag. Three simple additions would get it there though: multi-tasking, support for Java apps and a move away from the closed App Store ecosystem to allow me to place my applications on there. Oh and a fourth thing: a real and accessible file system.
Beyond that I'd like support for modern wide-screen formats, HDMI out, an iSight style camera (making a cool Skype conferencing device) and a true "next-gen" multitouch interface.
What do you think?
"Must of " vs "must have"
I was browsing a completely unrelated site and came across the following interesting discussion on the ever increasing proliferation of the phrase, "must of":
... You mean "must have", btw. Or "must've". Spelling it 'must of' is wrong.
I suspect that "must of" is one of those phrases which is on the cusp of changing from "ever so wrong" into something that is perfectly acceptable.
I am interested in finding historical examples of a similar phrase moving to mainstream despite its initial non-alignment with conventional grammatical rules, and would be grateful if anyone could provide some concrete examples.
For some of them, very likely. For others, not so much. However, I think this question would fit better at crystalball.stackexchange.com. It is hard, if not impossible, to predict how language will evolve, as it depends on its users — and in the case of English, they are many and diverse! I'm afraid that speculation about such future evolution is too broad for this site.
@oerkelens very funny. I am surprised that the evolution of language is considered off topic here, especially as it has been studied quite seriously by eminent names in the field.
It's not the evolution, it's the future evolution. If there were some laws making a particular phrase more likely to move "to mainstream", there would've been something to discuss.
If you rephrased your question to "could you provide historical examples of a similar phrase moving to mainstream despite its initial non-alignment with grammar rules?", then there might've been something interesting to read in the answers.
@CopperKettle thanks for the constructive advice - will update accordingly :)
Can we vote on which we think will become accepted sooner, must of or prolly?
Using of in place of have (especially in "should've") has likely crossed the threshold in British English already. I don't have it in front of me right now, but I believe the dialogue in Harry Potter and the Philosopher's Stone uses of exclusively. (It's global-searched-and-replaced to -'ve for the American Sorcerer's Stone.)
@JohnY having read all of them (a guilty pleasure!) British English versions, I don't recall having seen "of" used in this context, and I am sure I would have remembered it, since I find this kind of deliberately unconventional grammar quite jarring! Unless it was used for character development of course. I will check later ...
@JohnY Rowling, I believe, read Classics at university, so I find it hard to believe she used "of" naively
"some concrete examples" - "concrete example" is itself one, since that use of "concrete" came along well after the word's original sense.
@HansAdler Didn't its use proliferate for phonetic reasons and speakers' unfamiliarity with its written form?
It used to be that people said "I had rather do this" (contracted to "I'd rather").
Nowadays, this sounds incredibly old-fashioned. People say "I would rather" (contracted to "I'd rather").
Of course, they're identical when they're contracted, which is probably responsible for the shift.
Might could be (ngram backs you up on the general shift), but the fact that would rather is also semantically the more sensible of the two options probably plays into it as well.
You could look to to old meanings of words and check the time period it changed.
An example would be prove, as in "The exception that proves the rule", since prove today means to demonstrate as true by evidence.
Today that sentence means "The exception provides evidence for the existence of the rule."
However, the old meaning of prove was to test the rule. Therefore, "The exception that proves the rule." used to mean "The exception that tests (the limits of) the rule."
The meaning shifted some time during/after the renaissance, I presume, as the old meaning of prove is categorized as Middle English.
I figure you're trying to work out what other words might have meanings that will change, but that is hard to tell. Spellings can change based on understanding of colloquial English, but there are hundreds of different dialects, so how something sounds like it could be spelled in one dialect can be totally different in another.
Looking to the future, we now basically have an almost universally accepted dictionary of the English language all online, so any changes to the meanings of words will come about much more slowly and likely be debated and disputed for a long time.
Hope that helps!
Welcome to ELU. "Answers" should attempt to answer the question (which was about must of), not engage in further discussion. This is different than forums. Please take the site tour and visit the help center for guidance on how to use this site.
You seem to have misread the question. Kindly try again. Zavtra was completely on topic.
One of the more notorious expressions (the OP asked for a phrase) which was considered absolutely incorrect in its infancy, but has today become so widespread that many speakers are unaware that it is, semantically speaking, contradictory is
could care less
Wiktionary offers a descriptivist's approach
could care less
(idiomatic, US) Lacking interest; having apathy towards.
Clipping of couldn't care less, which is literally accurate (having no ability to care less).
Usage notes: This expression is a malapropism, since the literal meaning of this version is the opposite of the intended meaning.
Another infamous example is fewer vs. less
Is it ‘less items’ or ‘fewer items’?
Is “There were less people than I thought” unacceptable compared to “There were fewer people than I thought”?
There will be staunch defenders on both sides of the tussle. Neither are wrong. Or maybe I should have said, neither is wrong?
Another example is to beg the question. It was properly a philosophical term of art that meant "to take more than one should from the question", to assume the truth of some part of the issue under discussion without proof or explanation. That's still the only definition offered by the proper OED entry but Oxford Dictionaries Online now gives the more sensible modern misunderstanding—"to invite an obvious follow-up question"—as its first result.
It's completely replaced the original meaning of the fairly common phrase, except among sumpsimus philosophers who believe technically correct is the best kind.
There are also several examples that continue to irk across English's dialect lines. Brits inexplicably speak of gents and ladies without a possessive marker in sight, while Americans insist that their gas stations have bathrooms. (Brits generally used to and some continue to use it the same way but now more often take it as a point of local pride to resist the usage.) Things can also go the other way: several features of Appalachian English are considered to represent the area's lack of education when they actually preserve aspects of the pronunciation, grammar, and diction brought over with the area's settlement. Ax for ask goes back to Old English and so does the idea of adding prefixes like a- to the beginning of verbs like a-fixin' to go get sth.
Hello there, well, here are some reports I'd like to make:
Well, first of all, after testing v0.3.1 and 0.3.2, gotta say that there are a lot of issues I was going to post that were solved, like a massive bug I found in Wadanohara and the Great Blue Sea:
And the massive bug Ib had near most of the endings:
In both cases the character can’t move and the game can’t continue (Although in WATGBS you can pause the game and leave, the problem doesn’t get solved). Tell me if you want details of these two issues.
Anyway, the point is that now I only have two issues to report: in Charon games the player still can’t find some pictures, so there are a lot of scenes that lose sense:
Plus, when testing 0.3.2 I found out that every time a problem appears, the Player crashes, so these games are now completely unplayable (v0.3.1 and 0.3 only showed the yellow message saying that the image wasn't found; plus, v0.3.1 sometimes crashes the PC brutally after some scenes). Something important to mention related to this is that this issue did not actually happen in v0.2.2. The games displayed normally and worked almost perfectly fine (despite music issues, of course):
The other issue happens in The Gray Garden. In the map of the Cave the character can walk over walls and pass through them, being able to walk along all the screen:
I’m pretty sure there are some more issues related to gameplay, but I saw that you were already informed of them, so that’s all for this part.
The first time I came here I posted saying that I couldn't get MIDI to work on the Wii port. Well, as of today, that problem is still there, plus it seems that 0.3 recognized even fewer mp3 music files than v0.2.2. Although I know that solving this problem completely is impossible due to some of the console's properties, if I'm not wrong, I noticed it when testing. I also know that music issues are the last thing that should be fixed in the Player, but it's something that bothers me sometimes, you know. Anyway, gotta say that if most of the music issues were fixed in v0.3.2 besides the ones you mentioned in the blog, like OFF's battle music, I haven't seen them because I'm testing it right now. I also hope that someday the Player will be able to transpose songs and loop .mid files that have controller event 111 assigned. That's it for music.
Well, I could continue posting more stuff, but I think it’s enough for now, just two more things to say:
If 3DS version comes to reality, I’ll be able to test it. And if you want me to record something, tell me. No more stuff to say, I’m open to questions or suggestions (I’d like to know if you can put images in spoilers so they don’t disturb the reading that much)
If I get this correct the first two bugs are resolved already? In that case we don’t need more information and are just happy
Concerning the other two issues: Can you provide us download links to the games you used and savegames shortly before the reported issues?
0.3.2 has another crash bug? Ugh, that’s unfortunate.
Unfortunately the Wii version has very low priority. There isn’t even a maintainer for it, we just keep it alive by recompiling… But in theory Midis should work, must be checked again. But the Audio was always really bad on the Wii and from our side this is not easily fixable because it’s an issue of the library we use (which is also not maintained anymore ^^)
[quote=“Ghabry”]Hello, thanks for the detailed bug report.
If I get this correct the first two bugs are resolved already? In that case we don’t need more information and are just happy ;)[/quote]
Yep, that’s correct, just wanted to mention them
[quote=“Ghabry”]Concerning the other two issues: Can you provide us download links to the games you used and savegames shortly before the reported issues?
0.3.2 has another crash bug? Ugh, that’s unfortunate.[/quote]
Yanderella: d-indiegames.blogspot.com.ar/201 … rella.html
The Gray Garden: vgperson.com/games/graygarden.htm Link for TGG Save right before entering the Cave
The file has a password, which is the word that you put at the end of an account registration on the forum (you know, the last thing that checks that the person registering is indeed a person; just tried to think of a creative password :P)
In Yanderella you can’t save up to the point where I took the screenshots, plus, as I said, in v0.3.2 the player crashes some seconds after pressing Start, so I can’t provide a savefile for this. Something I haven’t mentioned: checking the log file it says that the issue that causes the crash is the same (Image not found: Chipset), only with the difference that this version crashes and the previous ones continue with missing backgrounds.
Here are some other games of the same author that have the same issue as Yanderella: Mix Ore Makoto Mobius The Dark Side of the Red Riding Hood Mikoto Nikki
I see… well, it doesn’t matter that much; if you happen to find someone to continue with the work, tell me (I don't know anything related to software programming, so sadly I can’t help you with that)
'''
Testing script for YOLO.
1) Read the file names from the input folder to a list.
2) Load the files using pyTorch dataloader.
3) Feed the dataset by batches into the evaluation model.
4) Use NMS on the output.
5) Draw the bounding box(es) on the images and write the modified images into the output folder.
'''
import sys
from tqdm import tqdm
import torch
from torch.utils.data import DataLoader
from yolo_net import YOLO
from load_data import ToTensor, LoadTestData
from utils import draw_box
from post_process import PostProcess
import cfg
#Load the saved weights onto an instance of the YOLO network.
#(YOLO is the network class from yolo_net; constructor arguments, if any, come from there.)
MODEL = YOLO().cuda()
SAVED_MODEL = torch.load(cfg.TRAINED_MODEL_PATH_FOLDER + cfg.TRAINED_MODEL_NAME)
MODEL.load_state_dict(SAVED_MODEL)
MODEL.eval() #evaluation mode: disables dropout and batch-norm updates.
TEST_DATA = LoadTestData(resized_image_size=cfg.TEST_IMAGE_SIZE, transform=ToTensor(mode='test'))
#Check whether anchor sizes are available for the given image size.
try:
    _ = TEST_DATA.all_anchor_sizes[str(cfg.TEST_IMAGE_SIZE)]
except KeyError:
    print("No anchors for the given image size!")
    sys.exit()
DATALOADER = DataLoader(TEST_DATA, batch_size=cfg.BATCH_SIZE, shuffle=False, num_workers=4)
POST_PROCESSING = PostProcess(box_num_per_grid=cfg.K, feature_size=cfg.TEST_IMAGE_SIZE//cfg.SUBSAMPLED_RATIO, anchors_list=TEST_DATA.anchors_list)
with torch.no_grad(): #no gradients needed at test time.
    for i, sample in tqdm(enumerate(DATALOADER)):
        batch_x = sample['image'].cuda()
        outputs = MODEL(batch_x)
        nms_output = POST_PROCESSING.nms(predictions=outputs.clone().contiguous())
        draw_box(image_tensor=batch_x.clone(), pred_tensor=nms_output.clone(),
                 classes=cfg.CLASSES.copy(), output_folder=cfg.OUTPUT_FOLDER_PATH,
                 conf_thresh=cfg.CONFIDENCE_THRESH, start=i*cfg.BATCH_SIZE)
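The `PostProcess.nms` implementation isn't shown here. As a rough illustration of the non-max suppression step (step 4 of the docstring), a minimal greedy, IoU-based NMS in plain PyTorch might look like the sketch below; the `(x1, y1, x2, y2)` box layout and the 0.5 threshold are assumptions for illustration, not the project's actual interface:

```python
import torch

def iou(box, boxes):
    """IoU between one box and a set of boxes, all in (x1, y1, x2, y2) format."""
    x1 = torch.max(box[0], boxes[:, 0])
    y1 = torch.max(box[1], boxes[:, 1])
    x2 = torch.min(box[2], boxes[:, 2])
    y2 = torch.min(box[3], boxes[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring boxes, drop overlaps above iou_thresh."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = order[1:]
        #Discard remaining boxes that overlap the kept box too much.
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```

For example, given two heavily overlapping boxes and one distant box, only the higher-scoring box of the overlapping pair survives along with the distant one. In practice `torchvision.ops.nms` provides a batched, optimized version of the same operation.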
Recovering a 3D human body shape from a monocular image is an ill-posed problem in computer vision with great practical importance for many applications, including virtual and augmented reality platforms, animation industry, e-commerce domain, etc.
HumanMeshNet: Polygonal Mesh Recovery of Humans
3D Human Body Reconstruction from a monocular image is an important problem in computer vision with applications in virtual and augmented reality platforms, animation industry, e-commerce domain, etc. While several of the existing works formulate it as a volumetric or parametric learning with complex and indirect reliance on re-projections of the mesh, we would like to focus on implicitly learning the mesh representation. To that end, we propose a novel model, HumanMeshNet, that regresses a template mesh's vertices, as well as receives a regularization by the 3D skeletal locations in a multi-branch, multi-task setup. The image to mesh vertex regression is further regularized by the neighborhood constraint imposed by mesh topology ensuring smooth surface reconstruction. The proposed paradigm can theoretically learn local surface deformations induced by body shape variations and can therefore learn high-resolution meshes going ahead. We show comparable performance with SoA (in terms of surface and joint error) with far lower computational complexity and modeling cost, and therefore real-time reconstructions, on three publicly available datasets. We also show the generalizability of the proposed paradigm for a similar task of predicting hand mesh models. Given these initial results, we would like to exploit the mesh topology in an explicit manner going ahead.
In this paper, we attempt to work in between a generic point cloud and a mesh - i.e., we learn an "implicitly structured" point cloud. Attempting to produce high resolution meshes is a natural extension that is easier in 3D space than in the parametric one. We present an initial solution in that direction - HumanMeshNet, which simultaneously performs shape estimation by regressing to template mesh vertices (by minimizing surface loss) as well as receives a body pose regularisation from a parallel branch in a multi-task setup. Ours is a relatively simpler model as compared to the majority of the existing methods for volumetric and parametric model prediction (e.g., BodyNet). This makes it efficient in terms of network size as well as feed forward time, yielding significantly high frame-rate reconstructions. At the same time, our simpler network achieves comparable accuracy in terms of surface and joint error w.r.t. the majority of state-of-the-art techniques on three publicly available datasets. The proposed paradigm can theoretically learn local surface deformations induced by body shape variations which the PCA space of parametric body models can't capture. In addition to predicting the body model, we also show the generalizability of our proposed idea for solving a similar task with different structure - non-rigid hand mesh reconstructions from a monocular image.
We propose a novel model, HumanMeshNet, which is a Multi-Task 3D Human Mesh Reconstruction Pipeline. Given a monocular RGB image (a), we first extract a body part-wise segmentation mask using Densepose (b). Then, using a joint embedding of both the RGB and segmentation mask (c), we predict the 3D joint locations (d) and the 3D mesh (e), in a multi-task setup. The 3D mesh is predicted by first applying a mesh regularizer on the predicted point cloud. Finally, the loss is minimized on both the branches (d) and (e).
- We propose a simple end-to-end multi-branch, multi-task deep network that exploits a "structured point cloud" to recover a smooth and fixed topology mesh model from a monocular image.
- The proposed paradigm can theoretically learn local surface deformations induced by body shape variations which the PCA space of parametric body models can't capture.
- The simplicity of the model makes it efficient in terms of network size as well as feed forward time yielding significantly high frame-rate reconstructions, while simultaneously achieving comparable accuracy in terms of surface and joint error, as shown on three publicly available datasets.
- We also show the generalizability of our proposed paradigm for a similar task of reconstructing the hand mesh models from a monocular image.
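The multi-branch, multi-task objective described above can be sketched as a single loss function. The following is a hypothetical simplification for illustration only: the paper's actual loss terms, weights, and the exact form of the neighborhood regularizer are not specified here, and `edges` would come from the fixed template mesh topology:

```python
import torch

def mesh_multitask_loss(pred_vertices, gt_vertices, pred_joints, gt_joints,
                        edges, w_surface=1.0, w_joints=1.0, w_smooth=0.1):
    """Hypothetical multi-task loss: a surface term on the mesh-vertex branch,
    a joint term on the 3D skeleton branch, and a neighborhood (smoothness)
    regularizer over the edges of the fixed template topology.

    pred_vertices, gt_vertices: (batch, num_vertices, 3)
    pred_joints, gt_joints:     (batch, num_joints, 3)
    edges:                      (num_edges, 2) vertex-index pairs
    """
    surface = torch.mean((pred_vertices - gt_vertices) ** 2)
    joints = torch.mean((pred_joints - gt_joints) ** 2)
    # Neighboring vertices should deform coherently: penalize the difference
    # between predicted and ground-truth edge vectors.
    pred_edges = pred_vertices[:, edges[:, 0]] - pred_vertices[:, edges[:, 1]]
    gt_edges = gt_vertices[:, edges[:, 0]] - gt_vertices[:, edges[:, 1]]
    smooth = torch.mean((pred_edges - gt_edges) ** 2)
    return w_surface * surface + w_joints * joints + w_smooth * smooth
```

The two L2 branch losses mirror the multi-task setup (branches (d) and (e) above), while the edge term stands in for the mesh-topology neighborhood constraint; the relative weights would need to be tuned in practice.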