go-ping loses 100% of packets while vanilla ping works
This looks pretty odd to me...
Can anyone please point me in a direction?
I'm on macOS 15.1.1, if this helps.
○ $GOPATH/bin/ping -c 10 <IP_ADDRESS>
PING <IP_ADDRESS> (<IP_ADDRESS>):
^C
--- <IP_ADDRESS> ping statistics ---
10 packets transmitted, 0 packets received, 0 duplicates, 100% packet loss
round-trip min/avg/max/stddev = 0s/0s/0s/0s
○ ping -c 10 <IP_ADDRESS>
PING <IP_ADDRESS> (<IP_ADDRESS>): 56 data bytes
64 bytes from <IP_ADDRESS>: icmp_seq=0 ttl=47 time=243.834 ms
64 bytes from <IP_ADDRESS>: icmp_seq=1 ttl=47 time=221.254 ms
64 bytes from <IP_ADDRESS>: icmp_seq=2 ttl=47 time=243.035 ms
Request timeout for icmp_seq 3
64 bytes from <IP_ADDRESS>: icmp_seq=4 ttl=47 time=239.598 ms
64 bytes from <IP_ADDRESS>: icmp_seq=5 ttl=47 time=230.944 ms
64 bytes from <IP_ADDRESS>: icmp_seq=6 ttl=47 time=243.211 ms
64 bytes from <IP_ADDRESS>: icmp_seq=7 ttl=47 time=243.139 ms
64 bytes from <IP_ADDRESS>: icmp_seq=8 ttl=47 time=243.639 ms
64 bytes from <IP_ADDRESS>: icmp_seq=9 ttl=47 time=217.155 ms
--- <IP_ADDRESS> ping statistics ---
10 packets transmitted, 9 packets received, 10.0% packet loss
round-trip min/avg/max/stddev = 217.155/236.201/243.834/9.905 ms
Strange. This works perfectly for me using macOS 11.5.1 -- is 15.1.1 your actual version or a typo?
Can you please update to the latest version (go get -u github.com/go-ping/ping/...) and then try again? If that fails please could you try using --privileged?
Sorry it was a typo :-P. I'm on Big Sur 11.5.1.
So this is my further test, as you suggested:
○ $GOPATH/bin/ping --privileged -c 10 <IP_ADDRESS>
PING <IP_ADDRESS> (<IP_ADDRESS>):
Failed to ping target host: listen ip4:icmp : socket: operation not permitted%
○ sudo $GOPATH/bin/ping -c 10 <IP_ADDRESS>
PING <IP_ADDRESS> (<IP_ADDRESS>):
^C
--- <IP_ADDRESS> ping statistics ---
7 packets transmitted, 0 packets received, 0 duplicates, 100% packet loss
round-trip min/avg/max/stddev = 0s/0s/0s/0s
○ sudo $GOPATH/bin/ping --privileged -c 10 <IP_ADDRESS>
Password:
PING <IP_ADDRESS> (<IP_ADDRESS>):
24 bytes from <IP_ADDRESS>: icmp_seq=0 time=181.035ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=1 time=175.065ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=2 time=190.877ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=3 time=174.28ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=4 time=177.18ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=5 time=156.328ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=6 time=190.286ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=7 time=172.787ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=8 time=161.163ms ttl=47
24 bytes from <IP_ADDRESS>: icmp_seq=9 time=175.175ms ttl=47
--- <IP_ADDRESS> ping statistics ---
10 packets transmitted, 10 packets received, 0 duplicates, 0% packet loss
round-trip min/avg/max/stddev = 156.328ms/175.417601ms/190.877ms/10.347596ms
It's clear that go-ping works only when sudo and --privileged are both applied.
After digging into my system configuration for a while, I found that deleting/deactivating the content filter provided by a corporate EDR software in System Preferences -> Network fixes the issue.
So the next question is: why does vanilla ping have the magic power to go through the content filter? I checked the system's builtin ping binary and it's not setuid as I had originally speculated.
○ ll /sbin/ping
-r-xr-xr-x 1 root wheel 169K Jan 1 2020 /sbin/ping
○ [ -u /sbin/ping ] && echo SUID-bit is set || echo SUID-bit is not set
SUID-bit is not set
OK I'm confused here. :-P
It turned out go-ping uses UDP instead of ICMP when the --privileged flag is not provided (see code here). This explains why all the packets timed out: EDR software tends to block UDP traffic.
The rest of the question is not related to go-ping, but is still interesting enough: what makes macOS's vanilla ping so special that it can create a raw socket without privilege or setuid? Really curious about this.
It turned out go-ping uses UDP instead of ICMP when --privileged flag is not provided
Yep, that's exactly right. We only ping using actual ICMP if the pinger.SetPrivileged(true) API method is called. On some operating systems this is nothing special but on most *NIX systems, it requires root or special capabilities. If we're unprivileged then we ping using UDP packets instead, although this isn't supported by all operating systems. This is well-documented in the README here.
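For readers curious what the privileged ICMP path actually puts on the wire: an echo request is just a small binary message with an RFC 1071 checksum. Here is an illustrative sketch in Python -- this is not go-ping's code, and the function names are mine:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum (ones' complement sum) over the message."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # ICMP type 8 (echo request), code 0; checksum field starts as 0.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(ident=0x1234, seq=0)
# A correctly checksummed ICMP message re-checksums to zero:
assert icmp_checksum(pkt) == 0
```

The unprivileged UDP mode sends the same kind of probe, but through a datagram socket the EDR content filter was inspecting, which is why every packet vanished.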
EDR software tends to block UDP traffic
Cursed. Cursed. Cursed. Cursed. Cursed.
What makes macOS vanilla ping so special that it doesn't require privilege or a setuid bit to create a raw socket?
With regards to macOS's builtin /sbin/ping, take a look at https://apple.stackexchange.com/a/312861. You learn something new everyday! 😄
Batch queries with one missing value lose other values
If I have a kapacitor batch query for two values and the first one is missing, the second one will also be missing. This seems like a major bug: Kapacitor queries from InfluxDB will unexpectedly fail to retrieve data!
Steps:
Insert InfluxDB data with values v1 missing and v2 present. INSERT meas,t1=missing v2=2
Set up a Kapacitor batch query asking for values v1 and v2. batch |query('SELECT v1, v2')
Expected result:
A value of v2=2 is retrieved.
Actual result:
No value is retrieved for v2.
Additional information:
If v2 is requested before v1, then the value v2=2 is retrieved. batch |query('SELECT v2, v1')
Here is a bash script that demonstrates the issue, including setting up an InfluxDB database and a Kapacitor task. I am running InfluxDB v1.1.1 and Kapacitor v1.1.1 on OSX (installed via homebrew).
#!/bin/bash
# Set up InfluxDB with value v1 missing for some measurements.
influxd &>/dev/null &
sleep 1
influx -execute 'CREATE DATABASE kapacitor_repro'
# Tag t1 marks whether v1 is present or missing.
influx -database kapacitor_repro -execute 'INSERT meas,t1=present v1=1,v2=2'
influx -database kapacitor_repro -execute 'INSERT meas,t1=missing v2=2'
influx -execute 'SELECT * FROM kapacitor_repro.autogen.meas'
# Set up Kapacitor with queries.
kapacitord &
sleep 1
# If v1 is present, there is no problem.
cat > working1_batch.tick <<END
batch
|query('''
SELECT t1, v1, v2
FROM kapacitor_repro.autogen.meas
WHERE t1 = 'present' ''')
.period(1d)
.every(1s)
|log()
.prefix('working1_batch_query')
|min('v2')
.as('minv2')
END
kapacitor define working1_batch -type batch -tick working1_batch.tick -dbrp kapacitor_repro.autogen
kapacitor enable working1_batch
# If v1 is missing, we fail to query data for v2!
cat > broken_batch.tick <<END
batch
|query('''
SELECT t1, v1, v2
FROM kapacitor_repro.autogen.meas
WHERE t1 = 'missing' ''')
.period(1d)
.every(1s)
|log()
.prefix('broken_batch_query')
|min('v2')
.as('minv2')
END
kapacitor define broken_batch -type batch -tick broken_batch.tick -dbrp kapacitor_repro.autogen
kapacitor enable broken_batch
# If we query in a different order, we get data for v2!
cat > working2_batch.tick <<END
batch
|query('''
SELECT t1, v2, v1
FROM kapacitor_repro.autogen.meas
WHERE t1 = 'missing' ''')
.period(1d)
.every(1s)
|log()
.prefix('working2_batch_query')
|min('v2')
.as('minv2')
END
kapacitor define working2_batch -type batch -tick working2_batch.tick -dbrp kapacitor_repro.autogen
kapacitor enable working2_batch
kapacitor list tasks
sleep 2
kapacitor list tasks
# Note this output:
#[broken_batch:log2] 2017/01/18 15:49:19 I! broken_batch_query {"name":"meas","tmax":"2017-01-18T15:49:19.341909221-08:00","points":[{"time":"2017-01-18T23:49:17.175909077Z","fields":{"t1":"missing"},"tags":null}]}
#
#[edge:broken_batch|log2->min3] 2017/01/18 15:49:19 I! aborting c: 1 e: 1
#[broken_batch:min3] 2017/01/18 15:49:19 E! invalid influxql func min with field v2: invalid field type: <nil>
#[task_master:main] 2017/01/18 15:49:19 E! …

This seems like a major bug: batch queries unexpectedly return nil when they should not. I tried to include detailed steps to reproduce it--have they been insufficient?
Thanks.
I have also seen this issue.
@leon-barrett This issue looks like it is a duplicate of https://github.com/influxdata/kapacitor/issues/1294, which was fixed in the 1.3.0 release. Can you confirm that this issue has been fixed?
Yeah, it seems to be fixed, thanks!
(Though based on incident numbers, I'd argue which of the two issues is the duplicate :) )
Agreed :), glad it's working.
So the switch and the controller communicate using the OpenFlow protocol. This book aims to provide insight into the OpenFlow protocol and its fundamentals by walking step by step through the technology. Figure 3 shows the architecture of an OpenFlow switch. OpenFlow is the first standardized interface and the most commonly used protocol designed specifically for SDN: it originally defined the communication protocol in SDN environments that enables the SDN controller to manage the forwarding plane. Several versions of the protocol are available, and the standard is evolving quickly; in December 2011, the ONF board approved OpenFlow version 1.2, and Open vSwitch tracks support for subsequent releases.

Get acquainted with the OpenFlow network communications protocol. OpenFlow paves the way for an open, centrally programmable structure, thereby accelerating the effectiveness of software-defined networking. With OpenFlow, the packet-moving decisions are centralized, so that the network can be programmed independently of the individual switches and data center gear. OpenFlow is a layer 2 communications protocol that gives access to the forwarding plane of a network switch or router over the network. An OpenFlow-enabled switch is called an OpenFlow switch; in a conventional network, by contrast, each switch has proprietary software that tells it what to do. The switch and the controller establish and tear down their channel using connection setup and connection interruption procedures. The switch processes packets using a combination of packet contents and switch configuration state, and a protocol is defined for manipulating the switch's state.

Overview: OpenFlow, an instance of the SDN architecture, is a set of specifications maintained by the Open Networking Foundation (ONF). At the core of the specifications is the definition of an abstract packet-processing machine, called a switch. The OpenFlow specification describes the basic components and functions of an OpenFlow logical switch along with the OpenFlow switch protocol used to remotely manage it from an OpenFlow controller. Given this association with the OpenFlow switch protocol, it is important for the reader to be familiar with that protocol or be able to refer to the OpenFlow switch specification. The OpenFlow network architecture consists of three layers; the protocol is the link between the SDN control and infrastructure layers, implemented on both sides of the interface between them, so that OpenFlow-enabled infrastructure communicates with an OpenFlow controller via the OpenFlow protocol.

A controller is an application that manages flow control in an SDN environment; OpenFlow is an open protocol for communication between controllers and switches, allowing a server to tell network switches where to send packets. The protocol uses the concept of flows to identify network traffic based upon predefined match rules that can be dynamically or even statically programmed by the SDN control software. The OpenFlow pipeline of every OpenFlow switch contains multiple flow tables, each flow table containing multiple flow entries. The OpenFlow protocol can thus be viewed as providing the syntax notation for programming a packet-processing pipeline. Switches can run in different OpenFlow modes: switches in pure OpenFlow mode act as one datapath, hybrid VLAN switches are one datapath per VLAN, and hybrid port switches are two datapaths, one OpenFlow and one non-OpenFlow.

Every OpenFlow message begins with the same header structure. This fixed structure serves three roles that are independent of the version of OpenFlow being used. First, the version field indicates the version of OpenFlow to which this message belongs. Second, the length field indicates where this message will end in the byte stream, counting from the first byte of the header. For details of individual matches, please refer to the OpenFlow specification.

To capture OpenFlow messages in practice, you can use Wireshark. Note that you cannot directly filter OpenFlow protocols while capturing; however, if you know the TCP port used, you can capture only the OpenFlow traffic over the default port 6633 or 6653 and filter on that.

Several implementations are available. Beacon provides a framework for controlling network devices using the OpenFlow protocol, plus a set of built-in applications that provide commonly needed control-plane functionality. Openfaucet is a pure-Python implementation of the OpenFlow 1.0 protocol that can be used to implement both switches and controllers. AlliedWare Plus switches can also be used with third-party SDN controllers, such as Faucet. The Stanford University OpenFlow website contains resources for many of the OpenFlow-related projects being worked on there, along with an issue tracker and discussion forum.

Just as the previous sections presented standards and proposals which were precursors to SDN, seeing SDN through a gestation period, the arrival of OpenFlow is the point at which SDN was actually born.
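The fixed OpenFlow header (version, type, length, transaction id) is easy to decode by hand. As an illustration -- the field layout follows the OpenFlow specification, but the sample bytes below are invented -- in Python:

```python
import struct

# Every OpenFlow message starts with the same 8-byte header:
# version (1 byte), type (1 byte), length (2 bytes), xid (4 bytes),
# all in network byte order.
OFP_HEADER = struct.Struct("!BBHI")

def parse_header(data: bytes) -> dict:
    version, msg_type, length, xid = OFP_HEADER.unpack_from(data)
    return {"version": version, "type": msg_type,
            "length": length, "xid": xid}

# An OFPT_HELLO (type 0) from an OpenFlow 1.3 peer (wire version 0x04):
hello = bytes([0x04, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x01])
print(parse_header(hello))  # {'version': 4, 'type': 0, 'length': 8, 'xid': 1}
```

The same header structure is what Wireshark keys on when dissecting traffic captured on port 6633 or 6653.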
Please use minimal netinstall iso (~270mb) when installing Debian.
Boot using the installation media and follow the on-screen instructions (partitions, locales, sudo and so on). When the graphical installation program asks for ”Software selection”, do not check ”Debian Graphical Desktop Environment”. Instead, uncheck all boxes and click ”Next”:
When installation is complete, remove your installation media and reboot the system. After that, log in to the command-line interface:
The first thing to do is add your brand new username to the sudo group. To do so, add a new line to the /etc/sudoers configuration file using the ’visudo’ command (do not edit the file directly using a text editor!):
YOURUSERNAME ALL=(ALL:ALL) ALL
I personally like to have packages that are non-free (non-free), and free ones that depend on non-free ones (contrib). Thus, I add them into my repository list:
$ sudo vi /etc/apt/sources.list
deb http://http.debian.net/debian/ wheezy main non-free contrib
deb-src http://http.debian.net/debian/ wheezy main non-free contrib
Next, install X-server and awesome window manager:
$ sudo apt-get update
$ sudo apt-get install xorg awesome
I also wanted sound (alsa), better text editor (vim) & terminal emulator (urxvt):
$ sudo apt-get install alsa-utils vim rxvt-unicode-256color
By default sound is muted, so I ran alsamixer and unmuted the master channel using ’m’ on the keyboard:
My laptop's built-in speaker makes annoying noises when errors occur (e.g. trying to tab-complete when there are no matches). So I muted it by unloading and then blacklisting the ’pcspkr’ kernel module:
$ sudo modprobe -r pcspkr
$ sudo vi /etc/modprobe.d/nobeep.conf
# Do not load 'pcspkr' module on boot
blacklist pcspkr
Next, copy example awesome wm config file to your home directory:
$ mkdir -p ~/.config/awesome/
$ sudo cp /etc/xdg/awesome/rc.lua ~/.config/awesome/
$ sudo chown YOURUSERNAME ~/.config/awesome/rc.lua
One thing you might wanna change there is the terminal emulator you like to use (the default is xterm, I prefer urxvt):
$ vi ~/.config/awesome/rc.lua
-- This is used later as the default terminal and editor to run.
terminal = "/usr/bin/urxvt"
Next, let's make the X-server automatically start on console login by adding a few lines to our ~/.bash_profile (~/.zprofile if you’re using zsh):
# startx automatically
if [ -z "$DISPLAY" ] && [ "$(tty)" = /dev/tty1 ]; then
  exec startx
fi
.. now that the X-server starts automatically on login, let's also make it start awesome wm as part of the startup process:
# This file is sourced when running startx and
# other programs which call xinit

# Start the window manager
exec awesome
Reboot your system, login, and awesome wm should start automatically:
Everything worked out of the box, at least for me; evdev detected my cordless USB mouse automatically, and the xorg package came with proper graphics drivers, a set of fonts and all that. Of course, you will need to install more programs that you need, such as a web browser, office suite, video player etc. But other than that, everything is ready!
If you come up with any problems, you should check these Debian wiki pages first: GraphicsCard, ALSA, WiFi/HowToUse, Xorg, Fonts.
Now you have quite minimal Debian-system up and running! Have fun 🙂
What I'm looking for is a way to merge the contents of two md5 files (let's call them a.md5 and b.md5) in an automated fashion. Ideally I would like to do this from a bash script, but I am willing to explore alternatives.
I generate a.md5 by doing:
cd a && find . -type f -print0 | xargs -0 md5sum > ../a.md5
Folder a has a number of files and directories within. From another folder b, I similarly generate b.md5.
Here is a snippet of what the contents of the .md5 files will look like:
8f56e29ec16b2d59949c4a95b5607574  ./usr/share/man/man1/infocmp.1.gz
f245d527f4dd1fabab719b64414dccf7  ./usr/share/man/man1/clear.1.gz
c0ae88d29fc406c937c3f64511fa1ab0  ./usr/share/man/man1/modeline2fb.1
3b83017b7acd38a553c3132a0ccb1fd8  ./usr/share/man/man1/fbset.1
83530bf6b1a19ca69022536e7ca810b5  ./usr/share/man/man1/sqlite3.1
At a later time, folder a will have new files added to it (such as log files), and then be overwritten with folder b, so all unique files of folders a and b are present, and for all collisions, the file from a is replaced by the file from b.
Similarly, I would like to merge the contents of a.md5 and b.md5 so that in any collisions the b.md5 value replaces the a.md5 one for a particular file; however, since there are files added prior to the merge that I don't want in the results, I cannot simply recompute a new md5 file.
As a final note, to give some context to the above needs: a and b are each the contents of embedded Linux filesystems; the contents of a are programmed onto a clean file system, and the contents of b are unpacked into the filesystem at run-time. The goal of the md5 is to verify the contents have been deployed without error, and to ignore the files that are generated by various things at run-time. I will be generating the md5 on my PC, and doing the md5sum -c on the embedded system.
As stated above, a bash script would be ideal, but I'm open to other suggestions as long as the process can be automated.
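One possible approach, sketched in Python (the function name is mine, and it assumes standard md5sum output with the digest first and the path after the whitespace): read both listings into a dictionary keyed by path, letting b's entries overwrite a's, then write the result back out in md5sum format.

```python
def merge_md5(a_text: str, b_text: str) -> str:
    """Merge two md5sum listings; on path collisions, entries from b win."""
    merged = {}
    for text in (a_text, b_text):
        for line in text.splitlines():
            if not line.strip():
                continue
            digest, path = line.split(None, 1)  # "<md5>  <path>"
            merged[path] = digest
    # md5sum -c expects two spaces between the digest and the path.
    return "\n".join("%s  %s" % (digest, path)
                     for path, digest in merged.items())

a_md5 = "8f56e29ec16b2d59949c4a95b5607574  ./usr/share/man/man1/infocmp.1.gz"
b_md5 = "0123456789abcdef0123456789abcdef  ./usr/share/man/man1/infocmp.1.gz"
print(merge_md5(a_md5, b_md5))  # b's digest wins for the colliding path
```

If you would rather stay in the shell, the same merge can be expressed as an awk one-liner such as `awk '{md5[$2]=$1} END {for (p in md5) printf "%s  %s\n", md5[p], p}' a.md5 b.md5 > merged.md5` -- note that the awk version splits on whitespace, so it assumes no spaces in paths, and it does not preserve line order (which md5sum -c does not care about).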
Azure data factory pipeline showing RequestingConsent forever
I am unable to fix the "Requesting Consent" status for an Azure Data Factory pipeline querying some simple Office 365 (Graph) data (i.e. the SMTP addresses and UPNs of my colleagues).
Can you suggest something to check?
I am adding 2 pictures showing where "Graph Data Connect" is easily enabled, and the always empty PAM (Privileged Access Management) portal.
New image: Graph Data Connect configurator
New image: Empty PAM portal
As per the error, this is a permission issue: you need to be granted permission before querying Graph for the simple data (i.e. the SMTP addresses and UPNs of your colleagues).
Here are the steps for adding the permissions:
You have to register the application and grant it permission for the reporting API; you must allow your app the appropriate permissions based on the API you wish to access.
Next, navigate to API Permissions in the left column under Manage.
Then click +Add Permission as shown in the bubbles in the snip.
Please grant the permissions Directory.ReadWrite.All and User.ReadWrite.All.
Hi Ipsita. Thank you for your hints. I tried to assign these additional permissions, but the pipeline still waits in "Requesting Consent" and nothing is listed in the "Privileged access requests" page :-(
The application log shows it signs in correctly, but I still get no requests to approve :-(
Hi @Julian, have you provided the Storage Blob Data Contributor role? https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current. Please let me know if it works.
Hi @IpsitaDash-MT. I checked your link. Yes, I already gave my registered application not only the "Storage Blob Data Contributor" role on the whole storage account, but also upgraded it to owner of the resource group. If I am not wrong, prior to assigning the Contributor role, the linked service to the blob storage didn't pass the "test connection" check.
I suppose you meant this kind of procedure.
Unfortunately, this also doesn't fix the problem :-(
At last I found what was missing: it was a licensing requirement, but nothing warned me about this in the PAM page. Simply nothing was listed in it.
If you like, here are the requirements nowadays.
Have a nice day to everyone !
Julian
Thank you so much @Julian for the update as I was checking what else was missing but wasn't sure enough.
It depends on the error you get. Tell me the error, and I can try to help you.
I found another blocking "little" thing: the Exchange Online "data location at rest" MUST BE the same as your Azure resource's region, or you will get errors like "Graph unaccessible".
Hope this can help also
Updated March 28, 2023
What is Linker?
Suppose you are leading a project, and the project is divided into multiple parts on the basis of the resources available and the expertise of each individual. Each of these small portions of the project is equivalent to a small project in itself, each having its own set of inputs and outputs. But to publish the entire project as a single product, you need a way to combine all these pieces of the puzzle into one. In computing terms, this is what linking is, and the component which does it is the linker. Combining multiple files into a single executable is what linking is; the program that takes the files generated by the compiler and turns them into one executable is called a linker.
Why do we Use Linker?
Now, before we jump into understanding the use of a linker, we need to be well versed with a few terminologies so that one will be able to appreciate the use of a linker. First, we will look at the term symbols. In the source code, all the variables and functions are referred to by names. For example, suppose you declare a variable int x. We are essentially informing the compiler to set aside the memory required by an int; from here on, anywhere we reference “x,” we will be referring to the memory location where this variable is stored. Hence, whenever we mention a variable or a function, it is nothing more than a symbol to the program. For us, it is easy to understand symbols, but for the processor, a symbol means nothing more than a specific command it should execute. This command must be translated into machine language from the programming language a user writes. For example, if we use x += 8, the compiler should convert this to something like “increase the value at the memory of ‘x’ by 8 units”.
The next term is symbol visibility. This matters most when two or more files need to share a variable. For this, we take the help of keywords that have a specific meaning in a programming language. For example, in C, we can use the extern keyword to reference a variable defined in another file, and the extern declaration will make sure the name is resolved to the variable in that other file. Hence, it increases the visibility of a variable which is declared in only one file.
The entire linking process is carried out in two parts. The first is collecting all the files into a single executable file, i.e., reading all the symbol definitions and noting the unresolved symbols; then, in the next step, the linker goes through each of the unresolved symbols and fixes them up to the right places. After this process, there should be no unresolved symbol, or else the linker will fail. This process of linking is known as static linking, because all these checks are performed during the compilation of the source program. A more advanced concept is dynamic linking, where variables and symbols are linked at run time; the executable image then depends on an additional file, a shareable library.
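To make the two-pass idea concrete, here is a toy model in Python. It is purely illustrative -- real linkers work on object-file sections and relocations, not dictionaries -- but it captures the collect-then-resolve structure described above:

```python
# Each "object file" is modeled as a dict of defined symbols (name -> address)
# plus a list of symbol names it references but does not define.
def link(objects):
    defined = {}
    # Pass 1: collect every symbol definition across all object files.
    for obj in objects:
        for name, addr in obj["defines"].items():
            if name in defined:
                raise ValueError("duplicate symbol: " + name)
            defined[name] = addr
    # Pass 2: every reference must now resolve to some definition;
    # any leftover unresolved symbol is a link error.
    unresolved = sorted({ref for obj in objects for ref in obj["refs"]
                         if ref not in defined})
    if unresolved:
        raise ValueError("undefined symbols: " + ", ".join(unresolved))
    return defined

main_o = {"defines": {"main": 0x00}, "refs": ["helper", "x"]}
util_o = {"defines": {"helper": 0x40, "x": 0x80}, "refs": []}
print(link([main_o, util_o]))  # every reference resolved -> link succeeds
```

Dropping util_o from the list reproduces the familiar "undefined symbol" failure, which is exactly the case where a real linker aborts.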
Now let us understand the necessity of using linkers. If you are working on a huge project consisting of millions of lines of code and, due to customer requirements, you have to change only a portion of one file, recompiling all the millions of lines of code again would result in unnecessary time loss. Modern optimizing compilers perform heavy code analysis and memory allocation and can take even longer. This is where linkers come into play: most third-party libraries are rarely affected, and the files affected by a code change are few as well. Hence, when compilation is performed, the compiler creates an object file, and with each file change, only the corresponding object file gets updated. The job of the linker is then to gather these object files and use them to generate the final executable. Third-party libraries are used via shared libraries.
In recent times, we have seen that not all operating systems create a single executable. Rather, they prefer to run components that keep related functions together. For example, Windows uses DLLs for this task. This helps in reducing the size of the executable, but in turn increases the dependency on the existing DLLs. DOS uses .OVL files.
Importance of Linker
In recent times, with compilers evolving constantly, we have begun writing more optimized code every other day. Though linkers come at the very end, gluing together the object files, one might have the notion that the technology hasn't changed much. But with recent advancements, linkers have taken on even higher importance.
With linker hardening, linkers are able to remap non-dynamic relocation sections to read-only. Having them read-only hardens the running program, since relocation data can no longer be overwritten at run time. Another point of importance is that linkers can use the intermediate representation stored in object files to identify sites where inlining may prove beneficial, and thus help with link-time optimization.
With the discussions above, we have an idea of what advantages linkers bring in place for developers, and here we would formally put them in points.
- There need not be any duplication of the library required, as the frequently used libraries will be available in one location. An example of such an advantage is having standard system libraries.
- A library that is linked dynamically can be corrected or upgraded for every program that uses it by changing it in a single place. Programs that link it statically, however, have to be manually re-linked.
- Conversely, static linking avoids "DLL hell," the set of problems that arise when multiple programs depend on conflicting versions of shared DLLs.
In conclusion, the crux of this entire article is that the linker is an inevitable part of the process of compiling code, and recent developments have increased the level of optimization it can apply many times over. Linkers are the pieces on the chessboard that glue the different source files into a single unified program that solves a business problem!
I was looking around for some details on the xml node available for an installable DNN module labeled compatibleversions. I was able to find a forum post in our private team forums where Shaun talks about this (from back in November) and I figured I would share with the community. This node can be placed in the .dnn manifest file, used by the DotNetNuke module installer, and can set the minimum required DotNetNuke core version a module requires.
For example, the next major forum release requires DotNetNuke core 4.4.0 or greater. In order to handle the potential support issues which would arise from users trying to install in DotNetNuke 4.3.5, for example, this node allows me to specify in the .dnn manifest file the lowest compatible core DotNetNuke version my module requires.
To use this in my module, I structured my .dnn manifest file as such:
<?xml version="1.0" encoding="utf-8" ?>
<dotnetnuke version="3.0" type="Module">
<description>The core forum module for DotNetNuke.</description>
As you can see above, I have highlighted the area that we are discussing here (I also didn't post the remaining section of the example .dnn manifest file). This node uses a regular expression to set the minimum DotNetNuke core version. You can test for a proper match at http://www.regextester.com/. The above example requires DotNetNuke 4.4.0 or greater. (Remember that DotNetNuke uses the xx.xx.xx format)
I believe this support has existed since DotNetNuke 3.3.6/4.3.6. (If not those, I know the x.x.7 series has this capability.) If you add this node and someone tries to install in a version prior to those, the check will not be done, but the module install will fail anyway, because the manifest schema in those versions had no idea what compatibleversions was. If someone tried to install the example forum in a DotNetNuke core 4.3.7, a message would be displayed about the core version being incompatible. If the version is compatible, however, the install process will complete as before.
Just a note, if you wanted to support a module on both 3.x and 4.x you can do something like:
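A hedged sketch of what such a node might look like — the element name follows the post, but the regular expression itself is my own illustration, not the one from the original manifest:

```xml
<compatibleversions>^03\.0[3-9]\.[0-9][0-9]$|^0[4-9]\.[0-9][0-9]\.[0-9][0-9]$</compatibleversions>
```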
This means that 03.03.00 - 09.03.99 will pass and 02.02.02, 03.01.00 would fail.
Hope this comes in handy!
const parse = require('../src/template/parse')
function getParseResult(content) {
const startStack = []
const endStack = []
const textStack = []
parse(content, {
start(tagName, attrs, unary) {
startStack.push({tagName, attrs, unary})
},
end(tagName) {
endStack.push(tagName)
},
text(content) {
content = content.trim()
if (content) textStack.push(content)
},
})
return {startStack, endStack, textStack}
}
test('parse template', () => {
let res = getParseResult('<div><slot/></div>')
expect(res.startStack).toEqual([{tagName: 'div', attrs: [], unary: false}, {tagName: 'slot', attrs: [], unary: true}])
expect(res.endStack).toEqual(['div'])
expect(res.textStack).toEqual([])
res = getParseResult(`
<div><slot/></div>
<div id="a" class="xx">123123</div>
<input id="b" type="checkbox" checked url=""/>
<div>
<ul>
<li><span>123</span></li>
<li><span>321</span></li>
<li><span>567</span></li>
</ul>
</div>
`)
expect(res.startStack).toEqual([
{tagName: 'div', attrs: [], unary: false},
{tagName: 'slot', attrs: [], unary: true},
{tagName: 'div', attrs: [{name: 'id', value: 'a'}, {name: 'class', value: 'xx'}], unary: false},
{tagName: 'input', attrs: [{name: 'id', value: 'b'}, {name: 'type', value: 'checkbox'}, {name: 'checked', value: true}, {name: 'url', value: ''}], unary: true},
{tagName: 'div', attrs: [], unary: false},
{tagName: 'ul', attrs: [], unary: false},
{tagName: 'li', attrs: [], unary: false},
{tagName: 'span', attrs: [], unary: false},
{tagName: 'li', attrs: [], unary: false},
{tagName: 'span', attrs: [], unary: false},
{tagName: 'li', attrs: [], unary: false},
{tagName: 'span', attrs: [], unary: false}
])
expect(res.endStack).toEqual(['div', 'div', 'span', 'li', 'span', 'li', 'span', 'li', 'ul', 'div'])
expect(res.textStack).toEqual(['123123', '123', '321', '567'])
res = getParseResult('<div><span>123</div>')
expect(res.startStack).toEqual([
{tagName: 'div', attrs: [], unary: false},
{tagName: 'span', attrs: [], unary: false}
])
expect(res.endStack).toEqual(['span', 'div'])
expect(res.textStack).toEqual(['123'])
res = getParseResult('<div>123</h1>')
expect(res.startStack).toEqual([
{tagName: 'div', attrs: [], unary: false}
])
expect(res.endStack).toEqual(['div'])
expect(res.textStack).toEqual(['123'])
})
test('parse wxs', () => {
let res = getParseResult(`
<div>123</div>
<wxs module="m1">
var msg = "hello world";
module.exports.message = msg;
</wxs>
<view>{{m1.message}}</view>
<div>321</div>
`)
expect(res.startStack).toEqual([
{tagName: 'div', attrs: [], unary: false},
{tagName: 'wxs', attrs: [{name: 'module', value: 'm1'}], unary: false},
{tagName: 'view', attrs: [], unary: false},
{tagName: 'div', attrs: [], unary: false}
])
expect(res.endStack).toEqual(['div', 'wxs', 'view', 'div'])
expect(res.textStack).toEqual(['123', 'var msg = "hello world";\n module.exports.message = msg;', '{{m1.message}}', '321'])
res = getParseResult('<wxs></wxs>')
expect(res.startStack).toEqual([
{tagName: 'wxs', attrs: [], unary: false}
])
expect(res.endStack).toEqual(['wxs'])
expect(res.textStack).toEqual([])
})
test('parse comment', () => {
const res = getParseResult('<!-- 123 -->')
expect(res.startStack).toEqual([])
expect(res.endStack).toEqual([])
expect(res.textStack).toEqual([])
})
test('parse without options', () => {
let catchErr = null
try {
parse('<div>123</div>')
} catch (err) {
catchErr = err
}
expect(catchErr).toBe(null)
})
test('parse error', () => {
function getErr(str) {
let catchErr = null
try {
getParseResult(str)
} catch (err) {
catchErr = err
}
return catchErr && catchErr.message || ''
}
expect(getErr('<div')).toBe('parse error: <div')
expect(getErr('<wxs>123')).toBe('parse error: 123')
expect(getErr('<!-- 123')).toBe('parse error: <!-- 123')
expect(getErr('<div>123</%%^6.....>')).toBe('parse error: </%%^6.....>')
})
Why can a single sine wave signal be used to send digital data but a composite signal is needed to send human talk,etc.?
I can understand the following text (Data Communications and Networking: 4th Edition, Berhouz Forouzan, Ch.5, page 179) which says that a property of a single sine wave carrier signal (phase, frequency or amplitude) can be changed to represent the pattern in digital data:
But I fail to understand why something like human voice (as in a telephonic conversation) can't be similarly mapped onto a single sine wave signal by changing one of the characteristics of the wave (frequency, for example). Doesn't human voice, at any instant of time, have a particular frequency and amplitude? Why can't that be represented by modulating a single sine wave? I am asking this because the same book says that in order to transfer human voice etc., we need a composite signal having many constituent sine waves of many frequencies:
Please explain to me in simple terms why this is so. What's different between transmitting something like human voice on one hand and a digital data pattern on the other? And what other "stuff," like human talk, necessitates the use of a composite signal?
NB: I will appreciate if you can also tell IF human talk can be sampled, converted to a digital pattern, and THEN transmitted over a SINGLE sine wave signal. Thank you.
@WillDean :-) I am afraid bro that the author is a renowned professor from De Anza college. Leaving that aside, can one conclude that if once we alter any attribute of a single and simple carrier sine wave, it becomes a composite signal?
You realize that De Anza College is a community college, right? A professor there could be a great teacher who just isn't interested in research, or they could be not qualified to teach at a 4-year school.
Sine waves of higher, audible frequencies do sound like a buzz @willdean
There are several confusions going on here. I can see what the text you cite is trying to say, but also how it can be easily misinterpreted.
The first section is talking about how to modulate a single sine wave (let's call that the "carrier"), to carry another signal. In the text's example, this other signal is digital, but it doesn't need to be.
AM radio is a great example of modulating a carrier's amplitude to carry an audio signal. FM radio is the same except it modulates the frequency. Phase modulation is also used elsewhere, so that part of the first quote is all true.
The misleading part is giving the impression the result is still just a "single sine wave". It's not. As soon as you change something about a sine wave, you no longer have a single sine wave. This may sound unintuitive, but an AM radio carrier of 1 MHz modulated with a 1 kHz audio signal is actually the combination of three sine waves, at 999 kHz, 1.000 MHz, and 1.001 MHz. Getting into why that is true is beyond the scope of this answer. You'll either have to learn a bunch of Fourier analysis or trust me on this.
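The three-sine-wave claim is easy to verify numerically. Here is a small sketch (frequencies scaled down from the radio example so one second of samples suffices; the 0.01 threshold is just an illustrative cutoff):

```python
import numpy as np

fs = 100_000                  # sample rate, Hz
t = np.arange(0, 1, 1 / fs)   # one second of samples
fc, fm = 10_000, 1_000        # carrier and modulating tone, Hz

# Amplitude-modulate the carrier with the 1 kHz tone
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(am)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Only three components stand out: the carrier plus the two sidebands
print(freqs[spectrum > 0.01])   # the carrier at 10000 Hz, sidebands at 9000 and 11000 Hz
```

Expanding the product with a trig identity shows why: cos(a)·cos(b) splits into components at the sum and difference frequencies, which is exactly what the FFT finds.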
The second part correctly points out that a true "single sine wave" can't carry any dynamic information. This is again part of the semantics of "single sine". A true single sine doesn't vary in frequency, amplitude, or phase. If it did, you can show by Fourier analysis that it's not really a single sine anymore, just like the AM carrier modulated with 1 kHz wasn't a single sine anymore.
Basically, a periodically changing sine wave can be mathematically decomposed into a set of separate single sines, each with their own amplitude, frequency, and phase. There is therefore no such thing as a changing single sine. This is why a true single sine doesn't carry any dynamic information.
This was my first question on this forum and I appreciate that you took the time to answer a question that you must have found trivial. You have clearly delineated beyond which part of the answer one needs stuff like Fourier analysis to grasp it all. Thanks for your time sir.
Whoever downvoted this: It would be useful to know what you think is wrong, misleading, or badly written. Silent downvotes without obvious cause do this site a disservice.
You can't send data or voice over a SINGLE sine wave signal. You have to modulate it by changing the frequency or amplitude (or phase).
A single sine wave contains a single frequency and amplitude that doesn't vary with time, correct? In the frequency domain it's a single line with no width.
Therefore you have 2 pieces of information that don't vary with time. Voice and data must vary with time to transmit information.
By modulating the sine wave's amplitude or frequency or phase with time you can transmit that information. But at that point it is no longer a single sine wave, it's a time varying composite of the information you are trying to send with the "carrier" sine wave.
So no, you can't sample human voice and send it over a single sine wave. Of course you could use a single sine wave as a carrier and modulate it however you want to carry the digital data, but then it's no longer a single sine wave.
Referring to your first two lines, does it mean that once we modulate a single sine wave signal, it becomes a composite signal(theoretically one composed of many sine waves)? I am not from an electrical/electronics engineering domain and hence missing out the finer details.
Thanks a lot, you have already answered the above. I have understood what you said. Thanks a ton.
As other answers said, neither voice nor digital data can be sent over a "single sine wave".
Either one can be transmitted over a modulated sine wave.
I fail to understand why something like human voice (as in a telephonic conversation) can't be similarly mapped onto a single sine wave signal by changing one of the characteristics of the wave (Frequency for example).
Of course a voice can be transmitted by frequency modulation. Whenever you listen to FM radio, that's exactly how the announcers' voices are transmitted to you. Whenever you made a long-distance telephone call before 1980 or so, it was likely your voice was transmitted this way also.
will appreciate if you can also tell IF human talk can be sampled, converted to a digital pattern, and THEN transmitted over a SINGLE sine wave signal.
Yes, this is also possible. For example, compact discs store sounds including voices in digital form, and when they are read back the pattern of bits on the disc is used to modulate a laser beam (an example of an electromagnetic sine wave) before they are converted back to audio signals. Also, whenever you make a long-distance telephone call nowadays, your voice is almost certainly digitized and modulated onto a carrier (and combined with 1000's of other voice signals) to be transmitted over the trunk lines.
Thanks for taking the time to give a detailed answer. Further clarified my understanding.
Consider a single sine wave: as already noted it is a single line in the frequency domain.
Now we will add some information. That could be voice, digital data - anything. This information will usually (always in practice) have some bandwidth, but the instantaneous signal will be at some frequency f(x) at some amplitude A(x).
In amplitude modulation (because it is the simplest), we will have, at any instant, a composite signal of f(carrier) +/- f(information). I am not going to derive that here.
As this information signal varies with time, we will get f(carrier) +/- f(information) where the information is a band of signals, when viewed over time.
So if we start with a simple sinusoid (the carrier) and modulate with some complex information signal H(s), we end up with f(carrier) modified by H(s) in the frequency domain.
The simple sinusoid carries no information and this might be key to understanding the issue. The modulated signal contains a known signal - the carrier - (so we can find it in the frequency domain) that is carrying an information signal.
So: the simple sine does not carry any information except where to find it in the frequency domain. We 'piggyback' the information onto it.
The original sinusoid still exists; we have added information to it.
Note: The use of the term information is deliberate, and as further reading for the OP, the definition of information is indistinguishable from that of noise.
All the other answers here are accurate - I am simply trying to answer from a different perspective.
Human speech and music, for example, are made up of several sine waves mixed together. In the case of voice, this can be sampled at a minimum of twice the highest frequency of speech (typical sample rate of 8 kHz for an analog telephone line; see the Nyquist-Shannon sampling theorem). The number of bits per sample is usually a minimum of 8. These bits represent the amplitude of the signal. This scheme is called pulse code modulation (PCM).
This diagram shows what sampling might look like using 3 bits per channel:
which would not be enough to provide intelligible speech, but shows the idea.
The combination of 8 kHz sampling and 8 bits per sample means the required data rate is 64 kbit/s.
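The arithmetic above can be sketched in a few lines (the 440 Hz test tone and variable names are my own illustration, assuming the telephone-line figures from the text):

```python
import numpy as np

fs = 8_000                        # samples per second (telephone quality)
bits_per_sample = 8

t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal
tone = np.sin(2 * np.pi * 440 * t)

# Quantize to 8-bit unsigned PCM: 256 amplitude levels
pcm = np.round((tone + 1) / 2 * 255).astype(np.uint8)

print(fs * bits_per_sample)       # 64000 bit/s
```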
I have written code that takes 8-bit PCM signals off of a SD card in blocks of 512 bytes at a time, and plays them back using a 125 µs interrupt (corresponding to 8 kHz), to play back voice messages in an embedded system. Running the two tasks (reading the SD card and playing back the samples in the interrupt routine) simultaneously) pretty much maxed out the 8051 I was using.
8 kHz works fine for voice, but is not fast enough for music. CDs use a sampling rate of 44.1 kHz (slightly more than twice the 20 kHz upper limit of recorded music), with 16-bit samples. The rather strange sampling frequency of 44.1 kHz was chosen because it is compatible with both NTSC and PAL video systems.
These digital samples are then used to modulate a sine wave carrier frequency, in one of several ways:
(a) is the digital signal
(b) is amplitude modulation (AM)
(c) is frequency modulation (FM)
(d) is phase modulation (PM)
It is clear that the end result is no longer a simple sine wave of one frequency.
Can you please elaborate a little more, especially the last line (I think that holds the core of the answer). Can we please look into the first line of "John D" answer below
@Meathead I added additional info to my answer.
Whomever downvoted this, it would be nice to know why.
What is your recommended solution for our Customer Service mobile app? We have a live chat that caters all the concerns of our customers but the problem is, we just have a limited number of agents available so definitely, some customers need to wait. What do you think is the best solution to inform them that the queue is quite long and they need to wait (in a way that they will not get mad and understand the situation)?
You should consider a few strategies for managing users' attention (and possible frustration):
Use a predictive system for the length of time until an agent might help them.
This can be something you build, or a SaaS service you 'rent', or an extension of your existing software. You can outsource the work for it to your chat vendor, or another software shop, unless that prediction is based on something unique to your business.
Remember not to build something that isn't your core product, if you can avoid it!
Offer an asynchronous option for agent follow-up.
Let them provide email, phone, or other ways to get back to them when an agent is available. Some people like SMS. Giving them this option can increase trust in your company's ability to meet their needs. Don't break trust and turn it into a text-spam engine!
Get buy-in for the things you want to follow up about on the async channel. If you're unsure of how to do it, start with a live person reaching out (using an iterate-able script, of course).
While you want all users / visitors to get the help they need, you also want to help the people that help you most. Work with Sales, biz-dev, BI, or any other leadership to learn which kinds of users need to be kept the happiest, and which are the 'growth segments' for the company.
Find a mix of passive + active data, to pre-filter people into priority queues. Passive data can be site browsing or anything else prior to them starting a chat. Active data can be anything they're saying in the chat, to a live person or to a bot.
Remember to work with the team that handles your on-site user behavior tracking (e.g. via Google Analytics or whichever tool). Have a session label added when the user starts the chat session. From there, you can watch for events that occur, like browsing away from the chat window to other parts of the site, or a higher rate of pageviews than typical. These things can demonstrate frustration with the chat wait system, and a need for other means of engagement.
A chat bot can guide them to find possibly-related articles. If the bot has any whimsicalness programmed in, it can make users a little less stressed. Remember that affective HCI tells us to follow a user's emotion. So if they show the opposite affect to the whimsical bot's, make sure the bot can disable that affectation, so you don't lose the user!
Try to make the 'view' for help docs a modal that can expand/collapse while keeping the chat active, and ideally keeping the origination page 'under' the chat and help modals. Leaving the 'source of the problem' can generate anxiety; let them feel they can hang onto it.
Modal docs nav will give them something to do while waiting for live help to respond. They can get anxious or distrustful if they have to leave the chat flow to read a doc. It increases the chances that no one will come to help them. Or they might just leave if they cannot find something in docs!
Ideally this kind of hover-docs is synced with a robo-help. The live help chat can be like a search, which a bot listens to and chimes in on until live help arrives.
You could have developers write code that estimates how long the queue will take and display that to the user. For that you will need to record the duration of each chat, compute an estimated chat duration, and multiply it by the number of users already in the queue. This might be hard, because some chats take a minute and some take maybe 20 minutes.
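A minimal sketch of that estimate (the function name and numbers are hypothetical; real data would come from your chat logs):

```python
import math
from statistics import mean

def estimated_wait_minutes(recent_durations, position_in_queue, num_agents):
    # Average of recent chat durations times the number of chat "rounds"
    # that must finish before this user reaches an agent.
    avg = mean(recent_durations)
    rounds = math.ceil(position_in_queue / num_agents)
    return avg * rounds

# 5th in line, 2 agents, recent chats averaged 4 minutes:
print(estimated_wait_minutes([3, 5, 4], 5, 2))  # → 12.0
```

Even a rough number like this gives users something concrete, which is usually less frustrating than an open-ended wait.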
Or you could just display the number of users already in the queue with a message like: "5 users in queue, please wait or use our contact form to get a reply within 24 hours". It's important to give the user an alternative way to contact you in that moment if they do not want to wait in line.
What I mean is, how do we position things (the camera and the objects we wish to draw) so that when we give something a size of 10 (f.e. a BoxGeometry with an X-axis dimension of 10) the box is 10 screen pixels wide?
Another question might be: If we have a camera, how far from the camera on the camera’s Z axis is the plane in which a size of 1 on the X or Y axis corresponds to exactly 1 screen pixel? How might we figure this out?
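One way to answer that second question for a perspective camera: the frustum's visible height at distance d is 2 · d · tan(fov / 2), so setting that equal to the viewport height in pixels gives the distance at which 1 world unit spans 1 pixel. A sketch (the function name is my own, and fov is assumed to be the vertical field of view in degrees, as in Three.js' PerspectiveCamera):

```javascript
// Distance from the camera at which one world unit maps to exactly one
// screen pixel, for a perspective camera with vertical FOV `fovDeg`
// rendering into a viewport `heightPx` pixels tall.
function pixelPerfectDistance(fovDeg, heightPx) {
  const fovRad = (fovDeg * Math.PI) / 180;
  return heightPx / (2 * Math.tan(fovRad / 2));
}

// e.g. a 45° camera and an 800px-tall canvas:
console.log(pixelPerfectDistance(45, 800)); // ≈ 965.69
```

Placing the scene's content plane at that Z distance is what makes a 50-unit mesh line up with a 50px DOM element, as in the demo below.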
So, for example, following is an example where the <div> element is positioned using top: 50%; left: 50%; transform: translate(-50%, -50%), which is using the DOM coordinate system where the point (0, 0, 0) starts at the top left of the viewport and positive Y goes downward. When you start dragging, you’ll see the teal square that was hiding underneath the pink square. Both of them are rotating in unison, and are perfectly aligned until you begin dragging the Three.js camera. The pink <div> has a size of 50px width/height, and the Three.js Mesh has a size of 50 width/height, so the Three.js Mesh is sized in pixels:
@prisoner849, the use case is, for example, what I described here (also listed above). Basically, to easily make traditional web content enhanced with WebGL.
The following is a sample that shows combination of DOM with WebGL using a combination of Three.js’ CSS3DRenderer and WebGLRenderer:
What you notice is that the squares are DOM elements (see them in the element inspector), the shadow of the WebGL sphere is cast onto the elements, and the moving lights also shines on these elements. If you run it a few times, you’ll notice that sometimes the sphere also intersects with the elements, as if both are in the same 3D world.
I’m making an HTML API to make it super easy (abstracting away “mixed mode” behind the HTML interface). In my case, I’m not going to be using the CSS3DRenderer, as I have my own CSS3D renderer and I will be mapping the WebGL objects (Three.js) to the DOM coordinate space (rather than mapping DOM elements into Three.js coordinate space like CSS3DRenderer does), which is why I opened this thread here. I’m going to post my full working solution when it’s ready over at How can we make Three.js scenes use DOM-style coordinates? (that thread includes an HTML snippet of what mixing DOM with WebGL will look like).
Here’s a sample scene without any “mixed mode” (only WebGL) because mixed mode won’t be ready for a few weeks:
I’ve been polishing up my lib. Up next I’ll have some practical examples (on a new website with new name and full documentation). Some of those example will include “web pages” with 3D content that interacts on the space of the page (f.e. 3D buttons with shading, characters that can jump over the buttons, etc, that main use case being 3D enhancement of traditional web content). Other use cases will include, for example, first person shooter where you can walk up to a computer, and interact with a real web page (or a web app made specifically for interacting with it in the game).
I’ve started to add more examples, but the examples like I described above are coming soon…
How do I change permissions on a Linux home directory?
To change directory permissions in Linux, use the following:
- chmod +rwx filename to add permissions.
- chmod -rwx directoryname to remove permissions.
- chmod +x filename to allow executable permissions.
- chmod -wx filename to take out write and executable permissions.
How do I restrict SFTP users home directory in Linux?
The simplest way to do this, is to create a chrooted jail environment for SFTP access. This method is same for all Unix/Linux operating systems. Using chrooted environment, we can restrict users either to their home directory or to a specific directory.
How do I stop other users from accessing my home directory Ubuntu?
Scroll down to the DIR_MODE setting in the adduser.conf file. The number is set to "0755" by default. Change it to reflect the permissions (r, w, x) you want to grant to the different types of users (owner, group, world), such as "0750" or "0700" as discussed earlier.
How do I chroot a user to a directory?
Restrict SSH User Access to Certain Directory Using Chrooted Jail
- Step 1: Create SSH Chroot Jail. …
- Step 2: Setup Interactive Shell for SSH Chroot Jail. …
- Step 3: Create and Configure SSH User. …
- Step 4: Configure SSH to Use Chroot Jail. …
- Step 5: Testing SSH with Chroot Jail. …
- Create SSH User’s Home Directory and Add Linux Commands.
How do I restrict users in Linux?
However if you only want to allow the user to run several commands, here is a better solution:
- Change the user's shell to restricted bash: chsh -s /bin/rbash <username>
- Create a bin directory under the user's home directory: sudo mkdir /home/<username>/bin && sudo chmod 755 /home/<username>/bin
How do I change owner to root in Linux?
chown is the tool for changing ownership. Since the root account is the superuser, to change ownership to root you need to run the chown command as superuser, with sudo.
How do I change the default permissions in Linux?
To change the default permissions that are set when you create a file or directory within a session or with a script, use the umask command. The syntax is similar to that of chmod (above), but use the = operator to set the default permissions.
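As a quick sketch of how umask interacts with file creation (file names are illustrative, and the GNU `stat -c` flag assumes a typical Linux system):

```shell
umask 027                 # owner: all; group: read/execute; others: nothing
touch report.txt          # files start from 666; masked by 027 -> mode 640
mkdir logs                # directories start from 777; masked by 027 -> mode 750
stat -c '%a %n' report.txt logs
```

The mask subtracts permission bits from the defaults, so a stricter umask yields more private files without touching chmod at all.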
How do I FTP users to jail?
Set chroot jail to default $HOME directory for only a few of local users
- In VSFTP Server configuration file /etc/vsftpd/vsftpd.conf, set: …
- List users which required chroot jail in /etc/vsftpd/chroot_list, add users user01 and user02: …
- Restart vsftpd service on VSFTP Server:
How do I restrict FTP users to my home directory?
To restrict FTP users to a specific directory, you can set the ftpd.dir.restriction option to on; otherwise, to let FTP users access the entire storage system, you can set the ftpd.dir.restriction option to off.
How do I see users in Linux?
How to List Users in Linux
- Get a List of All Users using the /etc/passwd File.
- Get a List of all Users using the getent Command.
- Check whether a user exists in the Linux system.
- System and Normal Users.
What does chmod 700 do?
chmod 700 file
Protects a file against any access from other users, while the issuing user still has full access.
Where is Ubuntu home directory on Windows?
Go inside the home folder and you can find your Ubuntu user account's home folder. How can I access the Windows System Drive in Bash? In the Linux/Ubuntu Bash directory structure, the Windows 10 system drive and other connected drives are mounted and exposed in the /mnt/ directory.
Is it possible to send a gift anonymously or at least without them knowing beforehand?
I want to send someone a gift as I may or may not be involved in a secret santa on steam.
Ideally I don't want them to know I sent it at all (the secret part) but I guess steam will tell them who sent it. Failing that, I'd like them to not know it was me until the gift notification turns up in steam. I'll send a custom message so they know it's part of the secret santa.
I'm guessing people aren't going to want to post email addresses around, so is it possible to send a gift via steam without their email address and not adding them as a friend?
I have a link to their profile.
You could try making a new steam account and adding them from there.
@victoriah that usually doesn't work, since new steam accounts don't have any privileges unless they own at least one (paid) game
I tried sending myself a gift just now - the email contains the sender's email address, indeed. What you can do is send a gift to your own secondary email address, and send them the URL for redeeming the gift (the URL you receive by emailing the gift to yourself) . Hopefully you have some way of contacting them other than their steam profile; for example, give them the url written on a slip of paper.
Caution: The URL contains the email you sent the gift to. If you don't want the receiver to be able to guess who you are, and the secondary email you're sending to is obviously related to you, create a new email account to send to. Example URL, email highlighted:
https://store.steampowered.com/account/ackgift/1F631467BB7E?redeemer=Fadeway%40yahoo.com
With this method, the first time they learn the sender's name will be when they enter the URL into a browser:
If there's no way to transmit the URL to them without posting on their steam profile, that's a lost cause, unless you get a third person to act as an arbiter/organizer who friends everybody and distributes the links while keeping it a secret who sent which one.
Probably too late for the person asking the question, but the third-party organizer is a good way of doing it.
Here's the article about gifting: https://support.steampowered.com/kb_article.php?ref=6262-QXCN-0755
Unfortunately, you can't send a gift anonymously.
No matter how you send it from your account, your recipient will see your Steam profile name and/or the e-mail address that your account was registered with. Even if you change your community profile name, they can look at the previous names you've used.
I get that I can't send it anonymously, but can I make it so the first they know about it is when it's received (not when the get a friend request)?
The information below was correct at the time of writing, unfortunately it is no longer true. Gifts purchased now can no longer be stored in inventory or re-gifted onwards.
If this is part of a secret santa then presumably there is some kind of organiser who knows the pairings.
If the organiser has a steam account then, as far as I can tell, you should be able to gift the game to them, and they should be able to receive it into their inventory (NOT their library) and gift it onwards to the final recipient.
Not possible for games, however it is possible to gift someone Steam Wallet credit using Steam Gift Cards or Wallet Codes.
See: https://store.steampowered.com/account/redeemwalletcode
You can use usendme.net. You don't have to know all the details about the person, but you do have to be able to tell them you have sent a gift. Otherwise you don't need to know anything about them, not even their name.
Finally iTunes is going to offer music downloads that will not be crippled by DRM. For a start only songs by EMI artists will be available without DRM, but I think as soon as other labels see that there is still money to earn they will jump in, too. Or at least I hope that most will. Universal probably won't, since iPod owners are thieves anyway
After typing several months of bank statements into Gnucash once again, I decided it was time for HBCI homebanking. The first step was to find an affordable smartcard reader that was supported by linux.
The Cherry ST2000U was available at Amazon for 40 Euro (approx. 50$) and was on the list of supported devices of the ccid driver.
Anyway, the installation was not straightforward and I did not find an existing howto for this, so I want to give a short overview of what I did to get the reader running.
First I checked if I had all the correct USE flags set on my system. For HBCI homebanking the flag “hbci” is required. To use a crypto smartcard for gnupg the flag “smartcard” is also required.
I wanted to use the newest versions of all packages which are at the time of writing:
All of those packages are masked for x86, so the following lines have to be added to /etc/portage/package.keywords:
Now start the emerge by doing
emerge ccid libchipcard
First of all we need to copy the default configs to the correct places. For USB readers no special configuration is needed:
cp /etc/chipcard3/server/chipcardd3.conf.example /etc/chipcard3/server/chipcardd3.conf
cp /etc/chipcard3/client/chipcardc3.conf.example /etc/chipcard3/client/chipcardc3.conf
The next step is to copy the ccid_ifd driver to the drivers directory of libchipcard:
cp /usr/lib/readers/usb/ifd-ccid.bundle/Contents/Linux/libccid.so.1.3.0 /usr/lib/chipcard3/server/lowlevel
Now check if the driver is found. Running chipcardd3 addreader --dtype list should list a lot of drivers. Most of them will be marked with [not installed] but the very first "ccid_ifd" should not have this label.
If the driver was found we can start the chipcard server for the first time - without attaching the card reader!
After starting the daemon you may now attach the card reader. After a few moments chipcardd3 will print some debug lines while it detects the new hardware. The last line should look like this:
Device UsbRaw/046a/003e is not a known reader
Unfortunately the ccid driver lists the Cherry ST-2000U as a supported device but does not have it included in the config file. To change this, open /usr/share/chipcard3/server/drivers/ccid_ifd.xml in an editor and look for the entry of the "Cherry ST-1044u". The setup of the ST-1044U and the ST-2000U is identical, so we can simply copy that part and change the names and usb ids. Add the following lines right behind the ST-1044u entry:
<reader name="ccid_cherry_st2000u" busType="UsbRaw" addressType="devicePath" devicePathTmpl="usb:$(vendorId:04x)/$(productId:04x):libusb:$(busName):$(deviceName)" vendor="0x046a" product="0x003e" >
After saving, remove the card reader, restart the chipcard-daemon, attach the reader again and the output will show that the reader is detected and configured. As a last check you can run chipcard3-tool list. The output should look like this:
- auto1-ccid_cherry_st2000u (ccid_cherry_st2000u, port 0)
That's it, the reader works now. The first thing I did was to insert my Geldkarte and run geldkarte3 loaded to see if the amount was correct.
While instant messaging has become one of the major communication tools besides email, security is almost zero. Skype is the only system that boasts encryption - but nobody really knows what the Skype code is up to.
Whenever other im systems like ICQ, AIM, yahoo messenger and msn are used, the conversation can be spied on with a simple packet sniffer. And perhaps even more important: the messaging service gets the cleartext of all conversations.
While some clients have encryption plugins using rsa or gpg encryption, most of these plugins like pidgin-encryption (formerly gaim-encryption) are limited to one im client. For a long time I had been looking for a solution that would work from my gnome pidgin/gaim to a friends kde kopete setup. A solution working cross-platform from windows to linux would be even better.
The Off-the-Record Messaging project aims to provide a solution by supplying a library that does all the encryption and signing without depending on a specific instant messaging client. Plugins for various clients connect the library to the requested platform. As far as I have found out there are plugins for kopete and pidgin/gaim. Mac OSX and trillian users can use a proxy for icq/aim.
Setting up the plugin for pidgin is straight-forward: in the plugin options we can create a keypair for each IM account. In the message window a new button beside the input box will appear. A single click tries to initiate a secure connection to the other side. If everything works correctly the button will change and the following messages will be encrypted.
Unfortunately I'm not a user of Claws, but this plugin should be included in every mail program: AttachWarner. The problem is not complex to solve, but the idea is great: if the email contains typical phrases indicating an attachment ("please find the attached file") and no attachment has been added, a warning will appear when you try to send the mail.
Right now evolution offers support for usenet newsgroups via nntp, but no support for rss. As newsgroup usage is going down and rss is _the_ news source for most of the internet-savvy now, it is good to see rss support for evolution coming up. I wasn't yet able to get the plugin compiled, but the screenshots on the project page look quite promising.
|
OPCFW_CODE
|
The drx DynamoRIO Extension provides various utilities for instrumentation and sports a BSD license, as opposed to the drutil Extension, which also contains instrumentation utilities but uses an LGPL 2.1 license.
To use drx with your client, simply include this line in your client's CMakeLists.txt: use_DynamoRIO_extension(clientname drx). That will automatically set up the include path and library dependence.
The drx_init() function may be called multiple times; subsequent calls will be nops and will return true for success. This allows a library to use drx without coordinating with the client over who invokes drx_init().
A common scenario with multi-process applications is for a parent process to directly kill child processes. This is problematic for most dynamic tools as it leaves no chance for each child process to output the results of its instrumentation. On Windows, drx provides a feature called "soft kills" to address this situation. When enabled, this feature monitors system calls that terminate child processes, whether directly or through job objects. When detected, it notifies the client, who is expected to then notify the target process via a nudge. Typically this nudge will perform instrumentation output and then terminate its process, allowing the parent to simply skip its own termination request. The nudge handler should normally handle multiple requests, as it is not uncommon for the parent to kill each child process through multiple mechanisms.
The drx library also provides a minimalistic buffer API. Its API is currently in flux. These buffers may contain traces of data gathered during instrumentation, such as memory traces, instruction traces, etc. Note that per-thread buffers are used for all implementations. There currently exist three types of buffers.
- Trace Buffer
- Circular Buffer
- Fast Circular Buffer
- Using the Buffer API
- Manually Modifying the Buffer
The trace buffer notifies the user when the buffer fills up and allows the client to write the contents to disk or to a pipe, etc. Note that writing multiple fields of a struct to the buffer runs the risk of the client being notified that the buffer is filled before the entire struct has been written. In order to circumvent this limitation, either write the element at the highest offset in the struct first, so that the user never sees an incompletely-written struct, or if this is not possible, allocate a buffer whose size is a multiple of the size of the struct.
This circular buffer will wrap around when it becomes full, and is used when a client might only need to remember the most recent portion of a sequence of events instead of recording an entire trace of events. This circular buffer can be any size, but is specially optimized for a buffer size of 65536.
The only special case mentioned in Circular Buffer is a buffer of size 65536. Because the buffer is exactly this size, we can align it to a 65536-byte boundary and increment only the bottom two bytes of the base pointer. By this method we are able to wrap around on overflow.
Note that this buffer is very good for homogeneous writes, such as in the sample client bbuf (see Sample Tools), where we only write app_pc sized values. Since the buffer cannot be a different size, when using a structure it is a good idea to increment buf_ptr by a size that evenly divides the size of the buffer.
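The wrap-on-overflow arithmetic is easy to model outside of DynamoRIO. The sketch below is Java rather than C (which drx is written in), and the class and field names are purely illustrative, not part of the drx API: with a power-of-two buffer size, masking the low 16 bits of the offset replaces any bounds check.

```java
// Illustrative model of the fast circular buffer's wrap trick (not drx code):
// a 65536-byte buffer aligned to a 65536-byte boundary lets the real
// implementation bump only the bottom two bytes of the base pointer; here the
// same effect is achieved by masking the low 16 bits of an offset.
public class FastCircularBuffer {
    static final int SIZE = 1 << 16;        // 65536 bytes, a power of two

    final byte[] buf = new byte[SIZE];
    int offset = 0;                         // models the low 16 bits of buf_ptr

    void write(byte b) {
        buf[offset] = b;
        offset = (offset + 1) & (SIZE - 1); // wraps to 0 after byte 65535
    }
}
```

Writing 65546 bytes through this model leaves the offset at 10: the 65536th write wrapped back to the start with no branch. This is also why homogeneous record sizes matter: a record size that evenly divides 65536 never straddles the wrap point.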
There is a single API for modifying the buffer which is compatible with each of these buffer types. The user must generally load the buffer pointer into a register, perform some store operations relative to the register, and then finally update the buffer pointer to accommodate these stores. Using offsets for subsequent fields in a structure is the most efficient method, but please note the warning in Trace Buffer, where one should either allocate an integer multiple of the size of the struct, or always write the last field of a struct first.
It is possible to manually modify the buffer without calling drx_buf_insert_buf_store(). The provided store routines are for convenience only, to ensure that an app translation is set for each instruction. If a user writes to the buffer without using the provided operations, please make sure an app translation is set.
|
OPCFW_CODE
|
Collecting Data From Agilent 54622A Oscilloscopes
One of the issues we have in our electronics lab is how to collect the data off of our oscilloscopes, so I decided to put together this document to explain the procedure.
These oscilloscopes are connected to lab computers via Serial Port (COM 1). There is an application on these computers to collect the data. This application is "Agilent IntuiLink" under All Programs.
Start >> All Programs >> Windows Virtual PC >> Windows XP Mode Applications >> Agilent IntuiLink >> 54600 >> Data Capture for 54620 Series Oscilloscopes:
Be patient while the application is starting. You should see a panel just like the one in Figure 1.
Figure 1 - Agilent IntuiLink Data Capture Program
Once the application is open, from the menu go to "Instrument" and select "54620 Series": You may see a figure similar to Figure 8. If so, scroll down to that figure and continue from there.
Figure 2 - Set I/O
Click on "Find Instrument" and you will see Figure 3:
Figure 3 - Find Instrument
If COM1 is not selected, select it and click on "Identify Instrument(s)". All the default parameters should be ok, so click "OK". Figure 4 is what would appear on screen:
Figure 4 - Identify Instrument(s)
Select the instrument (Oscilloscope) and click "Connect". Once connected, click "OK" and again "OK" for the next screen. At this point, you should be able to see the trace of the oscilloscope on your monitor. You can save your data, including the X-axis, in ".csv" format to use in Excel. Here are screenshots of what you see after the one in Figure 4:
Figure 5 - Instrument Selected
Figure 6 - Instrument Connected
Figure 7 - Graph
Alternatively, you may see the following, where you can select the active Channel and Number of points.
Figure 8 - Trace Selection
Once you have made your selection, click OK. For best results, select 100 points. Figure 7 would be the result. It is suggested to select "Include X-axis data on save" as in Figure 9.
Figure 9 - Include X-axis data on save is selected.
Now, go to File >> Save As... and change the "Save as type" to "CSV (*.csv)" as in Figure 10.
Figure 10 - CSV type has been selected.
After saving your data, you could open your file in MS Excel and manipulate it however you want.
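If you would rather process the export in code than in Excel, the saved file is plain CSV. The following is a hedged sketch only: it assumes a two-column time/voltage layout with a textual header row, which may differ from what your IntuiLink version actually writes.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Sketch: scan an assumed "time,volts" CSV export and report the peak voltage.
public class ScopeCsv {
    static double peakVolts(BufferedReader r) throws IOException {
        double peak = Double.NEGATIVE_INFINITY;
        String line;
        while ((line = r.readLine()) != null) {
            String[] parts = line.split(",");
            if (parts.length < 2) continue;
            try {
                peak = Math.max(peak, Double.parseDouble(parts[1].trim()));
            } catch (NumberFormatException e) {
                // header or malformed row: skip it
            }
        }
        return peak;
    }

    public static void main(String[] args) throws IOException {
        String csv = "Time (s),Volts\n0.000,0.12\n0.001,3.28\n0.002,-0.50\n";
        System.out.println(peakVolts(new BufferedReader(new StringReader(csv)))); // 3.28
    }
}
```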
|
OPCFW_CODE
|
Converting duration to years in Java8 Date API?
I have a date in the far past.
I found out what the duration is between this date and now.
Now I would like to know - how much is this in years?
I came up with this solution using the Java 8 API.
This is a monstrous solution, since I have to convert the duration to Days manually first, because there will be an UnsupportedTemporalTypeException otherwise - LocalDate.plus(SECONDS) is not supported for whatever reason.
Even if the compiler allows this call.
Is there a less verbose possibility to convert Duration to years?
LocalDate dateOne = LocalDate.of(1415, Month.JULY, 6);
Duration durationSinceGuss1 = Duration.between(LocalDateTime.of(dateOne, LocalTime.MIDNIGHT),LocalDateTime.now());
long yearsSinceGuss = ChronoUnit.YEARS.between(LocalDate.now(),
        LocalDate.now().plus(
                TimeUnit.SECONDS.toDays(durationSinceGuss1.getSeconds()),
                ChronoUnit.DAYS));
/*
* ERROR -
* LocalDate.now().plus(durationSinceGuss1) causes an Exception.
* Seconds are not Supported for LocalDate.plus()!!!
* WHY OR WHY CAN'T JAVA DO WHAT COMPILER ALLOWS ME TO DO?
*/
//long yearsSinceGuss = ChronoUnit.YEARS.between(LocalDate.now(), LocalDate.now().plus(durationSinceGuss) );
/*
* ERROR -
* Still an exception!
* Even on explicitly converting duration to seconds.
* Everything like above. Seconds are just not allowed. Have to convert them manually first e.g. to Days?!
* WHY OR WHY CAN'T YOU CONVERT SECONDS TO DAYS OR SOMETHING AUTOMATICALLY, JAVA?
*/
//long yearsSinceGuss = ChronoUnit.YEARS.between(LocalDate.now(), LocalDate.now().plus(durationSinceGuss.getSeconds(), ChronoUnit.SECONDS) );
You calculate the Duration between a date in the past and now(), then try to calculate the years between now() and another date that is Duration into the future. WHY?!?!? --- Example: This year (2016) is a leap year, but leap day hasn't happened yet. The Duration from Feb 3, 2015 to today (Feb 3, 2016) is 365 days. 365 days from now is yesterday's date of next year (Feb 2, 2017). So exactly one year from the original date, but your calculation would say that the future date is less than one year from now. Result incorrect!!
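The leap-year pitfall this comment describes is easy to reproduce with java.time (the dates are taken straight from the comment):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// A day count measured over one span lands on a different calendar date when
// re-applied across a leap day.
public class LeapPitfall {
    public static void main(String[] args) {
        LocalDate a = LocalDate.of(2015, 2, 3);
        LocalDate b = LocalDate.of(2016, 2, 3);
        long days = ChronoUnit.DAYS.between(a, b); // 365: span contains no leap day
        LocalDate c = b.plusDays(days);            // crosses Feb 29, 2016
        System.out.println(days + " -> " + c);     // prints "365 -> 2017-02-02"
    }
}
```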
Have you tried using LocalDateTime or DateTime instead of LocalDate? By design, the latter does not support hours/minutes/seconds/etc, hence the UnsupportedTemporalTypeException when you try to add seconds to it.
For example, this works:
LocalDateTime dateOne = LocalDateTime.of(1415, Month.JULY, 6, 0, 0);
Duration durationSinceGuss1 = Duration.between(dateOne, LocalDateTime.now());
long yearsSinceGuss = ChronoUnit.YEARS.between(LocalDateTime.now(), LocalDateTime.now().plus(durationSinceGuss1) );
System.out.println(yearsSinceGuss); // prints 600
Indeed, plus of LocalDateTime works with seconds! Or even with Duration itself!
Use Period to get the number of years between two LocalDate objects:
LocalDate before = LocalDate.of(1415, Month.JULY, 6);
LocalDate now = LocalDate.now();
Period period = Period.between(before, now);
int yearsPassed = period.getYears();
System.out.println(yearsPassed);
Hi! I am aware of periods. This was all about converting a DURATION to years, e.g. if someone passes it to you from outside.
Personally I find day precision (using Period) much more appealing in a historic context than second precision (using Duration). However, constructing LocalDate.of(1415, Month.JULY, 6) is probably wrong because java.time only uses the Gregorian calendar rules for all times.
Although the accepted answer of @Matt Ball tries to be clever in its usage of the Java-8-API, I would throw in the following objection:
Your requirement is not exact because there is no way to exactly convert seconds to years.
Reasons are:
Most important: Months have different lengths in days (from 28 to 31).
Years have sometimes leap days (29th of February) which have impact on calculating year deltas, too.
Gregorian cut-over: You start with a year in 1415, which is far before the first Gregorian calendar reform, which cancelled a full ten days, in England even 11 days and in Russia more. And years in the old Julian calendar have different leap year rules.
Historic dates are not defined down to second precision. Can you for example describe the instant/moment of the battle of Hastings? We don't even know the exact hour, just the day. Assuming midnight at start of day is already a rough and probably wrong assumption.
Timezone effects which have impact on the length of day (23h, 24h, 25h or even different other lengths).
Leap seconds (exotic)
And maybe the most important objection to your code:
I cannot imagine that the supplier of the date with year 1415 has got the intention to interpret such a date as a gregorian date.
I understand the wish for conversion from seconds to years, but it can only be an approximation, whatever you choose as a solution. So if you have years like 1415 I would just suggest the following very simple approximation:
Duration d = ...;
int approximateYears = (int) (d.toDays() / 365.2425);
For me, it is sufficient in a historic context as long as we really want to use a second-based duration for such a use-case. It seems you cannot change the input you get from external sources (otherwise it would be a good idea to contact the duration supplier and ask if the count of days can be supplied instead). Anyway, you have to ask yourself what kind of year definition you want to apply.
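For illustration, here is a self-contained version of that approximation. The dates are arbitrary demo values: the span from 2000-01-01 to 2020-01-01 is exactly 7305 days (twenty 365-day years plus five leap days), so the approximation should recover 20.

```java
import java.time.Duration;
import java.time.LocalDate;

// Approximate a second-based Duration as calendar years using the mean
// Gregorian year length of 365.2425 days.
public class ApproxYears {
    static int approximateYears(Duration d) {
        return (int) (d.toDays() / 365.2425);
    }

    public static void main(String[] args) {
        Duration d = Duration.between(
                LocalDate.of(2000, 1, 1).atStartOfDay(),
                LocalDate.of(2020, 1, 1).atStartOfDay());
        System.out.println(approximateYears(d)); // prints 20
    }
}
```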
Side notes:
Your complaint "WHY OR WHY CAN'T JAVA DO WHAT COMPILER ALLOWS ME TO DO?" does not match the character of new java.time-API.
You expect the API to be type-safe, but java.time (JSR-310) is not designed as type-safe and heavily relies on runtime-exceptions. The compiler will not help you with this API. Instead you have to consult the documentation in case of doubt if any given time unit is applicable on any given temporal type. You can find such an answer in the documentation of any concrete implementation of Temporal.isSupported(TemporalUnit). Anyway, the wish for compile-safety is understandable (and I have myself done my best to implement my own time library Time4J as type-safe) but the design of JSR-310 is already set in stone.
There is also a subtle pitfall if you apply a java.time.Duration on either LocalDateTime or Instant because the results are not exactly comparable (seconds of the first type are defined on the local timeline while seconds of Instant are defined on the global timeline). So even if there is no runtime exception like in the accepted answer of @Matt Ball, we have to carefully consider if the result of such a calculation is reasonable and trustworthy.
Hey Meno! I understand your point and somehow I understand the strategy which the new Date API is following: throw an exception if an exact solution is not possible.
BUT. Java somehow DID implement the conversion of days, months etc. to years.
So why should the conversion of seconds (and so durations) be different?
The current API feels like a minefield for me. I would appreciate approximations instead of runtime exceptions.
@Skip Well, a plain calendar date (like LocalDate) has no relationship to seconds; the granularity of calendar dates is not defined in such a precision. Therefore it is a good thing to not support applying second-based durations on calendar dates. So users can be warned whenever they make precision errors. It is not so good however that the compiler does not complain about such code, so the new API can indeed sometimes be a minefield at runtime. This behaviour relates to all uses of static from()-methods or the low-level interfaces Temporal, TemporalUnit etc.
@Skip By the way, I don't see any automatic conversion from days to years (because of different month length) unless you define a reference date. And I think, Period has no mechanism for such a conversion. Do you know it better?
A plain calendar date may not have a relationship to seconds. What I am crying about is the following: LocalDate.now().plus(days, ChronoUnit.DAYS) // legal but LocalDate.now().plus(duration) // NOT legal. Even if it is known how to convert the duration's unit (which is seconds) to days, and Java provides an API to do so: long days = TimeUnit.SECONDS.toDays(duration.getSeconds()). So for me it is one more unnecessary mine during the runtime. And this is not the only one.
By converting days to years I meant the expression using ChronoUnit.YEARS.between(), which may not return correct results, as you stated above correctly.
|
STACK_EXCHANGE
|
Bitcoin Core 0.14.2 Released
09/03/2020 · From Bitcoin Core 0.17.0 onwards, macOS versions earlier than 10.10 are no longer supported, as Bitcoin Core is now built using Qt 5.9.x which requires macOS 10.10+. Additionally, Bitcoin Core does not yet change appearance when macOS “dark mode” is activated. In addition to previously supported CPU platforms, this release’s pre-compiled.
Bitcoin Core is a community-driven free software project, released under the MIT license. Verify release signatures Download torrent Source code Show version history. Bitcoin Core Release Signing Keys v0.8.6 – 0.9.2.1 v0.9.3 – 0.10.2 v0.11.0+ Or choose your operating system. Windows exe – zip. Mac OS X dmg – tar.gz. Linux (tgz) 64 bit – 32 bit. ARM Linux 64 bit – 32 bit. Linux (Snap Store.
Bitcoin Core version 0.14.2 is now available. This is a new minor version release, including various bugfixes and performance improvements, as well as updated translations. Please report bugs using the issue tracker on GitHub. Subscribe here to receive security and update notifications. Compatibility Bitcoin Core is extensively tested on multiple operating systems using the Linux.
Bitcoin Core 0.14.2 has been released with a security fix for users who manually enable the UPnP option. Please note: UPnP has been disabled by default since Bitcoin Core 0.10.3; you only need.
Bitcoin Core version 0.14.2 is now available from: https://bitcoin.org/bin/bitcoin-core-0.14.2/ This is a new minor version release, including various bugfixes and.
16/01/2018 · Bitcoin Core Version 0.14.2 Released. Bitcoin Core version 0.14.2 is now available. This is a new minor version release.
|
OPCFW_CODE
|
SQL Server Hardware Configuration Best Practices
You have been asked to deploy a brand new SQL Server instance. Your management asks you to come up with the best balance of availability, performance and cost for SQL Server. What do you recommend?
I'm going to try to describe my recommendations for hardware and server configuration best practices. However, let me just say that best practices are somewhat of a sticky issue. Almost everybody has an opinion just like almost everybody has a nose. I am hoping that this article will generate a good deal of helpful discussion. I really hope community members keep it to a helpful discussion however.
The whole point of High Availability (HA) is to have your service available to respond to requests, to have your database available and not have consistency issues. There are four main ways to provide HA; power, redundant hardware, clustering and multiple copies of your database.
Your server at a minimum should have dual power supplies connected to a UPS. For extra protection, it is good to have separate power feeds as well.
As listed above, you should have dual power supplies, and for disks, some form of RAID that provides redundancy. Jeremy Kadlec describes the different types of RAID in his article Hardware 101 for SQL Server DBAs. In general, RAID 1, 5, 6, 10 or 0+1 will provide redundancy. You should always configure hot spares when using RAID with redundancy. That way, you have to have at least two failures before you actually go down.
It's not exactly redundant hardware, but backups should always be on another device, like a file server. That way if your SQL Server completely melts into the ground, your backups are safe elsewhere.
Clustering typically refers to combining two or more servers together to allow them to either share workload, or takeover workload quickly in the event of a failure. When it comes to SQL Server, there are two types of clustering we are interested in. Windows Server Cluster is the combination of two or more Windows Servers into a cluster. SQL Failover Cluster Instance (FCI) is an instance of SQL Server that can run on the nodes of a Windows Server Cluster.
So I should point out here that it is possible to create a Windows Server Cluster that only has one node. Why, you might ask, would you want to do that? There are a couple of reasons:
- If you start out with a one node Windows Server cluster and install an FCI, you can add more nodes later and then install a SQL FCI on them as well. If you start out with a stand-alone Windows Server and install a stand-alone SQL Instance you cannot convert them into a cluster later.
- A SQL FCI has a virtual computer object associated with it, which means if you use a non-default instance name, you can use the virtual computer object name to refer to the instance instead of the format: [ServerName]\[InstanceName].
The way you configure the quorum for your cluster is critical. There are way too many factors to consider for me to explain all of them here. The way you configure your quorum determines how many layers of failure are necessary before the cluster fails. Please see this article for more information: Understanding Quorum Configurations in a Failover Cluster.
Multiple Copies of Database
Another way to provide HA is to have multiple copies of your database available that can be switched to if needed. Copies of databases are usually described as some sort of standby (hot, warm, cold). Each has its features as described below:
- Hot Standby - Hot standby means a copy of the database is available immediately with no intervention from a DBA. Hot standby usually uses something like Peer to Peer replication to keep more than one copy of the database available for immediate use. Hot standby will require the client to be smart enough to reconnect to the other database if the original fails. Theoretically only current transactions would be lost if you have to failover to a hot standby database.
- Warm Standby - Warm standby means a copy of the database is available quickly with little intervention from a DBA. Mirroring is a good example of a warm standby database. It is easy to switch from the primary to the mirror, and can even be accomplished automatically if a witness server is used. Client connections are often smart enough to reconnect if the primary fails over to the mirror. With mirroring there are two modes of operation, synchronous, and asynchronous. In synchronous mode, only current transactions would be lost. In asynchronous mode, only transactions since the last update was applied would be lost. Typically this would be only a few seconds at the most.
- Cold Standby - Cold standby means there is a copy of the database that can be made available, but it may take some work, and it is not likely to be easy to switch the roles of the databases. Log shipping is an example of making a cold standby copy of a database. In cold standby mode with log shipping, several minutes of transactions might be lost in a failover.
Microsoft has changed mirroring starting with SQL Server 2012. They are deprecating mirroring and they are replacing it with AlwaysOn Availability Groups. There are several differences, but a couple really stand out for this discussion. First, you are now able to have a secondary database that is read-only. Before the mirror was always offline in a restoring mode, so now the secondary (mirror) is able to be used for reporting or other read-only uses. Another interesting feature is that as the name implies databases can be grouped together into Availability Groups. Now you can fail over a group of databases instead of only one at a time.
When it comes to performance, the main factors include: hardware bottlenecks, database design and application design (server processing vs. client processing and locking). In this tip, I am only going to speak to hardware bottlenecks. I might look into database design and application design later.
The primary types of bottlenecks are disk IO, memory, CPU and network IO.
Disk IO speed is critical to performance. There are three places that disk IO can impact performance; Data Files, Log Files and TempDB files. To a lesser degree, the Windows swap file can sometimes impact performance. For the data files, expect random IO, for log files, it will pretty much always be sequential IO, and TempDB is the same for its data and log files. Different types of RAID work better with sequential IO than with random IO. If I can afford it, I try to use at least four disks in a RAID 10 configuration for each data volume, and two disks in a RAID 1 configuration for a log volume. I sometimes use RAID 5 when performance isn't as much an issue, because you get more capacity and still have one layer of redundancy. See Hard Drive Configurations for SQL Server for more information on choosing a RAID type.
It is best to separate the following loads into different volumes: backups, logs, data and TempDB. While I am on the subject DO NOT partition a disk or RAID volume into multiple drives. This will force disk head thrashing which is BAD. Also pay attention to the allocation unit size when formatting the volumes. In general I have found that 64k provides the best throughput. Each configuration can vary, so it is a good idea to test your IO performance with a tool like SQLIO before you put it into production to make sure you are getting good performance. See the tip Benchmarking SQL Server IO with SQLIO for more information on using SQLIO.
Data File IO
SQL Server is really good at multi-threading IO if there is more than one file available, so for data files where performance is important, it is often a good idea to have more than one. There are guidelines all over the Internet regarding the optimum number of data files compared to the number of CPU cores or the number of sockets, but I don't really have a good recommendation. Keeping the IO balanced is important, so if you add a file to an existing database, make sure you spread the load out over the files. As a general rule of thumb, I go with four data files for databases where I think performance will be an issue.
SQL Server likes to cache data pages in RAM, so the more RAM you can throw at a SQL Server instance, usually the better. This means that 64-bit is a must. It is also important to note that SQL Server uses memory beyond what you assign as the min and max memory values; for one thing, it uses memory for each connection to the instance. The OS also needs memory, so for my servers I usually set aside 8GB for OS and SQL overhead. I used to see the recommendation to set aside 4GB for OS and SQL overhead, but lately I have run into problems with that amount.
It might be good to talk about the min and max memory settings. The min setting is what SQL Server uses to decide if it has enough memory to start up: if there isn't at least enough free memory to match the min memory setting when SQL Server tries to start, it will abort and report that there isn't enough memory. The max memory setting is the maximum amount of memory that SQL Server will use if it is available.
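As a quick illustration of that rule of thumb, here is the arithmetic in Python. The 8GB reserve is just the guideline suggested above, not an official formula, and the function name is mine:

```python
def recommended_max_memory_mb(total_ram_gb, overhead_gb=8):
    """Rule-of-thumb max server memory: total RAM minus a fixed
    reserve for the OS plus SQL Server's own overhead (connections,
    etc.).  The 8 GB default is the reserve suggested above."""
    usable_gb = total_ram_gb - overhead_gb
    if usable_gb <= 0:
        raise ValueError("not enough RAM left after the OS reserve")
    # SQL Server's "max server memory" setting is expressed in MB.
    return usable_gb * 1024

print(recommended_max_memory_mb(64))  # 57344 (56 GB for SQL Server)
```

On a 16GB server, for example, this would leave 8GB (8192 MB) for the max server memory setting.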
We have already established that we need both a 64-bit processor and a 64-bit OS to support as much memory as possible. I have rarely had a problem with CPU being a bottleneck on my SQL Servers.
Things that can utilize parallelism are the ones to watch out for as potential CPU bottlenecks: index rebuilds, seeks, and joins are some definite culprits.
In general, there are at least three types of traffic at your SQL Server: data coming in, data going out, and management traffic. If you follow the advice from above where your backups go to another device like a file server, you will also have backup traffic going out. You might also have block storage traffic if you are using iSCSI.
It is a good idea to segment your network traffic by type if possible, so one type does not interfere with the others. For example, if you can set up a dedicated network adapter for your backups and one for your iSCSI, they will not interfere with your data traffic.
It is also a good idea to encourage your application team to limit as much as possible the data they pass between the SQL Server and the Application server. Using WHERE clauses, not using SELECT * and so forth will limit the amount of data returned with a query. Using stored procedures is also a good way to keep the data on the SQL Server.
- I recommend spending time with your favorite search engine researching the following:
- RAID Performance
- Allocation Unit Size and performance with SQL data pages
- Eliminating performance bottlenecks with SQL server
- I recommend using a minimum of four volumes for a SQL instance. In addition, you will have your OS volume. If running multiple instances, it might be a good idea to look into using mount points to limit the number of drives you are looking at.
First launch will always result in a crash, subsequent launches work fine
MC 1.19.2
JumpQuilt 2.3.0
Make a new instance with Forge 43.2.0 (or any version, really)
Add JumpQuilt
Launch the game and crash
Launch the game again - now everything works
Logs: https://gist.github.com/SplendidAlakey/b80dc6af74c4bed007529a9165707338
Can't reproduce.
Works on first launch over here.
Could it be something system related? I can reproduce it 100% consistently on Prism Launcher and CF app. Running Nvidia GTX980 and 16GB RAM at 2133mhz, Windows 10.
I don't have any Nvidia GPU or Windows system able to run MC to verify that, but that doesn't sound like JumpQuilt's fault.
Hmm.. alright. I suspect it's something to do with how Windows manages processes. The error used to be a bit different on previous versions of Forge; it said something along the lines of "[...] can't initialize [...], because library is already in use [...]". So I chalked it up to JumpQuilt technically being run on top of Forge first, so some necessary library was already in use by Forge, which made Quilt crash. I only reported it now because the error message changed to what it is now, so I thought maybe it's something else.
If you believe it's not a JQ issue, feel free to close this. Otherwise, I'll keep it open for visibility. I've seen at least 1 other person over at Quilt's Discord having the same issue a while ago.
Thinking a bit more about it, I'd blame the download progress window, which uses LWJGL already, which might cause it to refuse to initialize a second time.
Can you reproduce this using Jumploader as well?
I tried supplying a config, that disables that window, before the initial launch. Didn't help.
Jumploader doesn't work for me at all:
Exception caught from launcher
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at io.github.zekerzhayard.forgewrapper.installer.Main.main(Main.java:57)
at org.prismlauncher.launcher.impl.StandardLauncher.launch(StandardLauncher.java:88)
at org.prismlauncher.EntryPoint.listen(EntryPoint.java:126)
at org.prismlauncher.EntryPoint.main(EntryPoint.java:71)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field private static final java.lang.Object java.nio.file.spi.FileSystemProvider.lock accessible: module java.base does not "opens java.nio.file.spi" to module jumploader
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
at LAYER SERVICE/jumploader@2.1.3/link.infra.jumploader.launch.ReflectionUtil.reflectStaticField(ReflectionUtil.java:17)
at LAYER SERVICE/jumploader@2.1.3/link.infra.jumploader.launch.serviceloading.FileSystemProviderAppender.handlePreLaunch(FileSystemProviderAppender.java:37)
at LAYER SERVICE/jumploader@2.1.3/link.infra.jumploader.launch.PreLaunchDispatcher.dispatch(PreLaunchDispatcher.java:21)
at LAYER SERVICE/jumploader@2.1.3/link.infra.jumploader.Jumploader.onLoad(Jumploader.java:132)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.TransformationServiceDecorator.onLoad(TransformationServiceDecorator.java:53)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.TransformationServicesHandler.lambda$loadTransformationServices$11(TransformationServicesHandler.java:116)
at java.base/java.util.HashMap$Values.forEach(HashMap.java:1065)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.TransformationServicesHandler.loadTransformationServices(TransformationServicesHandler.java:116)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.TransformationServicesHandler.initializeTransformationServices(TransformationServicesHandler.java:48)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.Launcher.run(Launcher.java:87)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.Launcher.main(Launcher.java:77)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.BootstrapLaunchConsumer.accept(BootstrapLaunchConsumer.java:26)
at MC-BOOTSTRAP/cpw.mods.modlauncher@10.0.8/cpw.mods.modlauncher.BootstrapLaunchConsumer.accept(BootstrapLaunchConsumer.java:23)
at cpw.mods.bootstraplauncher@1.1.2/cpw.mods.bootstraplauncher.BootstrapLauncher.main(BootstrapLauncher.java:141)
... 8 more
Exiting with ERROR
Process exited with code 2.
As I'm still unable to reproduce this, I'll have to ask you to help debugging this.
Could you please attach a debugger and set a breakpoint on ExceptionInInitializerError?
Nothing shows up when I'm looking specifically for ExceptionInInitializerError. If I look for any exception, I get a ton, although I'm not sure how relevant they are, since all of them are caught and handled.
Happens to me too on both Quilt beta and release channels, and on Adoptium and Amazon Corretto JDKs.
My logs are practically the same:
Quilt Loader crash log on first launch.
Minecraft log on first launch crash.
Subsequent Minecraft log after first launch crash.
There is a modpack on CurseForge using JumpQuilt that warns "sometimes the pack crashes on first launch please launch a second time and it will launch successfully."
I am noticing the exact same bug.
OS: Arch Linux x64
GPU: Nvidia RTX 3060
Launching with the 3rd-party launcher PortableMC
Sounds like an Nvidia driver bug, then.
But as said, without you helping further debugging this as mentioned above, I have no way of debugging or fixing anything here.
CF now natively supports QL, so closing this.
Samba 4 build system - more thoughts on scons
jelmer at samba.org
Sat Sep 17 19:00:56 GMT 2005
The Samba 4 build system is getting more and more enhanced. We
originally started out with a traditional autoconf system. I.e. we
generated a configure script from configure.in that then substituted
several values in Makefile.in.
Metze and I have gradually been changing this to an autogenerated
Makefile. We just abandoned the templating mechanism and are now
generating the Makefile directly from the data gathered by configure
and various .mk files in the source tree. Perhaps we'll eventually end
up with a perl-based configure. We could of course continue on this
road and reinvent the wheel, but I'd rather use some other solution
instead - that would save us some time and the pain of maintenance.
The problems with the combination of autoconf and make are:
- hard to write (various technologies thrown together). Writing M4
that generates shell scripts that generate perl code that generates
Makefiles is not really the ideal way of working.
- generates a large amount of data (an 800kb configure script!) that
needs to be in the tarball
- incompatibilities between various versions of make
- slow
- portability layer and build data thrown together
- duplicated work (other projects need to have the same tests, code)
- cross-compilation hard to get right (we currently have it, but the
HOSTCC code is rather hackish)
Several alternatives are available - most of which integrate the
configure and build stages. However, we have a large number of
requirements for our build system:
- distinction between different targets of compilers (build / host)
- ability to build certain subsystems stand-alone
- generate pkg-config files for libs
- it must support custom file types that can be compiled (IDL, asn1,
heimdal et, SWIG, etc)
- must run on exotic UNIX systems (AIX, IRIX, CrayOS, etc)
- no external dependencies (e.g. not requiring the installation of
python) in order to build. We consider cc, make and perl valid
requirements for building.
- an optional init function per subsystem that must be called automatically on startup
- support for automatic header dependencies (when a header changes, all
files that use it get recompiled too)
These are pretty heavy requirements. Probably one of the only
applications that I could find that provides all of them is
scons (http://www.scons.org/). It is small enough to include in the
source tree, but unfortunately depends on python.
We discussed scons earlier and then didn't look at it further because
it was in python, but I'm now a bit more convinced we need something
like scons rather than the complicated setup we have now.
One of the ways of solving this could be writing a python script that
generates a configure script and Makefile.in that can then be used on
hosts that do not have python installed. This would be a bit similar
to the 'autogen.sh' script we have currently. I'm not sure how
feasible writing such a script is yet, but I'll be looking at writing
one during the next few days.
Comments? Thoughts?
Coding Practices and Technologies for Website Migration
- Website Overview
The following sections list and explain the technologies to be utilized in the new website.
- Web Browser Support
To provide the best overall experience for website users, we support only the latest versions of the following web browsers and platforms. On Windows, we support Internet Explorer 8-11 and Edge.
If other web browsers/versions are required, additional costs will need to be determined. However, it is our professional opinion that these are the most important target browsers based on current market share statistics.
The latest web browser market share statistics can be found at:
- Web Accessibility Initiative (WAI) standards
The website will be designed to meet the Web Content Accessibility Guidelines (WCAG), a standard set forth by the World Wide Web Consortium (W3C) and a subset of the Web Accessibility Initiative (WAI). The WAI has developed a set of standards internationally regarded as the benchmark for Web accessibility. Accessibility of this kind is employed for the benefit of people with disabilities, such as those with auditory, cognitive, neurological, physical, speech, and visual impairments; additionally, it addresses the changing needs of aging adults. Meeting these standards not only allows a greater reach, but will improve the content architecture of the website, allowing for greater search engine visibility and in turn increasing the overall visibility (and accessibility) of the website. More information on WAI compliance can be found at: http://www.w3.org/WAI
- W3C Standards Validation
In meeting WCAG standards validation, we will also meet the greater W3C CSS3 & HTML5 markup validation standards, a parent standard to the WCAG subset. HTML5 will be used as the measure for W3C validation. More information on W3C can be found at: http://www.w3.org
More information on HTML5 Markup Validation can be found at: http://validator.w3.org
More information on the CSS Validation Service can be found at: http://jigsaw.w3.org/css-validator
- HTML5 Validation
The website will be coded in HTML5. HTML5 is a language for structuring and presenting content for the Web. It is the latest revision of the HTML standard (originally created in 1990) and currently remains under development. Its core objectives have been to improve and simplify the HTML language with support for the latest multimedia, while keeping it easily readable by humans and consistently understood by computers and devices (web browsers, parsers etc.).
HTML5 is the next step in delivery of web content. In addition to the aforementioned enhancements in the markup language, we see HTML5 as being the logical platform for extending the life expectancy of this iteration of company’s website.
I’m sure there is a lot to Model Glue I haven’t touched yet. For instance I haven’t even looked at the Coldspring tie in with MG. I also haven’t done any IOC (dependency injection stuff) using ChiliBeans within MG. However, I have been mucking with ConfigBeans.
To be honest, at first, I thought they seemed like overkill. But like all short-sighted people, things quickly got blurry for me until I put on my ConfigBean glasses and could see clearly. Quite frankly, they are handy.
So, by now you must be wondering why I'm droning on about this request scope stuff when I started out talking about ConfigBeans. Well, in my dig into Model Glue I decided to port this application as a good in-depth exploration of the framework. In doing this port I want to get away from using the request scope unless absolutely necessary (I might post later on when it is necessary). These page-specific menus are an ideal candidate for moving from the request scope into a ConfigBean.
Now, there are two options for making this move: I could make one ConfigBean that stores all the page menu options for all the pages, or I could create one ConfigBean for each page that has custom menu items.
In this initial dig into the idea I am using the first option: one big ConfigBean consisting of a structure (each key is a page), where each key holds an array of menu item structures. You'll notice the array of menu item structures is the same as before, so I basically didn't have to do any work at all in converting my view in Model Glue.
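To make that shape concrete, here is the layout of the data sketched in Python (the real bean is a ColdFusion component; the page names and menu items below are invented purely for illustration):

```python
# Hypothetical sketch of the menu ConfigBean's data: the real bean is a
# ColdFusion component, and these page names and menu items are invented.
page_menus = {
    "home": [
        {"label": "Overview", "url": "/overview"},
        {"label": "Reports", "url": "/reports"},
    ],
    "admin": [
        {"label": "Users", "url": "/admin/users"},
    ],
}

def menu_for(page):
    # Each key is a page; each value is an array of menu item structures.
    # Pages without custom items fall back to an empty menu.
    return page_menus.get(page, [])

print(len(menu_for("home")))  # 2
```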
Now, in the onRequestStart event I load the page's menu items into the event data. Then, at the top of the page menu view, instead of assigning my local menuItems variable the contents of Request.Page.Menus, I just assign it viewState.getValue("pageMenu").
OK, admittedly, this example is pretty specific to my application so here is another instantly useful example that you might find applicable.
In the documentation about ConfigBeans Joe mentions they are a great candidate for storing datasource information - then you pass in the datasource bean to your DAO instead of the datasource name etc. This way it’s all encapsulated.
Well, with this same app I actually have to have two datasources defined: the primary datasource (an Oracle 9i database) and an import datasource (an Access database). The import datasource just points to a totally empty placeholder Access mdb file, but with CFMX 6+ the only way to dynamically connect to an Access database is to have this placeholder one in existence.
<cfquery name="manuals" datasource="placeholderAccessDatabase"> SELECT * FROM sometable IN 'full path to the actual access database to read from' </cfquery>
Again I have two choices; one datasource ConfigBean with info about both datasources (getDSN and getImportDSN) or two ConfigBeans - one for each. Again I chose the former and just have one Datasource ConfigBean. In this example it just makes sense to me to have one bean for all DS information especially since my importation actions all need to get the data from the access database, muck with it, then stick the transformed data into the oracle database.
Ok, so that one still isn’t applicable? Well how about if you’re using MG and you have one primary layout view that takes the content you build in your subviews and then sticks said content into its appropriate place in the main template. How do you give each page a unique title within the html ‘title’ tag? Well, I have a PageInformation ConfigBean that holds some general info about the page.
This particular application has a variety of entry points for the many different organizations that use it. You can go down different branches within it based merely on your entry point, and based on the place you are at in the application it needs to show the primary organization's look/feel. So in my PageInformation ConfigBean I not only store the page's title but also which organization(s) own the page. (Plus, those page menus I mentioned earlier also show some organization-specific menu items depending on the organization (or section) the page belongs to.)
So now I can quickly grab the organization's code and use it to load the appropriate CSS for that page. I can also use it to load the correct page menu items. And I can use that page's information to show a unique title for each page that has a title defined in the ConfigBean.
Admittedly, some of this stuff could have been stored in the database. But there are a couple caveats to my ability to do things to the database. First a specific customer owns the database and they have six or seven apps that use it. Adding new tables to the schema isn’t a very easy task (must go through a committee to get each table approved and they only do review sessions twice a year). Plus, I have tried to keep my apps specific data out of their way and have left the database purely for storage of business related data. Because of this the ConfigBeans work out really well.
Whew, sorry I droned on :O) hopefully I haven’t scared you away from my craziness and instead have illustrated some of the potential of ConfigBeans with ModelGlue.
Distributing KeePassXC as a snap
Tags: Interviews , Snaps , Ubuntu Desktop
This is a guest post by Jonathan White (find him on Github) one of the developers behind keepassxc. If you would like to contribute a guest post, please contact email@example.com .
Can you tell us about KeePassXC?
KeePassXC, short for KeePass Cross-Platform Community Edition, is an extension of the KeePassX password manager project that incorporates major feature requests and bug fixes. We are an active open source project that is available on all Linux distributions, Windows XP through 10, and macOS. Our main goal is to incorporate the features that the community wants while balancing portability, speed, and ease of use. Some of the major features that we have already shipped are browser integration, YubiKey authentication, and a redesigned interface.
How did you find out about snaps?
I learned about snaps through an article on Ars Technica about a year ago. Since then I dove into the world of building and deploying snaps through the KeePassXC application. We deployed our first snap version of the app in January 2017.
What was the appeal of snaps that made you decide to invest in them?
The novelty of bundling an application and deploying it to the Ubuntu Store, for free, was really attractive. It also meant we could bypass the lengthy review and approval process of the official apt repository.
How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?
The initial build of the snapcraft.yaml file was a bit rough. At the time, the documentation did not provide many full-text examples of different build patterns. It only took a couple of iterations before a successful snap was built and tested locally. The easiest part was publishing the snap for public consumption, that took a matter of minutes.
With the introduction of build.snapcraft.io, the integration with our workflow has improved greatly. Now we can publish snaps immediately upon completion of a milestone, or even intermediate builds for develop.
Do you currently use the snap store as a way of distributing your software? How do you see the store changing the way users find and install your software?
Yes, we use the snap store exclusively for our deployment. It is a critical tool for our distribution with over 18,000 downloads in less than 4 months! The store also ensures users have the latest version and it is always guaranteed to work on their system.
What release channels (edge/beta/candidate/stable) in the store are you using or plan to use?
We use the stable channel for milestone releases and the edge channel for intermediate builds (nightlies).
Is there any other software you develop that might also become available as a Snap in the future?
Not at this time, but if I ever publish another cross-platform tool, I will certainly use the ubuntu store and snap builds.
How do you think packaging KeePassXC as a snap helps your users? Did you get any feedback from them?
Our users are able to discover, download, and use our app in a matter of seconds through the Ubuntu store. Packaging as a snap also removes the dependency nightmare of different Linux distributions. Snap users easily find us on Github and provide feedback on their experience. Most of the issues we have run into involve theming, plugs, and keyboard shortcuts.
How would you improve the snap system?
First I would make it easier to navigate around the developer section of the ubuntu store. It is currently a little confusing on how to get to where your current snaps are. [Note: this is work in progress, stay tuned!]
As far as snaps themselves, I wish they were built more like docker containers where different layers could be combined dynamically to provide the final product. For example, our application uses Qt5 which causes the snap size to bloat up to 70 MB. Instead, the Qt5 binaries should be provided as an independent, shared snap that gets dynamically loaded with our application’s snap. This would greatly cut down on the size and compile time of the deployment; especially if you have multiple Qt apps which all carry their own unique build. [Note: Content interfaces were built for this purpose]
Reduce the number of plugs that require manual connection. It would also be helpful if there was a GUI for the user to enable plugs for specific snaps.
Finally, I had the opportunity to try out the new build.snapcraft.io tool. It seems like the perfect answer to keeping up to date with building and deploying snaps to the store. The only downside I found was that it was impossible to limit the building to just the master and develop branch. This caused over 20 builds to be performed due to how active our project was (PR’s, feature branches, etc). [Note: Great feedback! build.snapcraft.io is evolving this is definitely something we’ll look into]
Learn how the Ubuntu desktop operating system powers millions of PCs and laptops around the world.
Undoubtedly, GPT-4 is more powerful than its predecessor. It is capable of generating more human-like responses. With the help of ChatGPT, I have even developed a Text To Audio Converter app. Yes, ChatGPT can act like your own personal junior developer. However, Smol AI takes it a step further: it can create an entire codebase once you give it the right prompt. For now, though, setting up Smol AI isn't that easy for a normal user. If you want to try Smol AI now, here's how to set it up on Windows 11.
What is Smol AI?
It is basically an AI that writes code for you. It is different from ChatGPT, which writes small code snippets; Smol AI can build out an entire codebase, even for complicated apps. Using Smol AI basically feels like having a coding team working for you, on your command.
How Does Smol AI Work?
Smol AI is basically a program written in the Python programming language, and you need to attach an OpenAI API key to it, because it uses ChatGPT to get answers.
First, Smol AI passes the product requirements into a prompt so it can generate the architecture of the app, meaning the different files it needs to generate. After that, it passes the list of files along with the original product requirements to another prompt, which generates a list of dependencies, so GPT can understand the different functions and variables shared across files. In the end, it runs a for loop that passes the product requirements, the architecture, and the dependencies to a new prompt, which writes the code for each individual file.
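The three prompt stages described above can be sketched roughly like this in Python (the function names, prompt wording, and the `llm` callable are all illustrative assumptions on my part, not Smol AI's actual code):

```python
def generate_codebase(product_spec, llm):
    """Sketch of the three-stage prompt chain: architecture (file list),
    shared dependencies, then a loop writing each file.  `llm` is any
    callable taking a prompt string and returning text; the prompt
    wording here is illustrative, not Smol AI's actual prompts."""
    # Stage 1: derive the app's architecture as a list of files.
    files = llm(f"List the files needed to build: {product_spec}").splitlines()
    # Stage 2: derive the shared functions/variables across those files.
    deps = llm(f"Shared dependencies for {files} given: {product_spec}")
    # Stage 3: loop over the file list, generating each file's code.
    return {
        path: llm(f"Write {path} for: {product_spec}. Shared: {deps}")
        for path in files
    }

# With a stub standing in for the OpenAI call, the chain runs end to end:
stub = lambda prompt: "app.py\nutil.py" if prompt.startswith("List") else "# code"
print(sorted(generate_codebase("a to-do app", stub)))  # ['app.py', 'util.py']
```

In the real tool, `llm` would be a call to the OpenAI API with your key from the `.env` file.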
As I understand it, Smol AI eliminates the need to ask ChatGPT multiple questions manually. That said, as of now Smol AI is quite buggy, which means modifying the code still requires a coder.
Create a new OpenAI API Key
API stands for Application Programming Interface; it allows two applications to communicate with each other. To use ChatGPT in Smol AI, you need to generate an OpenAI API key.
Open the following Open AI Platform Site in your web browser.
Once the developer-main zip file has downloaded, right-click on it and then click Extract All > Extract.
Now, open Visual Studio Code, and make sure Python is installed properly. From the left side, click on “Open Folder”.
Select the "developer-main" folder and click "Select Folder". (If there is another folder with the same name inside the "developer-main" folder, select that inner one; otherwise you will get errors later in the Terminal.)
Click on the “Yes, I trust the authors” option.
Now, in the Visual Studio Code Explorer section, you will find the ".example.env" file. Right-click on it, click Rename, and remove ".example" from the name, keeping it ".env".
Now, open the ".env" file and enter your OpenAI API key, replacing the example key.
Install Smol AI Dependencies.
Go to the Modal website and sign up using your GitHub account.
Once you complete the signup process, it will show you commands to configure the Python client.
Go back to Visual Studio Code and, from the left pane, select the "main.py" file.
After that, Click on Terminal > New Terminal.
Once the Terminal opens at the bottom of the screen, run the following command.
pip install modal-client
After that, run the following command to create a token for Modal.
modal token new
Once you run the command, it will open your default browser to create a token. Click on Create Token.
Once the token is created, you will see the "Token Verified Successfully" message in the Terminal.
That's it: now you can use Smol AI by running a prompt like the following. It's an example; replace the text inside the double quotes with your own prompt.
modal run main.py --prompt "a Chrome extension that, when clicked, opens a small window with a page where you can enter a prompt for reading the currently open page and generating some response from openai" --model=gpt-4
By default, it runs on gpt-3.5-turbo, but if you have access to GPT-4, you can add "--model=gpt-4" to the above command to use it.
Note: If you get the "You exceeded your current quota, please check your plan and billing details" error in the Terminal, make sure you have enough credits left in your account.
problem with executing csruntime --host localhost hello.calvin
Hi, I'm trying to execute this code $ csruntime --host localhost hello.calvin in the mini tutorial, but I get this:
2017-09-08 16:51:28,040 ERROR 2286-calvin.calvin.utilities.runtime_credentials: get_domain: error while trying to read domain from Calvin config, err=argument of type 'NoneType' is not iterable
2017-09-08 16:51:28,040 INFO 2286-calvin.calvin.runtime.north.calvincontrol: Control API trying to listening on: localhost:5001
DEPLOY STATUS 200, OK
Deployed application 38835e32-a853-449c-9aeb-1c402e5561c5
2017-09-08 16:51:30,965 INFO 2286-calvin.calvin.runtime.north.calvin_node: All done, exiting
2017-09-08 16:51:30,980 INFO 2286-calvin.calvin.runtime.north.calvin_node: Quitting node "['calvinip://localhost:5000']"
I'm supposed to get something like this:
Deployed application 922a2096-bfd9-48c8-a5a4-ee900a180ca4
2016-07-11 08:20:34,667 INFO 11202-calvin.Log: Hello, world
2016-07-11 08:20:35,667 INFO 11202-calvin.Log: Hello, world
Any help would be appreciated. Thanks!
2017-09-08 16:51:28,040 ERROR 2286-calvin.calvin.utilities.runtime_credentials: get_domain: error while trying to read domain from Calvin config, err=argument of type 'NoneType' is not iterable
This is actually OK. It just means that Calvin's security isn't configured on your machine and it will fall back to running with security disabled. The log message should really be changed to something more informative.
2017-09-08 16:51:28,040 INFO 2286-calvin.calvin.runtime.north.calvincontrol: Control API trying to listening on: localhost:5001
DEPLOY STATUS 200, OK
Deployed application 38835e32-a853-449c-9aeb-1c402e5561c5
This is actually confirmation that your example is running...
2017-09-08 16:51:30,965 INFO 2286-calvin.calvin.runtime.north.calvin_node: All done, exiting
2017-09-08 16:51:30,980 INFO 2286-calvin.calvin.runtime.north.calvin_node: Quitting node "['calvinip://localhost:5000']"
...and this is the runtime quitting as it is supposed to do when started like this.
However, the default is for the runtime to quit after 3s (!) when started with a file argument. I suspect that your runtime simply didn't have enough time to produce output. Try changing the keep-alive time to 10s or perhaps even longer
csruntime --host localhost --keep-alive 10 hello.calvin
and see if that helps.
The ability to pass a script to the csruntime command is just a convenience to allow quick testing of a script. The "normal" way to start a runtime is to start it without a script argument, in which case it will keep running indefinitely and you feed it applications using the cscontrol command.
Thank you! It works now.
|
OPCFW_CODE
|
One of the things I do when I don't know what to do is ask for help. But since I practice shamanic journeying, my help is sometimes coming from non-ordinary reality, or the spirit guides with whom I have established relationships. Usually, a shamanic journey is a short meditative experience in which one intentionally alters one's brainwave state to something similar to the feeling you have just before waking or falling asleep. These are Theta brain waves, and they are encouraged by the sound of a steady drumbeat or a rattle.
So, last week I was looking for some help. I used the rhythm of a drumbeat to enter the spirit world and seek out my usual guides. I asked one of my guides if he would travel with me to find an animal that might have medicine for a particular question I needed help with. We walked just a short way, went over a ridge, and immediately saw a snake. She was beautiful--coppery colored with diamonds on her back. The presence of the snake surprised me. I never know what I will find in a shamanic journey, and I just couldn't imagine what kind of medicine a snake would have for me on this day.
The snake began to talk to me, to share her medicine. The snake told me she makes a serious commitment when she chooses to eat something. Knowing that it will take a long time to digest what she eats and that she will need to be someplace safe, she carefully chooses things that will truly nourish her.
I thanked the snake for her medicine and thanked my guide for coming with me on the journey. The drumming slowly stopped, and I 'woke' from my meditation.
I was amazed! This was something I had not considered important before. The things that we take into our being will most likely be with us for a long time. That the things we choose to "ingest" or "eat" must be beneficial to us in order to make such a commitment as to allow their presence in our being. Further, the process of eating or 'taking in' is a vulnerable one, and we are best served by a safe space to do this, a space where we cannot be hurt, where we can rest, where we can fully integrate that which will nourish us.
Of course, I was super curious about the digestive habits of snakes. A quick click or two led me to learn that snakes can take 4-10 DAYS to digest a meal. Snakes eat not only eggs and small rodents but animals as large as a deer! And they are very vulnerable to predators such as raptors while they are digesting, so they look for a safe, protected place to finish their meal.
So, what did the snake have to teach me? In the moment, on that day the medicine seemed to say that life is full of things that we are choosing whether or not to let in. A cup of coffee, a glass of wine, a french fry, a banana. And then there are bigger things like a pet, a partner, a job, a surgery, a baby. These things will be with us for a while, some much longer. Snake is telling me that I should wisely consider the things I "take in" and examine whether this "food," "relationship," "house," or whatever is truly, truly nourishing for me. If this food will truly care for my body, spirit, and mind. If the vulnerable state I will be in while integrating this "food" into my being will be protected by my family, friends, and spirit guides. Will I have support to start the new job? Will I be able to adopt a new furry companion while I have so much travel for my work? Will the nourishment afforded by the "food" match the needs and availability of my life?
Maybe a french fry doesn't seem like a big commitment. Or a new quilt for the bed. But to snake, everything is a big commitment. Snake sees that all nourishment, all good things, need to be carefully considered before opening wide and letting them in.
|
OPCFW_CODE
|
"delete_message" not working on sqs client under moto
I have a code like:
class SqsClientWrapper:
    def __init__(self, sqs_client, queue_url):
        self.sqs_client = sqs_client
        self.queue_url = queue_url

    def receive_message(self):
        try:
            while True:
                response = self.sqs_client.receive_message(QueueUrl=self.queue_url, MaxNumberOfMessages=1)
                if len(response.get("Messages", [])) > 0:
                    return response["Messages"][0]
        except ClientError as e:
            raise e

    def delete_message(self, receipt_handle):
        try:
            response = self.sqs_client.delete_message(
                QueueUrl=self.queue_url, ReceiptHandle=receipt_handle
            )
        except ClientError:
            logging.getLogger(__name__).exception(
                f"Could not delete the message from {self.queue_url}."
            )
            raise
        else:
            return response


class Listener:
    def listen(self, queue_url):
        sqs_client = SqsClientWrapper(boto3.client("sqs"), queue_url)
        while True:
            try:
                message = sqs_client.receive_message()
                print(str(message))
                sqs_client.delete_message(message["ReceiptHandle"])
            except Exception as e:
                continue
I'm trying to test the Listener using moto, I have the following test code:
logger = logging.getLogger()


class ListenerTest(unittest.TestCase):
    def setUp(self) -> None:
        self.executor = ThreadPoolExecutor()
        self.futures = []

    def tearDown(self) -> None:
        for future in self.futures:
            future.cancel()
        self.executor.shutdown()

    @mock_sqs
    def test_simple(self):
        sqs = boto3.resource("sqs")
        queue = sqs.create_queue(QueueName="test-fake-queue")
        queue_url = queue.url
        listener = Listener()
        self.futures.append(
            self.executor.submit(listener.listen, queue_url)
        )
        sqs_client = boto3.client("sqs")
        sqs_client.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps({"id": "1234"}),
        )


if __name__ == "__main__":
    unittest.main()
When I run this, it seems to have received the message okay (I get the print-out), but when deleting it from the queue it fails with the following error:
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the DeleteMessage operation: The security token included in the request is invalid.
Anyone know what might be going on here? Thanks!
Hi @raj-trustlab, Moto does not play well together with threads or concurrency frameworks. You may be better off testing the Listener/SqsClientWrapper class in a single thread.
On an unrelated note, something that may give you unexpected behaviour:
The mock-decorator is only scoped to the test-function at the moment, i.e. the setUp/tearDown-functions are currently not mocked, and any boto3-calls will not work inside of them.
You should use the class-decorator, to ensure it recognizes the setUp/tearDown-methods and mocks them appropriately.
@mock_sqs
class ListenerTest(unittest.TestCase):
    ...
FWIW, I can't actually reproduce the example you gave. The multi-threading doesn't seem to work, and it always loops inside the Listener.listen-method, without ever sending a message.
I think I was able to get it to work, but it's not super clean. What I needed to add was a sleep. In addition, in order to allow the main thread to make progress (which, as @bblommers mentioned, I believe was causing your issue), I needed to add some long polling to the "receive_message" part of the code. I was going to do this as part of the implementation anyway, so not a huge deal. I think the reasoning may be that the Python GIL gets released long enough for the main thread to make progress.
Also, the thread doesn't exit cleanly; the InvalidClientTokenId error still shows up, but my guess is that the test finished and the thread was being torn down.
Here is the code (in case it helps others in the future):
test_sqs_listener.py:
import unittest
import moto
import logging
import boto3
import json
import time

from sqs_listener import Listener
from concurrent.futures import ThreadPoolExecutor

logger = logging.getLogger()


class ListenerTest(unittest.TestCase):
    def setUp(self) -> None:
        self.executor = ThreadPoolExecutor()
        self.futures = []

    def tearDown(self) -> None:
        for future in self.futures:
            future.cancel()
        self.executor.shutdown()

    @moto.mock_sqs
    def test_simple(self):
        sqs = boto3.resource("sqs")
        queue = sqs.create_queue(QueueName="test-fake-queue")
        sqs_client = boto3.client("sqs")
        queue_url = queue.url
        listener = Listener()
        self.futures.append(self.executor.submit(listener.listen, queue_url))
        sqs_client.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps({"id": "1234"}),
        )
        time.sleep(2)


if __name__ == "__main__":
    unittest.main()
sqs_listener.py:
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class SqsClientWrapper:
    def __init__(self, sqs_client, queue_url):
        self.sqs_client = sqs_client
        self.queue_url = queue_url

    def receive_message(self):
        try:
            while True:
                response = self.sqs_client.receive_message(
                    QueueUrl=self.queue_url,
                    MaxNumberOfMessages=1,
                    VisibilityTimeout=30,
                    WaitTimeSeconds=20,
                )
                if len(response.get("Messages", [])) > 0:
                    return response["Messages"][0]
        except ClientError as e:
            raise e

    def delete_message(self, receipt_handle):
        try:
            response = self.sqs_client.delete_message(
                QueueUrl=self.queue_url, ReceiptHandle=receipt_handle
            )
        except ClientError:
            logger.exception(
                f"Could not delete the message from {self.queue_url}."
            )
            raise
        else:
            return response


class Listener:
    def listen(self, queue_url):
        sqs_client = SqsClientWrapper(boto3.client("sqs"), queue_url)
        while True:
            try:
                message = sqs_client.receive_message()
                print(str(message))
                response = sqs_client.delete_message(message["ReceiptHandle"])
                print(str(response))
            except Exception as e:
                print(str(e))
                raise
Happy to hear this is fixed @raj-trustlab. Thanks for sharing the solution!
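On the unclean thread shutdown mentioned above: Future.cancel() cannot interrupt a task that is already running, so one common pattern (a suggestion of mine, not from the original thread; names are made up) is to poll a stop flag in the listen loop and set it from the test's tearDown:

```python
import threading

class StoppableListener:
    # Sketch of a listen loop that can be stopped cooperatively,
    # instead of relying on Future.cancel().
    def __init__(self):
        self._stop = threading.Event()

    def listen(self, receive_message):
        handled = []
        while not self._stop.is_set():
            msg = receive_message()
            if msg is not None:
                handled.append(msg)
        return handled

    def stop(self):
        self._stop.set()

# Minimal standalone demo with a fake receive function that stops
# the listener once the queue is drained.
listener = StoppableListener()
msgs = iter([{"id": "1234"}])

def fake_receive():
    m = next(msgs, None)
    if m is None:
        listener.stop()
    return m

result = listener.listen(fake_receive)
print(result)
```

In the real test, tearDown would call listener.stop() before executor.shutdown(), letting the worker thread exit before the moto mock is torn down.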
|
GITHUB_ARCHIVE
|
Allow conditional parameter input
It might be useful to allow the parameters to be requested based on conditions, similar to what you do in the content section.
That way, you could have context specific parameters such as:
if ! PLASTER_PARAM_useDHCP
- ask for vm_ip
- ask for subnet mask.
Good idea!
Submitted pull request to add this enhancement (#255).
It looks like you determined this functionality conflicts with the dynamic parameters implementation, @rkeithhill? Just wanted to cross-link that back to here if the issue is going to be closed for that reason.
Technically, dynamic parameters would be fundamentally incompatible with conditional Plaster parameters. If conditional parameters were left in as dynamic parameters, then the responsibility for dealing with that situation would be on the template author. Either this should be made clear in the documentation, or Plaster parameters with a condition attribute can simply be left out of the dynamic parameter creation process. This will force the interactive prompt for input at invocation (assuming the condition is met).
Either way, I question the requirement for dynamic parameters in this project. It is simply one form of interactivity that the project already provides (with its own prompts). It also doesn't allow for automatic platyPS documentation generation of the function at build time. Perhaps it would make more sense to allow passing a hashtable of parameters instead.
I just ran into the need for this feature on a new template that I was creating and was surprised it didn't exist. It just felt so natural to add a condition to a parameter like we do for the content.
My scenario is giving the user a choice to include something. If they choose to include it, prompt for additional information.
After reading this thread and the pull request, I would fully expect as the template author, that I have to account for that.
On that note, we could call it a PromptCondition to be more self-documenting. Or process the conditional logic for each parameter in order and $null the values that fail the condition.
I will work past this by creating two different templates that duplicate a lot of content or just cut features that I planned on adding.
@michaeltlombardi I don't want to close this yet. I'd like to have this capability as well. It provides "wizard-like" alternate paths of prompts.
I wonder if we could add an attribute to <parameter> along the lines of "dependsOnParameter" e.g.:
<parameters>
<parameter name="AskOnlyOnLinux"
condition="$IsLinux" ... />
<parameter name="Editor" type="choice" ... />
<parameter name="VSCodeOptions"
condition="$PLASTER_PARAM_Editor -eq 'VSCode'"
dependsOnParameters="Editor"
type="multichoice" ... />
</parameters>
If a conditional parameter depends on multiple parameter values, then those parameter names would be listed as comma separated: dependsOnParameters="Editor, AskOnlyOnLinux". In the Windows (or macOS) case, the AskOnlyOnLinux parameter would never get a value. In that case, the value of $PLASTER_PARAM_AskOnlyOnLinux would be $null.
We would start out providing dynamic parameters on manifest parameters that aren't conditional and those that are conditional (and condition evals to true) but do no "depend on" the user-supplied value of any other parameter. Then, as parameter values are supplied by the user, we can evaluate the conditional parameters that "depend on" the value of other parameters and if those values have been provided, then add them to the dynamic parameter list (if their condition evals to true). I think that just might work. Need to test it out though.
Another option, and probably a better one from a template author's POV, is to just have Plaster find the dependent parameters by extracting the PLASTER_PARAM_ variable references in the condition. Then we don't have to worry about folks forgetting to provide a dependsOnParameters attribute. Yeah, I think I like this better:
<parameters>
<parameter name="AskOnlyOnLinux"
condition="$IsLinux" ... />
<parameter name="Editor" type="choice" ... />
<parameter name="VSCodeOptions"
condition="$PLASTER_PARAM_Editor -eq 'VSCode'"
type="multichoice" ... />
</parameters>
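The name-extraction idea above is straightforward to sketch. Plaster itself is PowerShell, so this Python fragment is purely illustrative of the approach, and the helper name is made up: scan the condition string for its $PLASTER_PARAM_ references.

```python
import re

# Hypothetical helper (not part of Plaster): find which parameters a
# condition depends on by scanning for $PLASTER_PARAM_<name> references.
def dependent_parameters(condition):
    return re.findall(r"\$PLASTER_PARAM_(\w+)", condition)

print(dependent_parameters("$PLASTER_PARAM_Editor -eq 'VSCode'"))  # → ['Editor']
print(dependent_parameters("$IsLinux"))                             # → []
```

The same scan in PowerShell would make the dependsOnParameters attribute unnecessary, since the dependency graph falls out of the condition text itself.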
Thoughts?
My scenario was having a multiple choice selection. Then use that as a condition on future parameters.
Here is the exact parameter that I was trying to implement.
<parameter name='TemplateType'
type='choice'
default='2'
store='text'
prompt='Select the template type'>
<choice label='&Single TemplateFile'
help="Creates a template that contains a single TemplateFile"
value="Single"/>
<choice label='&Import from Directory'
help="Creates a template that will deploy a directory full of files"
value="Directory"/>
<choice label='&Empty'
help="Create an empty template with a manifest"
value="Empty"/>
</parameter>
<parameter name="FileName"
type="text"
prompt="Name of TemplateFile to create"
condition="$PLASTER_PARAM_TemplateType -eq 'Single'" />
<parameter name="SourceFolder"
type="text"
prompt="Source folder path to build template from"
condition="$PLASTER_PARAM_TemplateType -eq 'Directory'" />
So in that scenario, tab completion would initially only list -TemplateType. After that parameter value is specified, either -FileName or -SourceFolder would be available via tab completion depending on the value specified for -TemplateType.
I wonder if the original need is not a different concept than Parameter:
<parameters>
    <parameter name='Editor'
               type='choice'
               prompt='Select one of the supported script editors for better editor integration (or None):'
               default='0'
               store='text'>
        <choice label='&None'
                help="No editor specified."
                value="None"/>
        <choice label='Visual Studio &Code'
                help="Your editor is Visual Studio Code."
                value="VSCode"/>
    </parameter>
    <condition condition="$PLASTER_PARAM_Editor -eq 'VSCode'">
        <parameter name="VSCodeOptions"
                   type="multichoice" ... />
        <!-- Nested Condition ? -->
    </condition>
</parameters>
I don't think so. I think this fits nicely into the current parameter concept; it is just conditional. Like I said before, if it weren't for not having a good dynamic parameter solution (for automation use), I would have merged in my previous attempt to do this.
Is it even possible to have multiple dynamic parameters dependent upon one another with conditional logic? I recall trying to do this for some other pet project and running into issues. Or maybe I just wasn't trying hard enough :)
Well, I thought it was because RuntimeDefinedParameter has both IsSet and Value properties. But apparently these do not get set while dynamicparam is still being processed. So, the idea of being able to "dynamically light up" dynamic parameters depending on the value of previously specified dynamic parameters is an apparent no go. :-(
I guess we're back to having dynamicparam list every parameter even if certain parameters wind up not being used. This can make automating the invocation of such templates a bit weird, requiring a few passes to determine which parameters to specify so you don't wind up getting prompts for unspecified parameters.
Would it make sense to define all the parameters and maybe shove the conditional logic into the ValidateScript attribute (after stripping out the Plaster-specific prefixes, of course)?
Addressed with PR #255. Thanks @zloeber!
|
GITHUB_ARCHIVE
|
Arch linux on phone, with touch-oriented DE
There's (finally) starting to be a move towards "phones as portable desktops." I'm surprised it's taken this long. The basic idea is that phones are powerful enough to run full software and operating systems, so why not just give them a toggleable "desktop mode" where you can dock the phone with a full size keyboard, mouse and monitor and use it as you would a normal desktop.
But this being the case, I would like to have full control over the phone in the same way that I have full control over an Arch desktop.
So I was wondering: is there a Linux DE which is touch-oriented/designed for phones? Would it perhaps be possible to use the Android or LineageOS DE on top of an Arch system? It would be nice to have a phone running Arch with a mobile DE, and when docking it as a desktop I can boot up GNOME or KDE.
At this stage I'm fairly sure no one has actually done this, so my question is more "what moves have been made so far towards this vision?"
I totally agree that closing the gap between desktop and mobile is an interesting topic, but as it stands this Q seems to lack a bit of focus: one thing is listing available mobile DE or OS (e.g. Plasma Mobile, Ubuntu Touch); one thing is how to run GNU/Linux (e.g. Arch) on mobile devices; one thing is how to have a GNU/Linux-like DE running on top of an existing mobile OS.
As of now, most of the DEs available on Arch are not made to work on phones.
The most advanced DE for phones at this time (in a sense) is Phosh, which is based on GNOME (and now available in the AUR).
However, lots of 'regular' DEs (like XFCE4, for instance) will still work with a touchscreen but will not be adapted to the small screen of a mobile phone, as they do not scale elements for very small screens.
Please keep in mind that most Linux distros are not made with mobile phones in mind, so some features can be missing (like calls or SMS/MMS).
If you are looking for a distro specifically made for mobile phones you should take a look at PostmarketOS (which is still very experimental at this time on most devices).
...so my question is more "what moves have been made so far towards this vision?"
That "vision" was alive and well about seven or eight years ago, and things were looking good. Back then I was able to install Debian on my old Samsung Galaxy Nexus.
Unfortunately, ever since then things have been going in a terribly wrong direction: manufacturers started making more and more phones that are locked in ways that can't even be unlocked. Nowadays you can't even buy (at least here in the US) a phone that can be rooted, and believe me, I've tried. Honestly, I have no idea how they get away with that -- you pay good money to own that little machine, and yet you can't have full control of it? I am hoping that at some point this will all boil over into lawsuits, like we had in the past when some manufacturers tried to lock down personal computers. (Not much hope though, as the years go by and people seem to care less and less.)
ArchlinuxARM with Phosh on Pinephone with convergence package (HDMI for external display, keyboard/mouse - either USB or bluetooth) seems to fully satisfy your stated requirements.
|
STACK_EXCHANGE
|
regression: cargo no longer passes -L native=(..)/build/dep-of-dep/out to rustc when cross compiling a binary with cargo rustc -- (..)
This broke std-with-cargo
The regression appeared somewhere between 06dbe65 (good) and 1777ab7 (bad).
Context: I'm cross compiling the std crate using cargo, and I've added a cargo feature jemalloc_dynamic to the std crate that lets you dynamically link to jemalloc. This feature depends on a feature of the alloc crate that has the same name. When the feature is enabled, cargo will cross compile a shared library version of jemalloc when building the alloc crate, and the final binary will be dynamically linked to jemalloc.
When cross compiling a binary that depends on std (but not on alloc) and jemalloc_dynamic is enabled, with the "good" cargo revision, this is the last command:
$ cargo rustc --target=mips-unknown-linux-gnu --verbose -- -C opt-level=0
(..)
Compiling hello v0.1.0 (file:///home/japaric/tmp/rust/src/hello)
Running `
rustc
src/main.rs
--crate-name hello
--crate-type bin
-g
-C opt-level=0
--out-dir /home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug
--emit=dep-info,link
--target mips-unknown-linux-gnu
-C ar=mips-openwrt-linux-ar
-C linker=mips-openwrt-linux-gcc
-L dependency=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug
-L dependency=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/deps
--extern std=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/deps/libstd-3d19f461db50ea76.rlib
-L native=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/build/alloc-ea35edb81fd92388/out
-L native=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/build/std-3d19f461db50ea76/out
`
But running the same command with the "bad" cargo revision results in a similar command with one less flag:
--extern std=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/deps/libstd-3d19f461db50ea76.rlib
- -L native=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/build/alloc-ea35edb81fd92388/out
-L native=/home/japaric/tmp/rust/src/hello/target/mips-unknown-linux-gnu/debug/build/std-3d19f461db50ea76/out
which then errors with:
note: (..)/mips-openwrt-linux-uclibc/bin/ld: cannot find -ljemalloc
Here's the full cargo log, if that helps.
The odd thing is that the flag is only missing when using cargo rustc -- $SOMETHING, both cargo rustc -- and cargo build work fine with the "bad" cargo revision.
cc @alexcrichton
cc @kteza1 for now, avoid today's or newer nightlies, or avoid using cargo rustc -- -C link-args=-s.
I so far haven't been able to reproduce this, but this may be connected to cross compilation. My attempts to cross to a 32-bit arch or android have failed (failures to build jemalloc). Do you know if there's an easy way to get the mips toolchain you're using? (I'm on ubuntu fwiw)
Do you know if there's an easy way to get the mips toolchain you're using?
Yes, it's in the OpenWRT SDK, here are the installation steps. Check that and the next section, you need to export the STAGING_DIR and PATH env variables to make the toolchain work.
|
GITHUB_ARCHIVE
|
[JDEV] Videoconferencing with jabber / Re:[speex-dev]Videoconferencing with speex and jabber
thoutbeckers at splendo.com
Sun Nov 30 18:28:15 CST 2003
On Sun, 30 Nov 2003 22:36:38 -0000, Richard Dobson <richard at dobson-i.net>
>> I had a larger reply to this, but somewhere it got lost.
>> Using a client/server model has no heavier requirements than a p2p based
>> model. Nor is it any more complex, but it will make it much easier to
>> participate in client/server based conferencing.
> I still dispute your opinion on this, a client having to act as a server
> creates more problems than it solves IMO (as already discussed), and IMO
> not really any easier to participate in.
Having one user assume the role of server, and one of client, is really no
harder than a model in which you assume both are equal peers. It's simply a
matter of different roles. If you can think of any reason why this is not
true, please share it with the rest of us!
However, using a client/server model will allow you to participate in a
conference on a server with more people *with no extra effort at all*. Yet
you still state you don't believe it will be easier?
>> I also think building p2p based conferencing into the protocol from day
>> one is unnecessarily complex; that should belong in an extension in any
> Have you changed your mind on the complexity? You say at the top of this
> email that it's no more complex to implement p2p than it is client server,
> a bit of a contradiction.
No contradiction at all. I've never disputed that for 2 persons talking
over a direct link p2p is just as easy as a client server model. In fact
I've done quite the opposite; I've stated over and over it's not quite the
same thing, but neither one is more complex than the other. Read back my
previous posts and you'll see.
What I *am* saying is that an entirely p2p based conferencing model (with
more than 2 persons involved) is a lot more complex than a client/server
model. Even more so if you only have to implement the client portion.
That's why this allows "thin" clients to still participate. It was you
yourself who argued against mixing and bandwidth requirements on thin
clients such as a pocket PC.
I think from the discussion it's pretty obvious what's needed/wanted most
are 2 things:
- person to person over a direct link
- conferencing with multiple persons on a server
This can both be handled, without overlap, with a simple JEP based on a
c/s model. P2P won't cover this, nor will it be any simpler.
Conferencing over individual direct links between persons is interesting
too, but too complex to be included in the basic JEP if you ask me.
Conferencing over direct links doesn't have to be p2p either. You can base
it on the c/s JEP with every individual participant acting as a server.
Not that much more complex than doing this on a p2p based model.
>> It's odd though, that you completely put aside the arguments you tried to
>> make about 1 on 1 chat, when you talk about conferencing. Namely,
>> bandwidth (in the case of 20 people talking this would be almost 10 times
>> as much as c/s based) and lack of mixing capabilities (even if your
>> Pocket PC does have that much bandwidth it'll have to mix 20 channels!).
> It's just as bad for the client acting as the server (in bandwidth terms)
> as it is to go p2p,
With conferencing the requirement of a (fast enough) server is way more
reasonable than for a person to person conversation (I completely agree
with you there: a direct link should be used when possible!). However, by
going with a c/s model you'll still provide a fallback method for when a
direct link fails, by using a component that hosts a conference.
The total amount of bandwidth used in a c/s conference is always smaller
than in a conference based on direct links between all participants. For
obvious reasons, of course; I don't need to explain that here.
Another difference is that with c/s you'll need very little bandwidth on
all machines, except for the server.
The server will require the same amount of bandwidth as the peak
requirement of a single p2p node. This assumes you use silence detection;
otherwise we're not talking about the peak requirement but just the normal
requirement. Of course, silence detection can also be used in c/s, but for
the server it will be only half as effective as in p2p (the server will
only benefit on incoming connections).
Basically this means that in a direct-link based conference a "weak"
client with limited bandwidth and limited CPU will not be able to
participate (or will have a bad user experience). Opposed to this, if you
want to do c/s conferencing, you'll need *1* server with the same *peak*
requirements as a single node in a direct-link based conference (but there
it's the same for all nodes), and generally around 50% more bandwidth
usage on *average*.
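The stream counts behind this comparison can be sketched quickly (a back-of-the-envelope model added here for illustration, not part of the original thread):

```python
def mesh_streams_per_node(n):
    """Full-mesh (direct-link) conference: every node sends its own audio
    to the other n-1 participants and receives n-1 streams back."""
    return 2 * (n - 1)

def star_streams(n):
    """Client/server conference: each client has one stream up and one
    mixed stream down; the server carries one up and one down per client."""
    per_client = 2
    server = 2 * n
    return per_client, server

# With 6 participants a mesh node moves 10 streams, while a c/s client
# moves only 2; the server carries 12, roughly a single mesh node's peak.
```

This is why a thin client that cannot keep up in a mesh can still join comfortably as a c/s client.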
So let's apply this to some real world situations. In how many cases do
all the clients have about the same available bandwidth, CPU, etc.? With
Joe Consumer this is unlikely; it's a mix of dial-up and broadband users.
If I wanted to talk to my mother, sister and brother at the same time, I'd
have a 1 mbit link, 1 would have a cheap DSL account, and the other 2
would most likely be on dial-up.
In a corporate environment having a dedicated component for conferences is
much more likely than in consumer land (for benefits mentioned already
here), and even if not, bandwidth availability would generally be high
enough for most users to host.
So what's a situation where all users have about the same specs concerning
bandwidth and CPU, and there is no 1 machine that sticks out? Well, Xbox
Live! of course. All machines are identical, and broadband is required to
participate, I think.
Again, I don't think direct-link style conferencing is uninteresting or
unneeded, but it's a much more specific application than c/s conferencing.
And *again*, a c/s style approach will not prevent this from being added
as an extension.
> also I would dispute that it would be 10 times as much
> bandwidth for the rest, adding silence detection (which you seem to have
> oddly put aside and ignored) reduces the p2p bandwidth use massively,
Hopefully I've addressed this now to your liking.
> as I have shown previously the mixing requirements are less on p2p
> than on the "server client".
And how's that? When 4 people talk at once, *all* clients will have to mix
4 streams in the case of direct links. In the case of c/s only the server
will have to mix 4 streams. Explain..
(The only thing I could think of is if you want to create a separate mix
for each client, without their own channel in it, to prevent echo. Rather
than mixing a new stream for each client you should just suppress the echo
for each client. Admittedly, it increases demands on the server if you
want this, but not as badly as having to mix a new stream for each client.)
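That per-client trick is commonly called a "mix-minus": the server mixes everything once and subtracts each client's own contribution from their copy, instead of building a fresh mix per client. A minimal sketch (hypothetical integer sample format, added here for illustration):

```python
def mix_minus(streams):
    """streams: dict of participant -> list of audio samples.
    The server computes one full mix, then each participant receives
    the full mix minus their own samples, which avoids echo without
    mixing a separate stream from scratch for every client."""
    full_mix = [sum(samples) for samples in zip(*streams.values())]
    return {
        who: [m - s for m, s in zip(full_mix, own)]
        for who, own in streams.items()
    }
```

The server does one N-way sum plus one subtraction per client, rather than N separate (N-1)-way mixes.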
> Also having a server client creates a single
> point of failure which you also seem to have completely put aside,
Yes, when the server quits the conference the others will get booted. If
this is a big issue for you, you could devise a fallback system to another
server (one of the clients, for example) and still have a massively less
complex system than direct-link based conferencing. Since servers are most
likely to be the best machines with the best connections this isn't such a
big problem, but it's still easily solved if you want.
When there are a few clients with bad connections in the conversation,
reliability will probably improve a bit too. Bad connection <-> good
connection <-> bad connection is generally more reliable than bad
connection <-> bad connection, especially when you consider bandwidth usage.
> there is
> also the latency issue that you have yet to address and until you do
> this satisfactorily
Latency is an interesting case, but in practice the results would probably
surprise you. Because the bandwidth requirements on low-bandwidth nodes
dramatically drop when they act as a client rather than as a node in a
direct-link conference, latency in many cases will actually improve! So
you can have the situation where a node in a direct-link conference with
3 persons talking is barely able to keep up, with horrible latency, while
a client with the exact same quality connection is enjoying a conference
where 6 people are talking, with lower latency! (It wouldn't even be able
to participate when 6 people are talking in a direct-link conference.)
Now let's talk about out-of-sync mixing. With direct-link based conferences
every client will produce a different "mix" based on the latency /
bandwidth of their connections, and that of the other nodes. This means
when we're in a meeting, for me it can sound like 3 people were talking at
once, while for you it can sound like they didn't at all. (That means I
didn't hear what they said and I'll ask them to repeat, while you'll be
annoyed with me (even more ;) because for you it sounded like I could have
heard them fine.)
Of course there is a solution for this: syncing the mixes between nodes.
But then you lose all latency advantages; you'll be as slow as the
"weakest link" (and the weakest link will be a lot more stressed than it
would be in a c/s model). Of course compromises are possible..
You'll always have problems with out-of-sync mixes if you don't do
something about it, but there are cases where it's less likely to occur or
just not so important. For example when it's only about a game anyway ;)
and all clients have about the same bandwidth and CPU available.. :)
> I will not be convinced by your strange need to make this
> client server mode only, for which you still haven't provided sufficient
I've presented many reasons to you. Maybe you don't agree with them (then
I wonder what you think of Jabber and its client/server architecture),
but I'd appreciate it if you did not refer to them as "strange".
Especially considering I didn't just make them up either; they are well
known issues with audio conferencing (hardly "strange" issues), and if
you'd have looked into it a little yourself you'd know that. (For example,
I'm not on the Speex list, but way at the beginning of the discussion
someone already mentioned these things have been discussed to death there;
I guess he didn't take the bait and I did ;)
More information about the JDev mailing list
I believe that we can learn to write code that naturally produces easy-to-read pull requests with small, concise diffs. Pull requests are an important step in development: they act as a final review of changes and give developers one last chance to collaborate, look for bugs, and consider improvements before a project ships. With some small changes in how we write code, we can make our diffs easier to read and understand. In this article, we'll look at a few approaches to creating easy-to-understand pull requests.
Don’t use Boolean flags
In the example above, it's not clear from the diff what the PR is going to do. Let's take a moment to see how we might have gotten to this point. In this first example, we are imagining a billing system that was written without consideration for sales tax. In this system we might have SalesCalculator::calculateTotal(Cart $cart). At a later date we need to add a way to apply sales tax to the cart total. It would be easy to simply change this method signature to SalesCalculator::calculateTotal(Cart $cart, bool $addSalesTax = false). This fix is expedient and in some ways seems like an ideal solution. We've made it simple and easy to indicate if we need to add sales tax to an order, and thanks to the default value of $addSalesTax we don't even have to change any existing code. Here's the diff for adding the new parameter.
Let’s assume that a month later we need to implement sales tax in another location. The diff would look like our first figure 1 above.
Anyone reading the diff has no context for the change — the diff in no way indicates what is changing. You simply can’t judge the change without going to the method definition and finding out what the boolean flag parameter is for. We would have to explain the change via comments in the PR, discussion, or by having the reviewer refer to documentation. We’ve wasted some time.
Imagine instead that we added a new method for calculating a total with tax. We could have renamed SalesCalculator::calculateTotal(Cart $cart) to SalesCalculator::calculateTotalWithoutTax(Cart $cart) and then added a new SalesCalculator::calculateTotalWithTax(Cart $cart). With this approach, our PR review would have looked like figure 3.
The intent of the change in the PR is completely clear, and there is absolutely no confusion what this change set will do when it’s applied to your production branch.
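The same sketch with intention-revealing method names (again Python stand-ins for the article's PHP):

```python
class SalesCalculator:
    TAX_RATE = 0.08  # assumed flat rate, for illustration only

    def calculate_total_without_tax(self, cart_prices):
        return sum(cart_prices)

    def calculate_total_with_tax(self, cart_prices):
        # A diff that introduces a call to calculate_total_with_tax(...)
        # documents itself: there is no flag for the reviewer to look up.
        subtotal = self.calculate_total_without_tax(cart_prices)
        return round(subtotal * (1 + self.TAX_RATE), 2)
```

A one-line diff switching a call site between these two methods reads as exactly the change it is.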
We can get a similar benefit with another approach, one that is especially useful if you don't have the ability to change the interface of a class. Let's assume that we've got a third-party class with a SalesCalculator::calculateTotal(Cart $cart, bool $addSalesTax) method that we must use. We'll also assume that we are not using an adapter class to solve this problem. We could add new constants to specify the intent of the boolean.
This could give us a diff that is also very clear in intent!
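When the flag-taking signature can't change, named constants can restore readability at the call site. A sketch with assumed names (the function here stands in for the third-party method we cannot modify):

```python
# Named constants that spell out the boolean's intent at every call site.
WITH_SALES_TAX = True
WITHOUT_SALES_TAX = False

def calculate_total(cart_prices, add_sales_tax):
    """Stand-in for the third-party method whose signature is fixed."""
    total = sum(cart_prices)
    if add_sales_tax:
        total = round(total * 1.08, 2)  # assumed flat 8% rate
    return total

# In a diff, calculate_total(prices, WITH_SALES_TAX) explains itself,
# even though the underlying parameter is still a bare boolean.
```

The behavior is unchanged; only the call sites become self-documenting.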
Whitespace and formatting changes make pull request diffs noisy and error prone. In the following screenshot, the PR would introduce a syntax error. You'll notice that it's quite hard to find.
If you lint all of your files with an established code style guideline you'll be able to make sure that all of your files meet a standard. In doing so, you will no longer suffer through individual developers enforcing their own preferred style onto a file as part of a diff. Instead, you'll get uniform, easier-to-read code with smaller, tidier diffs. PHP Code Sniffer has scripts for both checking for violations and fixing them automatically.
If you must do some reformatting during a PR, there are a couple of useful tools that can make reviewing the code less error prone. Github offers a great shortcut to view a PR without whitespace changes highlighted. You can do the same via the command line with
git diff -w (--ignore-all-space).
Consider naming instead of commenting
Comments should document any unclear code. If the why of code is not clear, a comment should be added to clarify the intent of the code. However, this introduces two sources of truth. During code review, developers will have to read both comment and code and then ensure that they both match in intent. This confirmation step will slow down code review.
We can instead consider rewriting the code so that the why is more self-evident. For instance, better naming or other small changes may remove the need for an explanatory comment entirely. Doing so prevents the comment and the code from drifting apart over time. We should remember that comments have to be maintained too!
Let’s look at two diffs and consider which we’d rather review.
If we refactor the code so that the intent is clear without the comment, we can completely omit the comment and still have clarity. Any time that you prepare to write a comment, take a moment to consider if you could express the same idea by changing code instead.
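As an illustration (hypothetical names, Python rather than the article's PHP), a comment can often be replaced by extracting a well-named condition:

```python
# Before: the reader must trust the comment to understand the condition.
def shipping_cost(order):
    # orders over $100 with no fragile items qualify for free shipping
    if order["total"] > 100 and not order["fragile"]:
        return 0.0
    return 9.95

# After: the extracted predicate carries the intent; no comment needed.
def qualifies_for_free_shipping(order):
    return order["total"] > 100 and not order["fragile"]

def shipping_cost_named(order):
    return 0.0 if qualifies_for_free_shipping(order) else 9.95
```

A diff touching `qualifies_for_free_shipping` is reviewable without a second source of truth.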
Delete commented out code
Your source control system is your historical record of the code. You should ruthlessly delete unneeded code instead of commenting it out. I urge you to consider why you might be choosing to leave commented out code in your project. Imagine seeing the following diff and trying to decide why this change was made:
Any maintainer who comes across this code later will be forever puzzled by it. Should it be deleted? Perhaps uncommented? Mysteries waste time.
Code is read more than it is edited and it should be written in a manner that makes it quick to read and understand. We can improve code even more by writing with future maintenance in mind. By considering what a diff will look like as we write code we can keep our diffs small and easy to understand. Small and easy to understand diffs will lead to your team shipping faster and with fewer bugs.
All of the screenshots in this article were done in iTerm2 with diff-so-fancy. The
diff-so-fancy tool is fantastic for creating human readable diffs on the command line. Don’t miss the pro-tips for setting it up with
less using pre-configured search patterns to make skipping through a diff very quick.
How to uninstall an uninstalled app from the App Store?
I tried to install Xcode from the App Store. While it was being downloaded, a network problem occurred so the installation didn't complete. However, in the App Store, it appears marked as installed (so now I can't install it).
I don't see any uninstall option. Is there any way to uninstall it from the backend or something like that?
Xcode is an odd one since in the old days, you ran an installer. Then they sent the installer over the App Store so deleting the installer was just that - not uninstalling. Now the app is handled like other App Store apps - trash it and it's gone. Furthermore, people often have two versions of the app. Below are answers for each case - but it really matters exactly which version of Xcode you have and whether you need to run the old style installers to clean up a second install or a partial install.
In the special case of Xcode, the App Store downloads an installer app that you use to install Xcode. Look in /Applications for "Install Xcode.app" and delete that.
When you get Xcode installed properly, if you delete the installer, the App Store won't think it's installed and won't display updates.
This one worked for me. My situation was that Migration Assistant migrated an Xcode 3.x for me on top of an Xcode 4 installed from the Mac App Store. I tried trashing Xcode.app, sudo /Developer/Library/uninstall-devtools --mode=all, and logging out and back in, to no avail. The "Install [Xcode]" button was reactivated at last after trashing /Applications/Install Xcode.app.
Just throwing out some random suggestions:
First of all, remember to empty your trash.
Check ~/Music/iTunes and see if you can find it there. Try moving anything related to Xcode to the trash (and empty).
See if you have /Library/Developer. Maybe you can find any uninstaller there.
This is the first time I've ever heard of installing Xcode with iTunes, but try the Xcode command-line uninstaller: sudo <Xcode directory>/Library/uninstall-devtools --mode=all. If the uninstaller isn't there (because you say it's incomplete), just delete the folder (usually at /Developer) or reinstall by downloading the .dmg that can be found at http://developer.apple.com/.
Man... Xcode is not actually installed, that's the problem (thus the Xcode binary is not on my system). Yes, some time ago, Apple published Xcode on iTunes ($4, which is good for someone like me who doesn't want to get into the Apple Developer program, which costs $99 yearly). So... the problem here is that iTunes thinks Xcode is installed, but it's not... so how can I tell iTunes that Xcode is not installed at all?
Actually, @Cristian, you can download it for free from http://developer.apple.com
Man, I was not able... some months ago I installed Xcode 3.X (on other mac) for free from apple's web site. But now I can't find a link to download the .dmg (and that's why I tried to download it from App Store, not iTunes... it was a typo, sorry, I'm kind of tired XD).
Xcode 3.6 is free. Xcode 4.0 is not free. You do need a free developer account to download 3.6 (not the paid account).
Xcode 4 is a paid app ($5, I think) and can be downloaded from the Mac App Store (not iTunes), or it can be downloaded for "free" if you are enrolled in a developer program (which costs $100/year).
The app store is marking this as installed because there's part of a .dmg or some other file type somewhere hiding. If you go to the 'Purchased' tab in the App store you should get the option to re-install once you remove this file.
The App Store downloads its files to a temp directory before moving the .app file to your Applications Folder. The directory is
/Users/<USERNAME>/Library/Application Support/AppStore/
Go to that directory and look for anything that says Xcode and get rid of it. Then redownload and install from the App Store's Purchased tab.
Had the same problem with "Pickr.app." and found a solution. My problem was that it didn't show up in Launchpad, but the Mac App Store said 'installed'.
Go to /Applications or ~/Applications and manually delete your app, in my case "Pickr.app", then reinstall.
Changing a stack should be possible using previous parameter values.
When updating a stack, most of the parameters do not change and it can often be cumbersome to pass in all the same parameters that the previous stack uses, especially if it was randomly generated (e.g. a generated password for a DB instance).
The aws-cli provides a UsePreviousValue option: something similar would definitely be useful. The straightforward way would probably be to have a flag --reuse-parameters <Parameter1>,<Parameter2>,<Parameter3>....
Thoughts?
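Mechanically, CloudFormation already supports this: an update call can pass UsePreviousValue for a parameter instead of a value, so a tool only needs to mark the parameters the user did not override. A hypothetical helper (names assumed; this is a sketch of the idea, not formica's actual internals):

```python
def build_update_parameters(existing_keys, overrides):
    """existing_keys: parameter names already set on the deployed stack.
    overrides: dict of name -> new value supplied by the user.
    Returns the Parameters list for an update-stack call, reusing the
    previous value for everything not explicitly overridden."""
    params = []
    for key in existing_keys:
        if key in overrides:
            params.append({"ParameterKey": key,
                           "ParameterValue": overrides[key]})
        else:
            params.append({"ParameterKey": key,
                           "UsePreviousValue": True})
    return params
```

With this shape, a generated DB password never has to leave the stack or land in a config file.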
Yeah especially for passwords you only want to send in once but then keep around and not write to a config file this would be very helpful.
In general I mostly use config files to store parameters which makes it easy to pass the same ones in all the time, but for this case something else would be neat.
First thought is to make this a config file only option as it would be easier to implement and more understandable for those edge cases. I don't think I want to add another flag because that could lead to confusion when you set a value on the parameter, but then tell it to reuse the same parameter.
But I do like the feature. I'll probably not get around to it for a while, but happy to work on it if someone wants to pick it up.
In general, I'm not the biggest fan of the config file, I don't use them myself. Especially since 99% of the time, I will need to write a wrapper script anyway, it's more convenient to deploy an entire stack via the command line. Correct me if I'm wrong, but currently the config is completely optional as all features are available via the CLI, so this would introduce an inevitable need for a config file just for this option.
Not all features are available on the cli, e.g. nested vars can only be set in config files (this should be the only one at the moment). I'm typically creating config files for different environments (dev.config.yaml, staging.config.yaml) and then for shared options create a common.config.yaml. Formica allows you to load multiple config files and for them to override values, e.g. formica new -c common.config.yaml dev.config.yaml
I want the cli options to be for most common use cases, but I'm generally fine with moving some options that need more complex or nested config to config files. One thing I could think of here would be the following config file:
stack: somestack
parameters:
SomeParameter:
reuse: true
In general, I'm not the biggest fan of the config file, I don't use them myself. Especially since 99% of the time, I will need to write a wrapper script anyway, it's more convenient to deploy an entire stack via the command line
Would be interesting to know why you don't like the config files or need to write wrapper scripts. Really intending formica to be used by developers directly (or in some Makefile probably).
Would be interesting to know why you don't like the config files or need to write wrapper scripts. Really intending formica to be used by developers directly (or in some Makefile probably).
Most of the time, I need to lookup some ARN of some resource (e.g. certificate) and pass it as parameter. For this use case, however, I'm beginning to think it's more idiomatic to export reusable resources in a different stack so I can !ImportValue them.
Or for the "reuse", I do it "manually" with
aws cloudformation describe-stacks --output text \
--stack-name $STACK_NAME \
--query 'Stacks[0].Parameters[?ParameterKey==`VariableToReuse`].ParameterValue'
Also, the lack of an upsert command currently forces me to run formica (change|new) depending on whether the stack exists. For the new command, I also automatically remove the stack on failure.
And since I have to do all these anyway, I don't see the value of having a config file in my repository. I might as well pass the few parameters via the command line.
For this use case, however, I'm beginning to think it's more idiomatic to export reusable resources in a different stack so I can !ImportValue them.
As long as they are in the same region then yes definitely export/import. There is one usecase when you want to use a certificate for a CloudFront distribution where the cert has to be in us-east-1 no matter where you create the CF distribution. For those cases I really think the config file (especially one for each environment) is a good tradeoff. You can store them in your repo and make it easy for anyone to come in and deploy.
And since I have to do all these anyway, I don't see the value of having a config file in my repository. I might as well pass the few parameters via the command line.
Most importantly here imho is that it documents the parameters and makes the config file usable by someone else on the team. CLI is great for quick things or when working on it only yourself, but it imho creates a situation where you could either have a typo and deploy something you don't want or somebody else doesn't know how to deploy a stack correctly.
That's where upsert also becomes dangerous, because a typo in a parameter could mean an unstoppable change happening. With change/deploy you at least always have a step in between.
There is now (for a while) an option to use previous parameters, so I'm closing this. Thanks for reporting!
This week I created two new wizards for importing and exporting the Ecore model. If you have ever created an Eclipse JFace wizard, you know there is a lot to do.
- Create the wizard class
- Create the wizard pages and add them to the wizard
- Initialize the values of the wizard and its pages depending on the selection
- Implement the code to update the status message
- Define when a page is complete
- Place the wizard page controls
- Connect the page controls that depend on each other
- Remember and store the settings from previous wizard usages
- Set the initial focus control
- Define the perform-finish code
Since these tasks are always the same for all wizards, we decided to use our actifsource templates to generate the wizard classes. You may notice that some of the tasks above can be done by super classes. For example, we can set up the layout manager, define the page-complete state and collect the status messages from the page control interfaces. We can also set the focus to the control containing the first undefined value. None of these tasks require the concrete types of the values.

When we come to the initialization, finish and settings-management code, we have to start writing a concrete subclass for the wizard, and these are the classes we generate. The templates automatically create the fields for accessing the concrete wizard pages, allowing access to the specific getters and setters that are not on the page interfaces. I also defined the dependencies of page controls in the model, so the code for connecting the controls with each other is created by the generator.

Code that has to be handwritten is placed in protected regions. It's the code for initializing, storing the settings, updating the status message and performing the finish action, but this time I don't have to create things like the field for accessing the wizard page and the initialization method for each page. I'm more or less forced to put the code in the right places. The structure of each wizard implementation is kept clean.
Now all we have to do is to create a wizard instance in the model and define the pages with their control and connect them. Afterwards the generator creates the whole skeleton and we simply have to fill out the gaps. If we decide to rename a wizard all class-, field- and method names coming from the model are changed by the generator.
Look at the model:
I defined a Wizard class which contains WizardPages containing inputfields (package selection, filename selection, etc.). Each inputfield has a name and uses a fieldeditor (page control) that may define other fieldeditors as dependencies. When the fieldeditor has a dependency, you must define a dependency from that inputfield to another inputfield using the required fieldeditor. For example, I can create a package field which is restricted based on the selected resource folder. After defining some names, titles and page descriptions I'm ready to go. Further work that could be done is writing templates for the fieldeditors.
Finally, this is one of the generated wizards; each row label, text field and button is a fieldeditor (page control).
One version There are no longer distinct Personal and Enterprise versions of Timeless Time & Expense or Timeless Project Tracking 3.0. Instead, you are given a choice during installation of operating in Personal or Enterprise mode.
Invoicing A new invoice management feature has been added to version 3. Because of this change, the Invoice report is no longer available. When upgrading, invoice reports are converted to un-posted invoices. Invoices using a relative data range (Current Month, etc.) need to be edited to select specific dates.
Invoices are linked to specific work items. You can find the converted invoices by going to the Invoice pane and selecting the work item that was selected in the original invoice.
Shared Reports Shared reports have been changed in version 3.0. Shared reports are more flexible in what they can contain and who they may be shared with. For this reason, each report is owned by the person who created it. Since previous versions allowed anyone with Create Shared Report permissions to change a shared report, converted shared reports must be given an owner. During the conversion, the owner has been set to the built-in administrator, TTEAdmin. To change the owner, log in as TTEAdmin, right-click the shared report and select the Change Owner popup menu item.
Other changes In addition to the new features of version 3, some functionality has moved. The Task properties of version 2.6 can now be found on the Plan pane. With the addition of built-in reports, custom reports have moved to the Report pane. The ability to choose columns on the Track pane is now done by pressing the columns button on the individual tabs. System configuration items previously found on the Tools | Options menu item, and license information is now located under the Tools | Administration menu item.
Installing When installing, previous versions of Timeless Time & Expense are not removed. You can remove them later using the Add/Remove programs.
There are two versions of the Windows installation. One that includes SQL Server 2005 Express and one that doesn't. If you will be installing a new installation and want to install SQL Server 2005 Express with your installation, or are currently using Timeless MSDE and want to upgrade it to SQL Server Express 2005 you should download the installation that includes SQL Server 2005 Express.
If you are using MS Access or your own SQL Server installation, you should download the install without SQL Server 2005 Express.
Additionally, all client installs can use the version without SQL Server 2005 Express. Note - the smaller version can still install SQL Server 2005 Express, but it will be downloaded during installation, resulting in a longer installation process.
Once installed, you will start with a 30-day trial of version 3.
MS Access databases During installation you have the options to create or upgrade an existing Timeless Time & Expense Access database. During the upgrade, a new 3.0 database will be created. Your existing *.tmd database file will not be changed and can still be opened in the old version. Note - when upgrading a Timeless Access database to the new version 3 format, user passwords will be cleared. Be sure to have users reset their passwords after logging in the first time.
Timeless MSDE databases If you would like to upgrade your existing Timeless MSDE to SQL Server 2005 Express, run the installation on the machine running the Timeless MSDE database server. If not, you can upgrade the database during the installation on the first client install. When upgrading, the current database will be changed for version 3.0. While the installation should back up the database before upgrading it, we recommend you make a backup before starting. After the database has been upgraded it cannot be opened in previous versions of Timeless Time & Expense.
SQL Server databases If you are upgrading a database on your own SQL Server, you can upgrade the database during the first client installation. When upgrading, the current database will be changed for version 3.0. While the installation should back up the database before upgrading it, we recommend you make a backup before starting. After the database has been upgraded it cannot be opened in previous versions of Timeless Time & Expense.
We organize our projects by portfolios. It would be helpful to be able to have project templates automatically added to portfolios based on specific criteria (company name, department, etc.)
Thanks for your feedback, @Sara_Skowronski! This would be very helpful! I’ll let you know if I have any news about this feature in the future.
I am building a new template for my publishing team for creating reports. The template has quite a few tasks, 25 - 30. We’re creating probably ~60 projects per quarter.
Resourcing via calendar has gotten painful and I really want to start using workload. However, I am SHOOK that there seems to be no way to get a project created from a template automatically into a portfolio so we can use workload without projects being missed due to human error.
To clearly state my question: How can I have a project created from a template automatically added to a portfolio for use in workload?
I know I can auto add newly created tasks to another project automatically which could already exist in the portfolio, but I can’t build that into a template. I can also add all tasks from existing projects into a second larger project that is already in there, but still its manual and doesn’t address newly created projects.
Finally, at the scale we’re at, not being able to add projects to a template in any way other than one by one manually is frustrating.
If there are any workarounds I’m missing, I’d love to hear them, otherwise PLEASE PLEASE PLEASE make the option for teamwide automation or team workloads or template automation or any improvement here.
This would be so helpful! A lot of my team members forget to add projects to Portfolios after creating them.
How do we get it into development?
Ok after poking around a bit, I found what I was looking for! Not sure if this is helpful for you Sandy, but if you click the caret next to the project name, you can add to Portfolio from there! It will also show what other Portfolios the project is in.
Curious – how long has that been available?
I am not sure! @Emily_Roman - do you have any idea here? It is super helpful and I had no idea it existed!
When a project template is added to a portfolio:
- As a template item, it should not appear in the portfolio
- I want this action to carry over when a project is created from this template.
That is, for each project from template, also add those projects to the portfolios indicated in the template.
To further remove the work about work, when I use the SEO template, I want that project to also be added to the SEO Portfolio. Only makes sense.
Hi @Getz_Pro, thanks for providing this feedback!
We do have an existing thread for this request in the #productfeedback category so I’ve gone ahead and merged your post with the existing one to consolidate feedback.
Hopefully this is something we can implement in the future I’ll keep you posted in the main thread.
Great point and I am starting to see the urgent need to have this implemented.
The more we move with templates and automations, the more we need these simple features to enhance efficiencies. Upvoted!
I honestly can’t figure out why, when I save a project template to a portfolio and then create projects from that template, they aren’t automatically added to the portfolio – it’s saved in the template. Why is it ignoring that setting of the template?
We use templates a lot for our projects. It would be awesome to be able to create a rule to add a project into a portfolio automatically.
If this is already a feature please let me know!
Welcome to the forum. This is a fantastic rule and we are asking for it.
There is an already open product feedback tab asking for this. Here is the link: Automatically add projects to Portfolios from project template
If you add your vote to the existing post, that would be wonderful.
This would be incredibly useful for us too! We need the ability to assign a project to a portfolio upon creation of that project using a template. So, the following two functions are greatly needed:
- Portfolio selection function when creating new project - When creating a new project (from a template or blank) use “Add to portfolio/s” function when setting the metadata for that project, so placement occurs at the creation stage rather than the two step process of allocating it to a portfolio/s after creation.
- Automation function for templates - so that for example all ‘Comms Plan’ template projects are automatically added to the ‘Comms Plan’ portfolio.
Hopefully this happens very soon as it would significantly enhance our capacity (as we’re creating 60-90+ of these types of projects monthly-quarterly atm).
I am struggling to find efficiencies with the Portfolio section. I thought I had found a solution in the “create new project” option in the “Add Project” drop down on the Portfolio level, however, when you go this route there’s no option to apply a template to the new project being created. I don’t really want to trade off manually adding each project to a portfolio with manually rebuilding all my custom fields, all my rules, and all my default tasks each time, either.
Can we add this to the “Create Project” screen of the Portfolio? If we’re not allowing projects to be assigned to a Portfolio on the project level then this seems an easy alternative.
Thanks for your feedback @Leslie_Irvine1! This is a great idea! We also have this request: Add a project to a portfolio, while I’m in the project itself in case you are interested to upvote! I’ll make sure to update this thread if I have news about this feature.
Yep, I had previously upvoted that feature as well.
|
OPCFW_CODE
|
#include <Adafruit_GFX.h>
#include <Adafruit_ST7735.h>
#include <SD.h>
#include <SPI.h>
// TFT display and SD card will share the hardware SPI interface.
// Hardware SPI pins are specific to the Arduino board type and
// cannot be remapped to alternate pins. For Arduino Uno,
// Duemilanove, etc., pin 11 = MOSI, pin 12 = MISO, pin 13 = SCK.
#define SD_CS 4 // Chip select line for SD card
#define TFT_CS 10 // Chip select line for TFT display
#define TFT_DC 8 // Data/command line for TFT
#define TFT_RST -1 // Reset line for TFT (or connect to +5V)
Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);
#define BUTTON_NONE 0
#define BUTTON_DOWN 1
#define BUTTON_RIGHT 2
#define BUTTON_SELECT 3
#define BUTTON_UP 4
#define BUTTON_LEFT 5
int x = 10;
int y = 10;
void setup() {
x = 10;
y = 10;
tft.initR(INITR_REDTAB);
tft.fillScreen(0x0000);
drawScreen();
}
uint8_t readButton(void) {
// Convert the ADC reading (0-1023) to a voltage on a 5V reference;
// each button on the analog resistor ladder produces a distinct voltage.
float a = analogRead(3);
a *= 5.0;
a /= 1024.0;
if (a < 0.2) return BUTTON_DOWN;
if (a < 1.0) return BUTTON_RIGHT;
if (a < 1.5) return BUTTON_SELECT;
if (a < 2.0) return BUTTON_UP;
if (a < 3.2) return BUTTON_LEFT;
else return BUTTON_NONE;
}
void clearScreen(){
tft.fillRect(0, 0, 35, 16, ST7735_BLACK);
tft.drawFastHLine(0, y, tft.width(), ST7735_BLACK);
tft.drawFastVLine(x, 0, tft.height(), ST7735_BLACK);
}
void drawScreen(){
tft.drawFastHLine(0, y, tft.width(), ST7735_RED);
tft.drawFastVLine(x, 0, tft.height(), ST7735_BLUE);
tft.drawPixel(x,y, ST7735_GREEN);
tft.setCursor(0, 0);
tft.setTextColor(ST7735_WHITE);
tft.setTextSize(1); // smallest text size (Adafruit_GFX clamps 0 to 1 anyway)
tft.print("x: ");
tft.println(x);
tft.print("y: ");
tft.println(y);
}
void loop() {
int dx = 0;
int dy = 0;
uint8_t b = readButton();
if (b == BUTTON_DOWN)
dy++;
else if (b == BUTTON_LEFT)
dx--;
else if (b == BUTTON_UP)
dy--;
else if (b == BUTTON_RIGHT)
dx++;
else if (b == BUTTON_SELECT){
/**
tft.fillScreen(ST7735_BLACK);
tft.setRotation(tft.getRotation()+1);
drawScreen();
delay(500);
*/
tft.invertDisplay(true);
delay(100);
tft.invertDisplay(false);
delay(100);
tft.invertDisplay(true);
delay(250);
tft.invertDisplay(false);
return;
}
else
return;
clearScreen();
x += dx;
y += dy;
// Wrap the cursor; valid pixel coordinates run 0..width-1 and 0..height-1
if(x >= tft.width())
x = 0;
else if(x < 0)
x = tft.width() - 1;
if(y >= tft.height())
y = 0;
else if(y < 0)
y = tft.height() - 1;
drawScreen();
}
|
STACK_EDU
|
Phased array minimum size
I want to know two things: how big does a phased array need to be to form a proper beam, and what is the smallest area the beam can be focused on? I learned a little bit about lasers: with a focal lens, the smallest spot it can be focused to is 1 wavelength in theory, in practice about double. From loudspeaker theory I know that for a driver to exhibit directional radiation it needs to be at least 1 wavelength big; significant beaming doesn't happen until the driver is 5 times the wavelength.
But this isn't optics or acoustics; how does this work for microwave phased array transmitters? I want to achieve a small spot size, around 1 cm, and very low divergence, a pencil-like beam, but I would also like to use relatively lower frequencies like 2.4 GHz, which has a 12.5 cm wavelength. If it was like speakers and lasers, the smallest spot size would be 12.5 cm, and the array would be too large, around 60 cm in diameter.
I know that dielectric materials slow down the propagation of the electric field, thus shortening the wavelength without increasing the frequency. Is it possible to use this property to make a small-size array that is able to focus to a spot size below 1 wavelength in vacuum/air?
Dielectric materials shorten the wavelength, but it increases right back to the original value when the beam leaves the dielectric and goes back into air or vacuum. I don't think making a spot size smaller than 1/10 wavelength is very likely to be doable.
I am aware that is the case, but I am not sure if that prohibits focusing below 1 wavelength in air if the antenna is surrounded by a high-dielectric material. Even if the beam can't have a smaller diameter or focus below 1 wavelength, I believe it might help with the beam forming, so instead of a large array I can use a smaller one; the minimum spot size doesn't change, but maybe the effective array size changes.
Beam width, in the far field, is approximately Wavelength / (Diameter of the array). Same as for optics. So you can decide how small you want your beam. To avoid creating other beams, the so-called grating lobes, you need a filled array, with elements less than 1 wavelength apart.
How wide, and how many elements in a row (considering a rectangular array shape), would you recommend for 24 GHz? I want to send a tube-like beam to a distance of 10 meters; I would like the beam width to be as close to 1 wavelength as possible.
If you're about to design 24 GHz antenna arrays, I'm afraid that I have to be very honest with you: you need to go and read a (better) book on antenna and antenna systems design. Your knowledge about loudspeaker design does help, but not much. Your question indicates you haven't really understood the basic math behind antennas and electromagnetic wave propagation, and 24 GHz is a frequency where a lot of the more subtle kinky details start to show, where even engineers with a solid EM education start to wave their hands and leave it to the experts. Building a 24 GHz array feed system is challenging enough!
You'll need a parabolic dish, with an aperture of 100 or 200 wavelengths.
Just like lens apertures and loudspeaker arrays, a phased array antenna needs physical width to be able to focus a tight beam.
The tightest beam you can form is limited by the diffraction limit for that width, in the ballpark of wavelength/array_width. To get a 'reasonably' focussed beam, you will need an array width of 'many' wavelengths.
I notice from the comments that you want to use 24GHz, which has a free-space wavelength of 12.5mm, which suggests that on the face of it, the physical width of the array you need is going to be the least of your problems. Then you want 1 wavelength width at 10m, or about 1mradian beam width. That's going to need an array width of the order of 1000 wavelengths or >12m, which is totally impractical (not that 24GHz is practical for an amateur). You may need to rethink your specifications.
24 GHz has a wavelength of 1.25 cm, not 12.5; you are mistaking it for my earlier mention of 2.4 GHz, the WiFi & Bluetooth ISM band. How many wavelengths does the array need to be in order to get a 2 or 3 wavelength wide beam at a distance of 10 meters? Is there an online calculator where I can input frequency and array width and it shows the radiation pattern? And what about my dielectric theory: can a high-permittivity dielectric slow down the waves, shortening their wavelength, so the array acts like a bigger array with an air dielectric and I can use a smaller array for the same narrow beam?
@wavscientist in my book, 12.5mm is 1.25cm. If you went for a middle ground of perhaps a 1 ft aperture, that could achieve in the ballpark of a 1 ft beam at 10 m. If you fill the entire space between your antenna and your target with dielectric, say fill the room with oil, then yes, the reduced wavelength will reduce all the sizes. But if you have to pass through air, that's not much different from vacuum, and any dielectric loading within or on the antenna is not going to help beam-forming in air at all.
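For readers who want to sanity-check these numbers, the wavelength/aperture rule of thumb from the answers above fits in a few lines of Python. The function names are mine, and this is only the far-field diffraction estimate (it ignores aperture taper, grating lobes, and near-field focusing):

```python
def beamwidth_rad(wavelength_m, aperture_m):
    """Far-field diffraction-limited beamwidth, roughly lambda / D radians."""
    return wavelength_m / aperture_m

def aperture_for_spot(wavelength_m, spot_m, range_m):
    """Aperture needed so the beam is about spot_m wide at range_m."""
    return wavelength_m * range_m / spot_m

c = 299_792_458.0   # speed of light, m/s
wl = c / 24e9       # ~12.5 mm at 24 GHz
# A 2-wavelength (~2.5 cm) spot at 10 m needs roughly:
print(aperture_for_spot(wl, 2 * wl, 10.0), "m of aperture")  # ~ 5 m
```

Note that for a fixed spot size in metres the required aperture scales linearly with wavelength, so dropping from 24 GHz to 2.4 GHz makes the array ten times wider again.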
|
STACK_EXCHANGE
|
If I want my conlang's compound words not to exceed 3-4 syllables in length, what kind of phonology should my conlang have?
I've thought about using phonemic tones and permitting lots of clusters as ways of keeping my words short, but I don't want the syllables to be so heavy that every compound becomes a tongue-twister.
How should I go about making a happy medium between a phonology that generates only very heavy single syllable words vs. a phonology that is so simple that compounds must each comprise 6 or more syllables?
In case the future reader has not heard the term, 'phonotactics' is useful to look up here.
Related: https://conlang.stackexchange.com/questions/1608/how-does-one-go-about-designing-phonotactics-for-a-conlang
If you want fewer syllables per word, you'll want a larger number of possible syllables. (For a metaphor, think about how many letters vs how many kana vs how many kanji you need to represent a particular Japanese word. The more possible glyphs/syllables you have, the fewer of them you need to convey the same amount of information.)
Some good ways to do this:
Allow lots of different coda consonants.
Have lots of different vowels.
Allow clusters of two consonants in onsets or codas, instead of just one.
For example, if you start out allowing only CV syllables, and then you decide to add long vowels, that doubles the number of possible syllables. If you allow CVn instead of just CV, that doubles it again. If you allow sC instead of just C, that's another doubling…
This is why English has over ten times as many common syllables as Japanese (Oh's corpus analysis gives 6,949 vs 643 in the 20k most frequent words), and thus why English words consist of fewer syllables than Japanese ones. We have a whole lot of vowels, many possible codas, and very elaborate clusters in both onsets and codas (consider "strengths"). Japanese only allows two possible coda consonants (N and Q) and the only valid onset cluster is Cj.
why only mention coda consonants? Lots of distinct consonants will increase the number of syllables regardless (although having ones allowed in both onset and coda will do so fastest)
An example is some dialects of Inuktitut. A syllable is (C)V(/p t k q/) (ignoring sandhi in the final consonant). With 15 consonants, 3 vowels that can be long or short for 6 total, and the 4 consonants allowed in the coda, that's 360 potential single syllable words. Depending on the accents and such, English has upwards of 300,000 potential distinct one-syllable words.
@Tristan Mostly because coda consonants are almost always (always?) more restricted than onset consonants. Increasing the number of onset consonants, when there are already a lot of them, has less of an effect than increasing the number of coda consonants, when there aren't many. But of course you're right, more distinct consonants also gives more syllables.
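The doubling arithmetic discussed above is easy to play with in a few lines of Python. The inventories below (15 consonants, 5 short vowels) are hypothetical, just to show how each added option multiplies the syllable count:

```python
def syllable_count(onsets, nuclei, codas):
    """Distinct syllables when onset, nucleus and coda vary independently."""
    return len(onsets) * len(nuclei) * len(codas)

consonants = list(range(15))            # stand-ins for 15 consonant phonemes
short_vowels = list("aeiou")
long_vowels = [v + ":" for v in short_vowels]

cv_only = syllable_count(consonants, short_vowels, [None])
# Adding long vowels doubles the nucleus inventory...
with_length = syllable_count(consonants, short_vowels + long_vowels, [None])
# ...and allowing an optional nasal coda doubles the total again
with_coda = syllable_count(consonants, short_vowels + long_vowels, [None, "n"])
print(cv_only, with_length, with_coda)  # 75 150 300
```

Each independent slot multiplies the total, which is why loosening codas and clusters shortens words so quickly.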
|
STACK_EXCHANGE
|
At the University of Glasgow the Operating Systems (H) course aims to introduce students to the styles of coding required within an OS; to give a thorough presentation of the contents of a traditional OS, including the key abstractions; to show the range of algorithms and techniques available for specific OS problems, and the implications of selecting specific algorithms for application behaviour; to develop an integrated understanding of what the computer is doing, from a non-naive view of hardware to the behaviour of multi-threaded application processes; and to present the alternatives and clarify the trade-offs that drive OS and hardware design.
At the University of Glasgow the Systems Programming (H) aims to introduce students to low-level systems programming. It focusses on programming in an unmanaged environment, where data layout matters, and where performance is critical. This might include operating systems kernels, device drivers, low-level networking code, or other areas where the software-machine interface becomes critical. The course uses a low-level systems programming language, for example C, to introduce these concepts. Students are expected to learn the basics of this language in a self-study manner prior to entry, however a review of the major concepts will be provided at the start of the course. This material is an essential prerequisite for the Operating Systems (H) and Networked Systems (H) courses.
At the University of Glasgow the Computing Science - 1S Systems course introduces the fundamentals of computer systems, including representation of information, digital circuits, processor organisation, machine language, and the relation between hardware and software systems.
At the University of Glasgow the Professional Software Development course introduces students to modern software development methods and techniques for building and maintaining large systems and prepares students to apply these methods and techniques presented to them in the context of an extended group-based software development third year Team Project. This aims to make the students aware of the professional, social and ethical dimensions of software development; and instil in the students a professional attitude towards software development.
At Glasgow Caledonian University the DevOps module provides third year students with an overview of modern practices in Version Control, Configuration Management, Quality Assurance, Continuous Integration and Delivery, as well as all other aspects of DevOps, including culture and history.
Integrated project 1
At Glasgow Caledonian University the IP1 module is the first year team project for the GLA program where Apprentices have the opportunity to work on a project that is relevant to their company and is approved by their managers.
Hardware Software Interface
At Heriot Watt University the F28HS module covers C, Assembly, Systems and Hardware interaction utilising Raspberry Pis to demonstrate interfacing to hardware and low level programming languages.
University of Glasgow. This introduction to computing covers a broad variety of material to prepare students for their first year at the University of Glasgow and encourage them to select Computing Science. The topics addressed include an introduction to programming (Python) and an introduction to logic gates, hardware, ethics and general principles of the discipline.
|
OPCFW_CODE
|
Okay, so it's not technically miniature gaming, but since (1) the Iron Kingdoms RPG is based entirely off the Warmachine/Hordes games; and (2) the combat system really calls for miniatures; and (3) it's my blog, and I'll blog what I want to, blog what I want to. . . sorry, flashback to an old song there.
Anyway, the first thing is the character creation mechanic. Choose an archetype, choose a race, choose two careers, and you are basically done. Making a new character is a fairly simple process, and it's a good thing, since combat can be quite vicious, what with mighty characters swinging around weapons, boosting attack and damage, and all that. You don't have many circles in the life spiral (damage track, just like in Hordes, and very similar to the damage grid in Warmachine), and one good hit can really ruin your night.
Combat is pretty simple as well, though I say this coming from a WM/H background, plus a really extensive RPG background. IKRPG combat is at least a couple of orders of magnitude easier than Chartmaster (I mean, Rolemaster) or Hackmaster.
So, I like the game. I like the background, I like the core system. . . I just didn't care too much for the intro adventure. But, it's an intro adventure, so really, I should not ask too much of it, right?
Ran the intro on Friday for two guys from my normal RPG group and two guys from my WM/H group on Saturdays. (Actually, both groups are small enough that they really aren't "groups," but I can't think of any good word for a gathering that small.) My RPG guys had played the adventure before, so I tried to ensure that the WM/H guys got to choose characters, take the lead, etc. Not always as successful as I would have hoped, but they seemed to enjoy it.
It seemed to me that everyone got along well enough as well. Granted, only one session, past performance is not indicative of future results, your mileage may vary, ask a paid professional, do not attempt, blah blah blah boilerplate. Anyway, I think everyone wants to make a pirate crew, and go from there. I hope so - I have been jotting down some ideas, and looking through fluff (in the TT game and in both editions of the RPG), so hopefully everyone actually does want to play.
More later, but it's been a long day, and since I have blown out the jack o'lantern and turned off the porch light, maybe now the little extortionists-in-training will leave in peace for a bit.
|
OPCFW_CODE
|
no signal with dvi-i to vga adapter while trying to configure dual display (debian 9 lxde)
I have 2 monitors, one connected with a hdmi cable to my pc and one connected with a vga cable and a dvi-i to vga adapter, plugged in the dvi-i port of my gpu.
Under Windows 7 it works perfectly, but under Debian 9 (LXDE) I don't get an output.
xrandr gives me only my hdmi connection.
In Menu ⇒ Settings ⇒ Monitor Settings is only the hdmi monitor displayed.
My graphic card is amd radeon r9 270, drivers are installed.
I have tested this setup with other desktop environments either but with no success.
The second screen works during boot, but just when LXDE starts it loses the signal.
Buying another HDMI cable is not a solution because my GPU has only one HDMI port (and 1 DVI-I, 1 DVI-D and a DisplayPort). A DVI-to-HDMI cable didn't work either (pc - dvi - hdmi - monitor).
My target is to extend my desktop.
EDIT kemotep:
I have downloaded my gpu driver here: AMD Radeon™ R9 270 Previous Drivers and executed the file after download.
My package manager (synaptic) tells me that the drivers are installed (I have searched for radeon)
I haven't configured my display because debian doesn't discover it.
"and yes, I have asked Google..." yes, there are thousand guides out there how to setup dual monitors. Most of them did not faced my problem. Under linux, my 2nd monitor is not displayed at all.
K7AAY:
It is not the hardware's fault because my monitors are working fine with win7. Its anything with linux...
Welcome to the Unix and Linux stack exchange! Please review the Help Center to get information on how to best post to the site. Take the Tour if you are not familiar with how this site works. To get to your question, could you outline the exact steps you took to install your drivers, and configure your display? Have you looked up how to set up dual monitors under LXDE specifically? If so link the guide. Please edit your post to include these details. Thank you!
updated @kemotep . edit - i was at stackoverflow before so im not completely new
I know this may be frustrating but you need to not install your graphics drivers that way when you use Debian. Please remove all of your graphics drivers and settings, in some cases a reinstall of your operating system will be cleaner and faster, and install your graphics drivers following this information. After you have done that please edit your post to include the output of lscpi -nn | grep vga and xrandr. Confirm that your monitor is connected and powered on. Settings > Display Settings should detect the 2nd monitor as soon as you plug it in.
it returns command not found. but this doesn't matter: after I've installed the drivers and rebooted I had a 2nd monitor by default. while booting there were some error messages about AMD not finding anything, but it's working. never change a running system. thx :P
sorry that should have been lspci not lscpi that is my mistake. The best way to manage your Debian system and have the most "stable" experience is to only install and update packages through the Debian package manager apt and only use the official repositories for your given Debian release. I am glad to hear everything is working now.
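Once both outputs appear in `xrandr`, the extended desktop itself can also be configured from the command line; the output names below (`HDMI-0`, `DVI-0`) are only examples, and the real names come from running `xrandr` with no arguments:

```shell
# List detected outputs and their modes first
xrandr
# Extend the desktop: second monitor placed to the right of the HDMI one
xrandr --output HDMI-0 --auto --primary \
       --output DVI-0 --auto --right-of HDMI-0
```

The desktop environment's Monitor Settings dialog does the same thing graphically once the driver detects both outputs.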
|
STACK_EXCHANGE
|
I've talked with 3 WASM related startups today and every devtool investor seems to be looking for a WASM play.
Everyone seems to have strong conviction that the future will be safer/faster/polyglot if we just rewrote all the things to WASM.
no i have not. the sense i get from people is its a good proof of concept but nobody's really learning it to compile to wasm.
Good take on js. More curious if typescript to wasm will take off. Given you already have a compiler, and a damn fine one at that, you could get some wicked wins & even leave backwards compatibility modes
I have used it and reallllly dig it. The biggest problem is the same as most targets: if you need a garbage collector it's tough. Theirs is really robust but it's still a hit. Because of that I prefer Rust, but it's impossible to debug anything without sourcemaps, which AssemblyScript has
I dug in myself to be able to tell what's real/call BS, at least a little bit.
From there, I start with my usual question: "what is this *not* good for?" then probe the specifics of what we're looking to do.
Helps that what we're looking to do is narrow.
Hmm, won't that lead to even more consolidation of power and influence in the big 3 browsers? One of which I have in mind is already trying to perform manifest destiny on the web and succeeding. WASM is awesome tech, I wonder though if we're actually ready for it
I might take the other side of the bet:
That new Typescript-native runtimes and deployment targets get so good (and can seamlessly call into WASM when really needed) that the answer for 95% of use cases remains “why bother?”
Yes! The server-side Wasm ecosystem should include startups & big companies. Both have strengths. Right now, a lot of Wasm talent is at big co's like Google, Shopify & Fastly. Some of these engineers will need to leave to build out the startup ecosystem more. That's the hope. 🙂
Everything sort of hinges on web IDL bindings. Right now wasm is only faster some of the time, and it’s definitely not faster for low-CPU tasks like DOM manipulation. But this is because it’s not a fair fight—web IDL bindings would make wasm’s dominance almost inevitable.
I think if you look at enterprise users you'll see a lot(!) of WASM via Blazor, obviously that doesn't require anyone to actually understand WASM at all and the tools are already taken care of. I agree server side WASM will be more interesting (fast + portable).
If WASM+WASI existed in 2008, we wouldn't have needed to create Docker. That's how important it is. WebAssembly on the server is the future of computing. A standardized system interface was the missing link. Let's hope WASI is up to the task!
It’s not because you (re)write it in WASM that it will be faster. This is a misconception. WASM is an extension of JS, not a replacement. @elmd_ and I gave a workshop on WebWorkers + WebAssembly and that was also our takeaway for the attendees.
Maybe in the far future, but I've tried WASM (in production) more than once, sometimes compiled from C, sometimes from Rust, and it's an absolute nightmare to interop correctly, on all browsers and runtimes. Just importing it is very cumbersome. I gave up on WASM.
Concrete example: Rust is about fearless concurrency (e.g. with rayon) and wasm-bindgen-rayon simply doesn't work on Firefox. And it may work on Chrome (after you fight the super convoluted way of importing wasm) but if you have a single runtime error it's very weird to debug.
I've realized that unless you're building something really CPU or GPU intensive, like games, there is not much you gain from WASM. JS on the other hand runs *everywhere* without compilation (this is extremely underrated benefit), is easy to use and debug when it fails in runtime.
The main advantages of wasm are:
programming language interop
portability to different architectures and platforms
With arm, risc-v, x64, iOS, macos, windows, Linux and android cross-compiling and platform incompatibilities are becoming a nightmare
I say do not rewrite everything with WASM. Find use cases for it and use it pragmatically. JS is really fast and runtimes like V8 optimize JS really well and WASM is not a silver bullet for performance. Polyglot sounds like the right direction.
once kotlin has js + wasm interop, i'm just gonna start writing almost everything in svelte and kotlin
like, imagine a vite/webpack/rollup plugin to import *.kt files into js, plus auto-generated typedefs
wasm is delivering what java promised in the 90s and early 2000s, yet there is less hype
I got laughed at the other day when I told our embedded devs that in 5 years we would be compiling our code to wasm instead of x64 (windows and Darwin) and arm7 (Darwin and Linux)
If we replace JS with WASM then web development will become a field I don't want to be in. Currently I have to know a ton of frameworks and tools (BE and FE). If we add WASM I have to know a ton of frameworks and tools for a dozen of languages and the languages themselves...
|
OPCFW_CODE
|
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU (Tesla) • DeepStream Version 6.1.1. • JetPack Version (valid for Jetson only) NA • TensorRT Version • NVIDIA GPU Driver Version (valid for GPU only) • Issue Type( questions, new requirements, bugs)
We are building a custom Deepstream application with YOLO, Tracker, and Slow Fast models. However, we wanted to build a custom Tracker and a custom Slow Fast model instead of using the existing models in the Deepstream.
Could you please help me answer the following questions in regard to the above problem statement:
Can an output of the probe be used as input for the GST element?
Does implementing a custom probe result in any reduction in the latency? or usage of GPU capabilities that the Deepstream provides?
Can we have multiple probes and use the first probe output for the next probe?
We are developing in a Python environment.
Could you please share if there are any alternate ways to implement the custom models? Any resources are appreciated.
What do you mean by the “output of the probe”? The pad probe functions are just callbacks evoked by some specific pad state (see GstPad).
No. Some probe types will block pads (see GstPad). Even with the non-blocking probe types, the processing in the callback will not be accelerated.
Yes, but it's not necessary. The probe type is a mask (see GstPad).
Please make sure you are familiar with GStreamer knowledge and coding skills before you start with DeepStream. If you want to use python, please make sure you are familiar with gst-python too. Python GStreamer Tutorial (brettviren.github.io) This is DeepStream forum. We will focus on DeepStream here.
What do you mean by the “output of the probe”? The pad probe functions are just callbacks evoked by some specific pad state. [GstPad]
The output of the probe - In the probe function, we are doing some processing on the frames.
Now, we would like to access the above processed frames in the Gst pipeline. For example, we would like to stream the processed frames to the RTSP sink, that we created as a sink element in the pipeline. Is this feasible?
There is no update from you for a period, assuming this is not an issue anymore.
Hence we are closing this topic. If need further support, please open a new one.
So you want to change the content of the GstBuffer. Yes, you can, but there are some limitations. You cannot do any time-consuming processing in the probe function, since that will hold the GstBuffer and block the whole pipeline. And you cannot do any processing that would impact the caps (see Caps, gstreamer.freedesktop.org). These are basic GStreamer knowledge and coding skills. Please refer to the GStreamer sources.
The video data in the CUDA-based HW buffer NvBufSurface is attached to the GstBuffer. There are already lots of samples of how to get NvBufSurface from GstBuffer. Please refer to the samples in /opt/nvidia/deepstream/deepstream/sources
Please study GStreamer by yourself. This is DeepStream forum. We will focus on DeepStream here.
|
OPCFW_CODE
|
Wave merger software is designed for merging multiple WAV audio files into one large sound file without degrading sound quality. MP3 Cutter is one of the best free MP3 cutter apps for Android users. In this app you can easily create any ringtone you like: select any of your favourite MP3s from your phone or a recording, choose the section to be cut from the audio, and save it on your phone. Use it as a ringtone, music, alarm, or notification. Simply preview and play the whole list of output ringtones. You can easily manage each ringtone file: edit it, delete it, or set it as a ringtone, notification, or other sound. Easily share your edited MP3 with your friends, family, and others.
MP3 Splitter & Joiner is able to join and split large MP3 files up to 2 GB (about 2000 minutes). Tip: you can use the Ctrl+A key combination to select all the WAV files you want to merge. Some software may have this feature, e.g. an audio editing program, but doing it that way can take some work, and if you want to merge more than three songs the operation becomes even more troublesome. This page offers a simple program that can quickly and easily merge multiple MP3 files.
But there are good reasons to keep downloading music, chief of all being that you can't own the music that you stream. Bandwidth concerns are another, which is why plenty of users still download YouTube videos as MP3s. Tick the "Merge into one file" box and click "Convert" once you have chosen the audio format for the consolidated audio file. After a successful conversion, click "Open Folder" to find the combined audio file.
Choose the audio format to which you'd like to convert the consolidated audio file, then click "Convert" to save the changes. Using Freemake Audio Converter, you can convert WMA to MP3 and other audio formats as well. It does not only handle MP3-formatted audio but also more than 15 other audio formats. This free MP3 joiner supports a large number of input audio formats, including MP3, WMA, WAV, AAC, FLAC, OGG, APE, AC3, AIFF, MP2, M4A, CDA, VOX, RA, RAM, and TTA as source formats. Any audio files and audiobooks can be joined into the most popular audio formats, such as MP3, OGG, WMA, and WAV.
Audio Cutter Joiner is a powerful audio editor which combines an audio splitter and an audio joiner in a single program. Click Add Files to open the "Select files to merge" window shown directly below. In this guide, you will learn how to combine two audio files into one online, and how to merge audio files offline with the best audio merger software.
For such purposes, you could use an all-round audio editing freeware program like Audacity, but that is not the most convenient or efficient way. Your best bet is probably a smaller, more specific program for the job: a lightweight freeware splitter or joiner. MixPad is a free music mixing app for computers (Windows, Mac) that lets you edit, cut, arrange, and add effects to your audio files. You can add multiple files to the MixPad timeline, or record audio. It's really easy; you don't need any extra knowledge to use this software.
Batch audio converting software converts multiple audio files at once, e.g. MP3 to WAV, OGG to MP3, or WMA to M4R. Power MP3 Cutter and Joiner is indeed a powerful tool that helps you split and merge MP3 audio files effortlessly. It is an easy-to-use MP3 editor that combines an audio splitter and a merger in one program. Apart from MP3 it also supports many other file formats, such as WAV, OGG, and WMA.
Step 1: Open Audacity and drag the WAV audio files you want to merge into it. After that, you will find the music tracks listed on the interface. • Audio Bitrate Changer: with Timbre, you can quickly compress your MP3 or M4A files and pick a custom bitrate. Join Multiple WAV Files Into One Software allows for very little tinkering, so the whole process basically runs on default settings; your contribution is reduced to a minimum, essentially just the file order and the output location, a limitation which may not sit well with some.
It seemed like Merge MP3 might work, and the fact that it doesn't re-encode was a plus for low-bit-rate streams. Moreover, the program had been used a couple of times in the past. It can join an unlimited number of files. Audio Joiner also includes Crossfade and Fade-out transition effect buttons to the right of the tracks. You can click these buttons to toggle their effects off or on.
Combine many separate music tracks into one non-stop track to create an audio CD. Click on the Add Videos button and then you can add audio files into the free audio converter. Select and check the files you'd like to merge. Leawo Video Converter, as mentioned above, can serve as a professional audio file merger to merge multiple audio files into one file. It has no limitation on the source audio file size. It is easy to use and quite practical for audio merging.
Start WAV Joiner. Download the trial copy of WAV Joiner; it includes all the features of the registered full version for you to try, except that it only combines the first two files in the list. Once you have installed it, follow the step-by-step instructions in the WAV Joiner Quick Start Guide, which you can find in your Windows Start Menu, for an easy way to get going.
With this merger, in addition to WAV and other audio files, you can also merge video files. For example, http://www.magicaudiotools.com/ lets you join MP4, AVI files and so on in simple steps. And before clicking the "Run" button, you can normalize the audio volume and mute the sound on the Settings interface. Add your MP3 files, then click the "merge" button to merge them.
Want to merge a number of MP3s into one file? Look no further: our Free Merge MP3 is your final station. It can help you merge a large number of audio files in different formats into one larger file in a single format, such as MP3, WAV, WMA, or OGG, without trouble. With this powerful tool, you can also combine many separate music tracks into one non-stop audio CD.
Thanks for your help, but I believe you misunderstood what I am trying to do; I may not have explained it well enough. I don't want to take two 2-hour files and join them end to end to create a single 4-hour file. I want to mix them together in place, as if routing the two to the same mono output, and create one 2-hour file. I've asked quite a few people and there doesn't seem to be a way. Maybe there is a plug-in? QuickTime will do it, but only at 16-bit.
Simple MP3 Cutter Joiner Editor is truly an extraordinary program. Unlike other comparable MP3 helpers, it comes with a vast variety of functions to fit every possible requirement. Its built-in audio player allows you to preview the output audio file in advance, so you can check whether it matches your expectations. Full-featured as it is, you can cut, merge, split, mix, and edit the designated audio files effortlessly.
Another of the best audio editing apps for Android lets you edit your media however you wish. Media Converter allows you to convert all kinds of media formats to common media formats: MP3, MP4 (MPEG-4, AAC), OGG, AVI (MPEG-4, MP3), MPEG (MPEG-1, MP2), FLV (FLV, MP3), and WAV. Additionally, the audio profiles M4A (AAC audio only), 3GA (AAC audio only), and OGA (FLAC audio only) are available for convenience.
So you can combine audio files with the Command Prompt, Audacity, MP3 Merger software, and the Audio Joiner web app. You can manually select individual files or batch-join entire folders. Just drop any number of files into a folder, then specify that folder in MP3 Joiner to produce a single MP3 audio file.
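As a rough illustration of the Command Prompt approach just mentioned, MP3 files can be joined by plain byte concatenation. This sketch uses dummy placeholder files rather than real tracks; note that a raw join like this does not rewrite ID3 tags or VBR headers, which is exactly why tools that update VBR headers report the joined length correctly:

```shell
# Create two placeholder "MP3" files (stand-ins for real tracks).
printf 'AAAA' > part1.mp3
printf 'BBBB' > part2.mp3

# Raw byte concatenation -- the Unix equivalent of
# `copy /b part1.mp3 + part2.mp3 joined.mp3` in the Windows Command Prompt.
cat part1.mp3 part2.mp3 > joined.mp3

# The joined file is simply the two parts back to back (8 bytes here).
wc -c < joined.mp3
```

Most players tolerate concatenated MP3 frames, but a dedicated joiner is still preferable when the inputs have different bitrates or tags.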
3. Click the merge button to start joining the WAV files. It updates and writes the VBR headers of the destination MP3 files if possible. So, the joined file really is >5 GB on disk, but it is reported as much, much shorter. Now, I am well aware there's a limitation of WAV files: the RIFF header size is only a 32-bit number, so the playing time is effectively reduced modulo 4 GB. I believe that is what I'm hitting above.
Step 1: Add the MP3 files you wish to join to this MP3 Joiner: simply drag and drop your MP3 files onto the main interface of the program. Available for devices running Windows operating systems, Free Video Cutter Joiner works with Windows Vista, Windows XP, and Windows 7, 8, and 10. However, it does not work on other common operating systems, such as Android and iOS.
Free MP3 Cutter Joiner allows you to cut and join files with increased precision and without affecting their quality. The application allows you to cut large MP3 files into smaller pieces in order to transfer them to various devices. Furthermore, the program comes with an integrated player which lets you pre-listen to your MP3 files.
GitLab Continuous Integration (CI) - 1
If you add a .gitlab-ci.yml file to the root directory of your repository and configure your GitLab project to use a Runner, then each merge request or push triggers your CI pipeline.
The .gitlab-ci.yml file tells the GitLab Runner what to do. By default it runs a pipeline with three stages: build, test, and deploy. You don't need to use all three stages; stages with no jobs are simply ignored.
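A minimal .gitlab-ci.yml along these lines shows the idea; the job names and echo commands are illustrative placeholders, not from this guide. Each job is assigned to one of the three default stages, and any stage with no jobs is skipped:

```yaml
# Illustrative pipeline using the three default stages.
stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - echo "Compiling..."

test_job:
  stage: test
  script:
    - echo "Running tests..."

deploy_job:
  stage: deploy
  script:
    - echo "Deploying..."
```

Jobs in a later stage only run after all jobs in the previous stage succeed.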
If everything runs ok (no non-zero return values), you will get a nice green checkmark associated with the pushed commit or merge request.
Most projects use GitLab's CI service to run the test suite so that developers get immediate feedback if they broke something.
There's a growing trend to use continuous delivery and continuous deployment to automatically deploy tested code to staging and production environments.
So in brief, the steps needed to have a working CI can be summed up as:
Add .gitlab-ci.yml to the root directory of your repository
Configure a Runner
From then on, on every push to your Git repository, the Runner will automagically start the pipeline, and the pipeline will appear under the project's /pipelines page.
This guide assumes that you:
have a working GitLab instance of version 8.0 or higher, or are using GitLab.com
have a project in GitLab that you would like to use CI for
Let’s break it down to pieces and work on solving the GitLab CI puzzle.
Creating a .gitlab-ci.yml file
What is .gitlab-ci.yml
The .gitlab-ci.yml file is where you configure what CI does with your project. It lives in the root of your repository.
On any push to your repository, GitLab will look for the .gitlab-ci.yml file and start builds on Runners according to the contents of the file, for that commit.
Because .gitlab-ci.yml is in the repository and is version controlled, old versions still build successfully, forks can easily make use of CI, branches can have different pipelines and jobs, and you have a single source of truth for CI.
Note: .gitlab-ci.yml is a YAML file, so you have to pay extra attention to indentation. Always use spaces, not tabs.
If you want to check whether your .gitlab-ci.yml file is valid, there is a Lint tool under the page /ci/lint of your GitLab instance. You can also find a link to it under Settings > CI settings in your project.
For more information and a complete .gitlab-ci.yml syntax, please read http://doc.gitlab.com/ce/ci/yaml/README.html
Push .gitlab-ci.yml to GitLab
Once you’ve created .gitlab-ci.yml, you should add it to your repository and push it to GitLab.
git add .gitlab-ci.yml
git commit -m "Add .gitlab-ci.yml"
git push origin master
Now if you go to the Pipelines page you will see that the pipeline is pending.
You can also go to the Commits page and notice the little clock icon next to the commit SHA.
Clicking on the clock icon will take you to the builds page for that specific commit.
Notice that there are two jobs pending which are named after what we wrote in .gitlab-ci.yml. The red triangle indicates that there is no Runner configured yet for these builds.
The next step is to configure a Runner so that it picks the pending builds.
In GitLab, Runners run the builds that you define in .gitlab-ci.yml. A Runner can be a virtual machine, a VPS, a bare-metal machine, a Docker container, or even a cluster of containers. GitLab and the Runners communicate through an API, so the only requirement is that the Runner's machine has Internet access.
A Runner can be specific to a certain project or serve multiple projects in GitLab. If it serves all projects, it's called a Shared Runner.
Find more information about Runners at this link: http://doc.gitlab.com/ce/ci/runners/README.html
GitLab Continuous Integration Server
ssh gci.xq.cn -l james
Install Docker runner:
curl -sSL https://get.docker.com/ | sh
Error reporting -
curl: (6) Could not resolve host: get.docker.com; Unknown error
Generated by NetworkManager
No nameservers found; try putting DNS servers into your
ifcfg files in /etc/sysconfig/network-scripts like so:
Add nameserver 220.127.116.11 to resolv.conf
Execute : curl -sSL https://get.docker.com/ | sh
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.rpm.sh | sudo bash
sudo wget -O /usr/local/bin/gitlab-ci-multi-runner https://gitlab-ci-multi-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-ci-multi-runner-linux-386
Create a project -> Settings -> Runner
How to setup a new project specific runner
Install GitLab Runner software. Checkout the GitLab Runner section to install it
Specify following URL during runner setup: http://git.xq.cn/ci
Use the following registration token during setup: St8Crygwy6sc75jqqw-y
- sudo composer global require "laravel/installer"
- touch test.php
Before executing the script, first install the Laravel framework. Then run the job init_jog, which simply creates a test.php script.
After git push origin master,
go to the GitLab project management page and check "Builds".
How to check the syntax of the .yml file
GitLab CI Issues
Build always pending
Summary: GitLab CI does not start building jobs; builds are always pending.
Steps to reproduce: Install GitLab CI 7-10-stable, add a runner, add a repo, commit to the repo
Expected behavior: GitLab CI should trigger the runner and the runner should start to build
Observed behavior: GitLab CI recognizes the commit and says the build is pending, but does not trigger the runner. It says that the last contact with the runner was never.
Relevant logs and/or screenshots: Every 5 Seconds in production.log: Started POST "/api/v1/builds/register.json" for 127.0.0.1 at 2015-04-27 05:08:48 +0200
Recently I installed Windows 95 on my system. I also installed a larger hard drive and upgraded the processor from a 486DX2 66 MHz to an AMD 586 133 MHz processor. I also have an IDE CD-ROM drive connected as a "slave" to the hard drive, which is a 2.1 GB drive. Windows 95 was the upgrade from version 3.1 when I installed it. Windows 95 and the 32-bit version of Netscape worked before I upgraded.
This is the problem I am experiencing: I can run the 16-bit version of Netscape 3.2, but not the 32-bit version of Netscape. My system comes back with this message when I try to run any 32-bit version of Netscape:
NETSCAPE executed an invalid instruction in
module MSVCRT40.DLL at 0137:1023b3c1.
EAX=00940e7f CS=0137 EIP=1023b3c1 EFLGS=00010202
EBX=00945eb8 SS=013f ESP=0091efe8 EBP=0091eff4
ECX=0091f0a8 DS=013f ESI=0094a0f0 FS=196f
EDX=00000000 ES=013f EDI=00945f54 GS=0000
Bytes at CS:EIP:
df 7d f4 d9 6d fe 8b 45 f4 8b 55 f8 c9 c3 cc d9
00945eb8 0091f020 027f0e7f 0091f048 005c9de1 00945f54 0094a0f0 00945eb8
00000001 0094a4d8 0094a4ac 0094a554 00945f54 0094a0f0 00945eb8 0091f044
I consistently get the same error message at the same memory addresses.
I have tried to re-install different versions of Netscape, and have re-installed Windows 95 at least 5 times. What am I doing wrong? I have already replaced msvcrt40.dll in the windows/system directory with the latest version, and still get this message. Is there a conflict in memory or with the processor? Am I using the correct version of msvcrt40.dll? Is my system's BIOS set correctly? How can I more closely diagnose this problem? Windows 95 says there are no conflicts with hardware in the Win 95 Device Manager. What exactly does msvcrt40.dll do?
Since the hard drive and processor were upgraded at the same time, and that's when the problems started, I am not sure which one of these could be causing this. I had to change to Logical Block Addressing in the system's BIOS to get the Windows 95 upgrade to install. Running Windows 3.1 with LBA on, the swap file would become corrupt at every boot, so I set it up without one before installing Windows 95.
Any help would be greatly appreciated.
import {
assert,
assertEquals,
assertThrows
} from "https://deno.land/std/testing/asserts.ts";
import { test } from "https://deno.land/std/testing/mod.ts";
import { v4 } from "https://deno.land/std/uuid/mod.ts";
import Mutex from "./mod.ts";
test({
name: `can be acquired asynchronously and released`,
fn: async (): Promise<void> => {
const mutex = new Mutex();
const id = await mutex.acquire();
mutex.release(id);
}
});
test({
name: `acquired mutex returns a valid UUID identifier`,
fn: async (): Promise<void> => {
const mutex = new Mutex();
const id = await mutex.acquire();
assert(v4.validate(id));
mutex.release(id);
}
});
test({
name: `throws if released while unacquired`,
fn: async (): Promise<void> => {
const mutex = new Mutex();
assertThrows(() => mutex.release(``));
}
});
test({
name: `throws if mutex id doesn't match current mutex holder`,
fn: async (): Promise<void> => {
const mutex = new Mutex();
const id = await mutex.acquire();
assertThrows(() => mutex.release(`abc123`));
mutex.release(id);
}
});
test(`blocks async code that has not acquired the mutex`, async () => {
let mutex = new Mutex();
let semaphore = 1;
const testSemaphore = async () => {
const mutexId = await mutex.acquire();
assertEquals(semaphore, 1);
semaphore--;
await Promise.resolve();
assertEquals(semaphore, 0);
semaphore++;
mutex.release(mutexId);
};
await Promise.all([testSemaphore(), testSemaphore()]);
});
test(`blocks data object access while fetching data to modify the data object`, async () => {
const mutex = new Mutex();
let data: any = {
url: `https://gist.githubusercontent.com/Matthew-Smith/c7f35894ccbdd7dca587a276606e2639/raw/00c4c9d8601bd261be06f489a1548bbf4fc8316e/deno_async_fetch_test.json`
};
const getDataStore = async () => {
const mutexId = await mutex.acquire();
return { mutexId, dataStore: data };
};
const setDataStore = ({
mutexId,
dataStore
}: {
mutexId: string;
dataStore: any;
}) => {
data = dataStore;
mutex.release(mutexId);
};
let order = 0;
const first = async () => {
assertEquals(order, 0);
order++;
const data = await getDataStore();
assert(!data.dataStore.fetchedData); // assert that the fetched data doesn't exist yet
const result = await fetch(data.dataStore.url);
// assert that this next bit happens after the `second` function requests the data store
assertEquals(order, 2);
order++;
setDataStore({
mutexId: data.mutexId,
dataStore: {
...data.dataStore,
fetchedData: await result.json()
}
});
};
const second = async () => {
assertEquals(order, 1); // `first` has started but not yet finished
order++;
const data = await getDataStore();
// assert that this next bit happens after the `first` function releases the data store
assertEquals(order, 3);
order++;
assert(!!data.dataStore.fetchedData); // assert that the data was fetched
setDataStore(data);
};
// await both so the test does not finish before the async work completes
await Promise.all([first(), second()]);
});
First, I flashed my NANDmin.bin into my emuNAND, so that I have a 9.2 firmware in emuNAND while an 11.x is in sysNAND. I did not delete the emuNAND after installing a9lh. I am about to region change my emuNAND (from US 9.2 to JP 9.2) by following this guide: https://github.com/Plailect/Guide/wiki/Region-Changing. Of course, with common sense, I should do everything in emuNAND rather than messing with the present sysNAND. What I am not sure about is whether I could get access to the eShop in light of the following circumstances:
1. I remember that my US N3DS console (be it sysNAND or emuNAND) has connected to the internet before, but I cannot recall whether I created an NNID for my previous sysNAND (i.e. the present emuNAND) (probably not). What I am sure of is that I created an NNID in my previous emuNAND (i.e. the present sysNAND). Here, the Guide provides that: "After this process, only Old 3DSs and New 3DSs which have never accessed the eShop before will be able to access the eShop after creating a new NNID on their new region. Region changed New 3DSs that have already accessed the eShop on their original region cannot create a new NNID and access the eShop on their new region!" So, does that mean I could never gain access to the eShop, even with a valid JP Secureinfo_A from another JP console, simply because my console has connected to the internet before?
2. I had a JP console (a9lh installed, of course) which I rarely play, but I sometimes mess around with it on the eShop (so I am not prepared to abandon my JP console completely). I created an NNID on this JP console. What I am prepared to do is extract the Secureinfo_A from the JP console and inject it into the emuNAND of my US console. On the other hand, I may still regularly use my JP console and do things online on the eShop. My questions are: (a) can I create another NNID with the Secureinfo_A obtained from the JP console in my US emuNAND, and so get eShop access on my region-changed US emuNAND?
(b) Would I get banned for sharing the Secureinfo_A with another console? Certainly I would not log on to the eShop on both the US and JP consoles at the same time. Can this avoid a ban for sharing the Secureinfo_A? Any thoughts or help would be much appreciated, as I am going to region change my emuNAND first while awaiting your replies. Thank you.
In May, we will do an evening with mixed topics and multiple different lightning talks, 10 to 20 minutes each.
• The State of CSS (by Christoph Reinartz)
• Learning how to code: Where to start and what NOT to do (by Denise Schmidt)
• DevOps story; two years of adopting the mindset (by Busra Koken)
• Why you should make side projects (by Michael Lee)
• Into the Abyss - The mess that is Wordpress and how to make the best of it (by Phillip Richdale)
The State of CSS (by Christoph Reinartz)
CSS-in-JS, CSS modules, Styled components, CSS-outside-of-JS, BEM, Atomic CSS, Utility CSS, CSS Grid, CSS Custom Properties. Bingo!
CSS has never been as exciting as these days. The talk will provide an in-depth overview on CSS in 2018.
Learning how to code: Where to start and what NOT to do (by Denise Schmidt)
Learning how to code has become accessible to everyone these days, online tutorials promising you to become an expert in a short amount of time. I will talk to you about my learning experience as a female in the tech world, which first steps to take and how to stay motivated.
Denise Schmidt, 25 years old, a self-taught web developer.
She started learning how to code about 5 months ago. Her background is in Psychology and her interest in coding stems from a desire to become a User Experience Expert.
DevOps story; two years of adopting the mindset (by Busra Koken)
As a beginner, finding your place and grasping the idea of DevOps depends very much on your personal motivation and the environment you work in. School doesn't get you ready for it.
In this talk, Büsra is going to share her journey. You will see a story that adopts DevOps as a mindset rather than as people, tools, and so on. The talk will be a story line supported by real-world examples from the experiences of a junior engineer's road map to DevOps.
Busra is an enthusiastic Software Engineer at trivago in the team called Software Operations where she is in the heart of DevOps. Before that, she worked at Ericsson as a Cloud System Developer. She loves automating things, solving problems and learning every day.
Why you should make side projects (by Michael Lee)
Making side projects can be fun, and there are positive benefits that can result from making them.
Michael is a husband, dad, eater of pizza, developer and designer. Michael works for CloudBees where he’s an interaction designer. In his very little free time, he works on side projects to delight folks.
Into the Abyss - The mess that is Wordpress and how to make the best of it (by Phillip Richdale)
Phillip is a consultant and a software architect. He is a classic computer kid of the '80s and grew up in parallel with the micro-computer revolution. For 33 years he has been programming and planning software and IT projects, since the turn of the millennium as his main occupation.
• 18:30 - 19:00: Arrival, get a drink, pizza and socialize
• 19:10 - ~21:25: Talks
• 21:25 - Open End: Socialising
After first testing it and it seemingly being okay, this is now failing to show the Thunderbird window until it is clicked in the taskbar.
thund = "D:\Program Files (x86)\Mozilla Thunderbird\thunderbird.exe " & _
"-compose " & """" & _
"to='" & email & "'," & _...
Hi, I have 3 listboxes on my form and Form Load code is
Private Sub Form_Load()
Dim i As Integer
For i = 1 To 100
List4.RowSource = "SELECT DISTINCT tblMain.Year FROM tblMain WHERE (((tblMain.Year)>='" & Me.OpenArgs...
I want to open a Form and pass a property to it.
However there's code in the Form_Load event that needs that property.
So I have renamed Form_Load event "Activate" and use this
Form_MyForm.MyProperty = "Test"
It works as hoped...
Just wondering how best this might be done.
Controls on my Form are bound to a query. When (if) they're updated I want to show any changes before they're applied.
As I understand it, the data is only updated when the Form is closed, or moved to a new Record, or Me.Dirty= False is applied.
I'm a bit confused... My Form is bound to a table e.g. RecordSource = "Select * From tblExport"
If the value of a control is changed, the change doesn't immediately appear in tblExport until the Form is closed.
But what if you want it to ? Is this where .Dirty is set ? if yes does it belong...
I'm passing a string variable byRef into a Public Sub.
Its values changes in the subroutine and can read at Exit Sub
However when it returns to the code that called it, the variable is an empty string.
Just in case I had this wrong I tried byVal but it did the same thing.
This isn't right is...
Is there some trick to this? I'm setting it in Properties but it's ignored.
Help tells me To use the BackColor property, the BackStyle property, if available, must be set to Normal. but there's no Normal for Backstyle.
Am i missing something? I'm using Access 2019
Hi, I've been Googling for a solution but the answers aren't very clear. Some say you don't need code, others that there's a free trial??
I'd like to launch this from a command button and send a table.
Looks like you use DoCmd.SendObject, but the example shows
DoCmd.SendObject acSendTable, "Employees"...
I wish to create a file from a record in my table. This is copied to a folder with ftp access where a cohort can get it and update his table.
I guess there's many ways to do this, but with my knowledge I've created the following. (With the help of...
I thought I had Dates in Access figured out but this has floored me!
I understood dates in SQL have to be #MM/DD/YYYY# (and single M or D is also okay).
This function swaps Day and Month to suit SQL and has worked fine until a Date of 17 Oct.
Private Function USDate(d) As Date
Occurs when I add a Val criteria
This is OK
SELECT Val([1stWeek]) AS Expr1
WHERE (((tblMain.[Date Entered])=#11/10/1990#));
but this reports an error
SELECT Val([1stWeek]) AS Expr1
WHERE (((Val([1stWeek]))=46) AND ((tblMain.[Date Entered])=#11/10/1990#));
I read in the MS documentation The Caption property is a string expression that can contain up to 2,048 characters. Captions for forms and reports that are too long to display in the title bar are truncated.
Why do they talk nonsense? My Form caption is not too long, but is truncated.
I've found if an Option group is setup by the wizard, I cannot set Top, Left etc at runtime.
What seems to happen is the Frame moves but not the controls inside it.
Is there any way to position the Frame WITH option buttons & labels on a Form or must everything be set individually ?
There's 14 option buttons on my Form, spread over 5 Option Group controls.
Only one group will be Visible at any one time.
Can a procedure detect which of the 14 has been clicked, or will I need a click event for each button ?
I don't think it's possible ? Source is a value list. I tried adding spaces but they're ignored. Then found
Convert the listbox to combobox
Make the combobox that you converted right alignment
Convert it again to listbox
But that didn't work either.
What instruction would force the Record 1 of xx counter to populate as the Form loads?
This used to happen automatically but now only when any record (except record1) is clicked. Then record1 will as well.
These are the last two indicators in my system that need some help to fix. The first is afir_ma_1.01.mq4; it looks like an HMA but it's a different model, and it disappears from the bars. The second indicator is linema.mq4; it's a repainting indicator, and it goes back to the original when the timeframe is changed or the program is restarted. Please help me fix them.
Thank you mladen, you always help me. Thank you so much.
Drawing buffers of that indicator are shifted (it is a sort of centered indicator; even the original works exactly like that, see here: FIR MA - MQL4 Code Base), and that is why you have that empty space at the end of the indicator. That is how the author (qpwr) of that indicator made it in the first place.
See some more info about it here : https://www.mql5.com/en/forum/172923/page12
As for LineMA: it would need to be rewritten completely (a lot of things are done completely wrong in that indicator). This is just a "dirty fix" to make it avoid what you describe in your post.
Can you add a parameter to change their color?
Here you go
Mighty Thank You for the pointer, mladen
multi time frames are good to use H4 and D1
Hi mladen/mr. tools
I have been having to modify the standard HA indicator parameters to accommodate a RandyCandles view for years now. The standard HA indicator constantly 'reverts' to the original color settings every time the TF is changed.
This is about the color choice for 01 and 02.
Instead of the coding for 01 - Red and 02 - White,
it would be 01 - None and 02 - None.
That would eliminate the HA wicks from appearing, allowing just the HA bodies to appear.
What is needed is an embedded change from Red/White to 'None',
i.e. an option to choose None.
Can this be coded to avoid having to do it manually each time?
In order to go from this:
I then need to change this:
to get this (RandyCandles)
Well... thanks for any mods you can make to the standard HA indicator...
and sorry... I can't seem to upload the HA.mq4 file at the moment; the site
seems to be a little 'off', making it difficult to even post the above.
Thanks for any help with this.
Hello Golden Equity,
You might try this one out; it just shows the standard HA candle body, and it's also MTF with alerts on candle color change.
That's puuuurfect and thank you!...
Novel: My Vampire System
Chapter 1229: The Dalki or the Military
"You keep mentioning this Arthur. Is he someone we should know?" Nathan couldn't help but ask.
Nathan struggled to comprehend.
'I didn't want to do this, but I have to gamble that she won't hurt her own people.' Using them as human meat shields left a bitter taste in Nathan's mouth, but it was clear that Ruby wouldn't just listen to reason. Nor could they run away or fight this. 'I just hope she hasn't already gone through the motions of activating the skill yet.'
It didn't take long for the well-trained group to overpower the group of regular civilians from the Shelter. Within seconds they were disarmed and pinned to the ground, having put up next to no resistance at all. The soldiers then turned, while Nathan also moved himself, so that they were facing Ruby and the charging Demon tier weapon.
When looking at the river it made for a strange sight. Despite its weight, the sword was floating on the surface. From the tip outward, the river was frozen, while the other half of the river continued to flow.
Being finished off by one of the humans' great human treasures was not something he had expected.
Seeing what Nathan had done, Ruby of course didn't want to hurt those from her fellow Shelter. Unfortunately, great power was already flowing through the weapon, and the woman felt like she was unable to move from her position. She tried to lift the weapon away, but it was stuck in place, as if her hands were frozen as well.
At that moment, the active skill triggered. For a moment it looked as if the whole river lit up, but a few seconds later the large body of water froze over. Some of the military team members looked along the river to see how far it had frozen over, but it went further than their eyesight could see.
A vortex of ice was swirling around the Demon-tier sword. The movement started slow at first, but soon began to pick up speed as more ice formed around the blade. Nathan knew what was going to happen soon if they did nothing. His heart was beating, fearful of their demise should the sword's active skill be fired off at them.
Eventually he reached the sword and grabbed it by its hilt. He could feel a frightening energy dwelling inside, making him wonder how Ruby had even managed to use it. Eventually he returned to the ground; his clothes should have been drenched in water, but the bubble seemed to also protect him from that. He now had the biggest smile on his face.
"It was hard to believe, but they kept their distance and they did protect us from beasts and so on. Eventually your military came and... well, you probably know the rest yourselves."
'That damn active skill creates a large tunnel of ice, and the smallest touch could turn us into ice sculptures!' Nathan started to panic, as his ability could do nothing against that. He quickly made a sign with his hands, a sign for his subordinates to get inside the bubble he had created.
"The one who saved our lives! The one who came here after the military had left this Shelter behind when it was a red portal world. He was the one who showed us how to fight back and helped us grow our community so we could actually live happy lives without caring about those outside. Then, he just came back one day, only with the Dalki as company."
Hearing the name 'Arthur', Nathan tried to recall whether he had heard of someone of importance with that name, but no one came to mind.
Instead of apprehending Ruby, who was now on the ground trembling, Nathan moved to see if there was anything they could do about the Demon-tier weapon.
Ruby clenched her fist and looked right past Nathan towards the one who had just spoken.
"I understand, I have a rough understanding of your situation," Nathan said, handing over a nice warm drink in a cup and placing it in her hands. The rest of the military group were doing the same, hoping not to treat them as enemies, getting them to feel more comfortable and to talk more openly.
"Exactly, they're the same military who abandoned us at the first sign of trouble! Remember who protected us from those beasts? It was Arthur! The military were the ones who had chosen to attack us!"
|
OPCFW_CODE
|
WIP: Extract measurement date and age for NIRX files
What does this implement/fix?
Extract measurement date and birthday for NIRX files.
Additional information
The vendor software only allows saving of age as an integer. So I have taken the measurement date, subtracted the age integer from the years, and stored that as the birthday. Open to alternative suggestions.
@larsoner this is failing due to processing of French date strings. Are there any inbuilt multilingual date string parsers in MNE already, or do I need to write one?
Hmmm, my changes also seem to break other things; I'll fix that. But I'm still curious about the best way to process French dates.
what do you mean by French dates? like month names with accented letters?
Like Travis says:
ValueError: time data '"jeu. 13 févr. 2020""09:08:47.511"' does not match format '"%a, %b %d, %Y""%H:%M:%S.%f"'
is it fixable by setting the LOCALE properly?
The docs suggest that locale is used, so indeed if the locale can be extracted from the file header, then we can make a @contextlib.contextmanager that calls orig = locale.setlocale(locale.LC_something, whatever) and then locale.setlocale(locale.LC_something, orig) afterward. Even if it isn't embedded in the header (which would be sad), we could start making a list of locales to iterate over and try, and always add others later if need be. (Or maybe it won't be so bad to iterate over all of them, if we can get a list?)
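A minimal sketch of such a context manager (the name `use_locale` is mine, not an MNE API), using only the standard library:

```python
import contextlib
import locale

@contextlib.contextmanager
def use_locale(name, category=locale.LC_TIME):
    """Temporarily switch a locale category, always restoring the original."""
    orig = locale.setlocale(category)  # query the current setting
    try:
        yield locale.setlocale(category, name)
    finally:
        locale.setlocale(category, orig)
```

Anything parsed inside the `with use_locale("fr_FR"):` block sees the requested locale, and the original setting is restored even if parsing raises.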
Thanks for the helpful suggestions. The locale is not embedded in the header. I had started writing a list of locales to iterate over, that's when I stopped and decided to ask here before progressing too far. So it sounds like there is no solution in MNE for this already, so I will continue to code along this path.
Indeed AFAIK we have never had to deal with locale when parsing datetime. We can try adding some other LC_ env var to some Travis or Azure run to see if we can get some existing datetime stuff to break. Let's do that after you fix the reading of this particular file, though
-- it will probably tell us which env var(s) are relevant.
here is an example that works (note the strings must contain the literal quote characters that appear in the header):
import locale
from datetime import datetime
locale.setlocale(locale.LC_TIME, "fr_FR")
s = '"jeu. 13 fév. 2020""09:08:47.511"'
f = '"%a. %d %b. %Y""%H:%M:%S.%f"'
datetime.strptime(s, f)
however I had to use fév. instead of févr.; fév seems to be the standard.
Thanks for the code snippet. I came up with something very similar and ran into the same problem. It's the févr that is the problem; datetime.strptime seems to expect fév. I considered some sort of replace approach for different languages, but this seems unwieldy (how many languages do I then support?). I also considered other packages that are made for dealing with dates, but don't want to add more requirements.
Next I'm going to check where this févr actually comes from. This may be a problem of our own creating; I have never seen a file from France and don't know if the device will actually save fév or févr. This seems to have been generated somewhere in _test_raw_reader.
I would check with the company that produces this file how they came up with févr, while the standard across languages appears to be fév.
I would use try/except here to avoid failing to load such files, but I agree the MNE code cannot fix all IO corner cases.
Can you check with them how it happens? And maybe how they deal with this internally?
@rob-luke will you have time to revisit this in the next couple of weeks? I'll optimistically mark for 0.21
Yes, I would like to cross this off. The milestone has Sept 15 marked; that should be achievable.
@larsoner I am not sure what to do here with the non-standard French encoding of dates.
It seems to me that fév is what datetime expects, yet we have 'févr' in the tests because that was provided in #7313 (I assume; I don't actually have the file).
So if NIRX is providing non-standard French date formats, and if I knew all the non-standard naming, I could create a translation function. But I don't know all the non-standard naming and don't have access to a machine at the moment to determine this. Plus this would become quite unwieldy: what other non-standard language formats do we then support?
Would it be acceptable to try to parse the dates in the current locale and throw a warning if it doesn't work? For dates that we can't extract it would set them to 0 or something else. This will get English working, along with downstream usage of the date, and when someone with access to a French system gets some data they can just tweak the French parsing. Thoughts?
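A rough sketch of that parse-with-fallback idea (the function name, the candidate locale list, and the English header format here are assumptions for illustration, not MNE code):

```python
import locale
import warnings
from datetime import datetime

# Locales to try; "" means the user's default. These names are guesses and
# the list would grow as other vendors' headers turn up.
CANDIDATE_LOCALES = ("", "C", "fr_FR", "fr_FR.UTF-8")

def parse_header_datetime(text, fmt='"%a, %b %d, %Y""%H:%M:%S.%f"'):
    """Try several locales; warn and return None if none of them matches."""
    orig = locale.setlocale(locale.LC_TIME)  # remember the current setting
    try:
        for loc in CANDIDATE_LOCALES:
            try:
                locale.setlocale(locale.LC_TIME, loc)
            except locale.Error:
                continue  # that locale is not installed on this system
            try:
                return datetime.strptime(text, fmt)
            except ValueError:
                continue  # month/day names did not match this locale
    finally:
        locale.setlocale(locale.LC_TIME, orig)
    warnings.warn("Could not parse measurement date %r" % (text,))
    return None
```

Returning None (rather than raising) lets the file still load, matching the "warn and fall back" proposal above.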
yes, I fear we need something custom. But I would reach out to engineers at the company that produces these files to report the issue.
I would reach out to engineers at the company that produces these files to report the issue.
Good idea. But I would like to verify the issue myself first if I am to contact them. Ideally someone could send me some files recorded in France. Failing that, I could set the locale of our lab PC to France and record my own file, but I'm currently locked out of the lab (at least for the next 3 weeks). So this won't be done by the 0.21 release date. I suggest I create an English date PR for 0.21 and raise an issue for the non-English dates to be solved when someone sends me some files or I can get into the lab. Acceptable? Or do you prefer I wait and do everything at once?
acceptable
@agramfort @larsoner can you please review.
Thanks for the review. @agramfort and @larsoner could you please review again.
CI fail seems to be a time out error
thx @rob-luke
|
GITHUB_ARCHIVE
|
How do I add a footer in SSRS?
To add a page footer in an SSRS report: in the Report Designer, click the Design tab. On the Report menu, select Add Page Footer. A new design area is added to the Report Designer. Drag a Textbox report item from the Toolbox window to the page footer area.
What is the difference between Report footer and page footer?
Page headers and footers are not the same as report headers and footers. Reports do not have a special report header or report footer area. A report footer consists of report items that are placed at the bottom of the report body. They appear only once as the last content in the report.
What is page footer in database application?
In desktop publishing applications, the footer identifies the space at the bottom of a page displayed on a computer or other device. The footer is sometimes duplicated over all of the pages in the document, with the page number increasing accordingly.
How do I add a footer in SQL?
Right-click the page footer, and then click Footer Properties to add borders, background images, or colors, or to adjust the width of the footer; then click OK. Point to Insert, and then click one of the following items to add it to the header or footer area:
What is a report footer?
The Report Footer is the bottom section of a report. It may contain the page number, execution date and time, a confidentiality notice, and so on.
How do I display the number of records in SSRS report?
First, add a Row Group which is a Parent of the existing top level group. In the Group By expression, enter =CEILING(RowNumber(Nothing)/50) where 50 is the number of records to be displayed per page. Be sure to leave the group header & footer boxes unchecked.
How can we limit number of records on each page in SSRS?
Once you click on Parent Group, it will open a Tablix Group. In the Group By expression, enter =CEILING(RowNumber(Nothing)/25) where 25 is the number of records to be displayed per page. If you want to display 50 records then choose 50.
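The grouping expression pages rows because CEILING(RowNumber(Nothing)/25) maps rows 1-25 to group 1, rows 26-50 to group 2, and so on; each group then renders as its own page. A quick illustration of the same arithmetic in Python (not SSRS):

```python
import math

PAGE_SIZE = 25  # records per page, as in the expression above

def page_of(row_number, page_size=PAGE_SIZE):
    # RowNumber(Nothing) in SSRS is 1-based, so rows 1..25 land on page 1.
    return math.ceil(row_number / page_size)

print(page_of(1), page_of(25), page_of(26))  # -> 1 1 2
```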
What is the footer on word?
A header is the top margin of each page, and a footer is the bottom margin of each page. Headers and footers are useful for including material that you want to appear on every page of a document such as your name, the title of the document, or page numbers.
What is footer example?
Some examples include Calendar, Archives, Categories, Recent Posts, Recent Comments… and the list continues. Below is an example of a footer with widgets included.
Where is footer in Word?
Go to Insert > Header or Footer. Choose from a list of standard headers or footers, go to the list of Header or Footer options, and select the header or footer that you want. Or, create your own header or footer by selecting Edit Header or Edit Footer. When you’re done, select Close Header and Footer or press Esc.
Where is the report footer located in the report?
Decide which data to put in each report section
|Report header section||Appears only once, at the top of the first page of the report.|
|Report footer section||Appears after the last line of data, above the Page Footer section on the last page of the report.|
How to add page footer in SSRS report?
To add the SSRS Report footer or SSRS page Footer, right-click on the empty space around the table report in the report designer and select the Insert -> Page Footer option. Now you can see the Page Footer in SSRS Report.
How do I add a page number field to a report?
In the Report Data pane, expand the Built-in Fields folder. If you don’t see the Report Data pane, on the View tab, check Report Data. Drag the Page Number field from the Report Data pane to the report header or footer. The page footer is added to the report automatically.
How are page headers and footers rendered in Word?
Page headers and footers are rendered as header and footer regions in Word. If a report page number or an expression that indicates the total number of report pages appears in the page header or footer, they are translated to a Word field so that the accurate page number is displayed in the rendered report.
How do I show the total number of pages in footer?
For a page number, you may want to add the word “Page” before the number. You may also want to show the total number of pages. Adding the total number of pages to the footer may slow performance when you run or preview your report.
|
OPCFW_CODE
|
You are viewing documentation for Kubernetes version: v1.28
Kubernetes v1.28 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date information, see the latest version.
High Performance Networking with EC2 Virtual Private Clouds
One of the most popular platforms for running Kubernetes is Amazon Web Services’ Elastic Compute Cloud (AWS EC2). With more than a decade of experience delivering IaaS, and expanding over time to include a rich set of services with easy to consume APIs, EC2 has captured developer mindshare and loyalty worldwide.
When it comes to networking, however, EC2 has some limits that hinder performance and make deploying Kubernetes clusters to production unnecessarily complex. The preview release of Romana v2.0, a network and security automation solution for Cloud Native applications, includes features that address some well known network issues when running Kubernetes in EC2.
Traditional VPC Networking Performance Roadblocks
A Kubernetes pod network is separate from an Amazon Virtual Private Cloud (VPC) instance network; consequently, off-instance pod traffic needs a route to the destination pods. Fortunately, VPCs support setting these routes. When building a cluster network with the kubenet plugin, whenever new nodes are added, the AWS cloud provider will automatically add a VPC route to the pods running on that node.
Using kubenet to set routes provides native VPC network performance and visibility. However, since kubenet does not support more advanced network functions like network policy for pod traffic isolation, many users choose to run a Container Network Interface (CNI) provider on the back end.
Before Romana v2.0, all CNI network providers required an overlay when used across Availability Zones (AZs), leaving CNI users who want to deploy HA clusters unable to get the performance of native VPC networking.
Even users who don’t need advanced networking encounter restrictions, since VPC route tables support a maximum of 50 entries, which limits the size of a cluster to 50 nodes (or fewer, if some VPC routes are needed for other purposes). Until Romana v2.0, these users also needed to run an overlay network to get around this limit.
Whether you were interested in advanced networking for traffic isolation or running large production HA clusters (or both), you were unable to get the performance and visibility of native VPC networking.
Kubernetes on Multi-Segment Networks
The way to avoid running out of VPC routes is to use them sparingly by making them forward pod traffic for multiple instances. From a networking perspective, what that means is that the VPC route needs to forward to a router, which can then forward traffic on to the final destination instance.
Romana is a CNI network provider that configures routes on the host to forward pod network traffic without an overlay. Since inter-node routes are installed on hosts, no VPC routes are necessary at all. However, when the VPC is split into subnets for an HA deployment across zones, VPC routes are necessary.
Fortunately, having inter-node routes on the hosts allows them to act as network routers, forwarding traffic inbound from another zone just as they would traffic from local pods. This makes any Kubernetes node configured by Romana able to accept inbound pod traffic from other zones and forward it to the proper destination node on the subnet.
Because of this local routing function, top-level routes to pods on other instances on the subnet can be aggregated, collapsing the total number of routes necessary to as few as one per subnet. To avoid using a single instance to forward all traffic, more routes can be used to spread traffic across multiple instances, up to the maximum number of available routes (i.e. equivalent to kubenet).
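A rough illustration of this aggregation using Python's ipaddress module (the CIDRs are hypothetical; this is not Romana's actual code):

```python
import ipaddress

# Hypothetical per-node pod CIDRs in one zone's subnet; each would otherwise
# need its own VPC route entry (the kubenet approach).
node_pod_cidrs = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# With a forwarding node per subnet, the four routes collapse into one
# aggregated VPC route covering the whole block.
aggregated = list(ipaddress.collapse_addresses(node_pod_cidrs))
print(aggregated)  # -> [IPv4Network('10.1.0.0/22')]
```

Four VPC route entries become one, which is how a cluster stays under the 50-route limit regardless of node count.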
The net result is that you can now build clusters of any size across AZs without an overlay. Romana clusters also support network policies for better security through network isolation.
Making it All Work
While the combination of aggregated routes and node forwarding on a subnet eliminates overlays and avoids the VPC 50 route limitation, it imposes certain requirements on the CNI provider. For example, hosts should be configured with inter-node routes only to other nodes in the same zone on the local subnet. Traffic to all other hosts must use the default route off host, then use the (aggregated) VPC route to forward traffic out of the zone. Also: when adding a new host, in order to maintain aggregated VPC routes, the CNI plugin needs to use IP addresses for pods that are reachable on the new host.
The latest release of Romana also addresses questions about how VPC routes are installed; what happens when a node that is forwarding traffic fails; how forwarding node failures are detected; and how routes get updated and the cluster recovers.
Romana v2.0 includes a new AWS route configuration function to set VPC routes. This is part of a new set of network advertising features that automate route configuration in L3 networks. Romana v2.0 includes topology-aware IP address management (IPAM) that enables VPC route aggregation to stay within the 50 route limit as described here, as well as new health checks to update VPC routes when a routing instance fails. For smaller clusters, Romana configures VPC routes as kubenet does, with a route to each instance, taking advantage of every available VPC route.
Native VPC Networking Everywhere
When using Romana v2.0, native VPC networking is now available for clusters of any size, with or without network policies and for HA production deployment split across multiple zones.
-- Juergen Brendel and Chris Marino, co-founders of Pani Networks, sponsor of the Romana project
|
OPCFW_CODE
|
The most important thing you need to develop an OpenCL application is to be able to compile and run your code. If that is what you need to know, then you’re in the right place! Unlike the CUDA development platform, OpenCL is an open standard and is supported on various devices. Anything from multi-core CPUs to integrated GPUs, to dedicated GPUs, and even some more exotic devices like DSPs and FPGAs. Because of this diversity, the development environment is a bit fragmented. There are OpenCL SDKs available from various vendors including Intel, AMD, and NVIDIA. What to do!
Grab the Intel SDK
It doesn’t matter what platform you’re developing on. Go ahead and grab the Intel SDK. It can be downloaded here: http://software.intel.com/en-us/vcsource/tools/opencl-sdk-2013. In case that link ever dies, you can always search for ‘download Intel OpenCL SDK’ to find the latest release. When installing, the SDK will integrate itself into visual studio, assuming you’re using Windows as your development platform. One thing to note is that the first few times I tried installing it, the installation was unsuccessful. I was only able to successfully install it when choosing to integrate it with Visual Studio 2010 pro only, and not Visual Studio 2012 express. Not sure if that’s a common problem or not, but just wanted to mention it in case you have any trouble getting the installation to succeed.
Why the Intel SDK
There are several reasons I chose the Intel SDK. The SDK is mature, and supports OpenCL 1.2 at the time of this writing. Not only that, but if you and your customers are interested in high performance computing, it is frankly unlikely that they're using an AMD CPU. Also, code created with the Intel SDK tends to perform fine on all platforms, whereas AMD's SDK tends to only perform well on AMD products.
Creating your first program
Now that the SDK is installed, you'll want to crack open Visual Studio. Go to File->New Project. You should see an entry, Visual C++ -> OpenCL. Select this option and create your project. You can uncheck 'create empty project' so it'll at least add a file for you. Now, before you compile, let's get some super-simple OpenCL code. Again, I'm going to defer to Intel's example code: http://software.intel.com/en-us/articles/intel-sdk-for-opencl-applications-xe-samples-getting-started. Once you download this, open the CapsBasic folder and open the CapsBasic.cpp file. Replace the contents of the source file in your project with the contents of CapsBasic.cpp. Then simply compile and run your code. It's just that easy!
Taking a look at what’s happening
CapsBasic is really just enumerating the OpenCL platforms on your device, selecting one, and then enumerating all the OpenCL devices on the selected platform. If that doesn’t make any sense, don’t worry, we’ll get into that more in later articles. For now, you just need to know that to get details of a specific device in your computer, change the line in the source code that sets the required_platform_subname variable. For example, you can change that to “NVIDIA” and it’ll select an NVIDIA device in your computer and print out the OpenCL capabilities of that device. That’s it for now. You should now be able to easily compile and run OpenCL code on your computer!
|
OPCFW_CODE
|
Suggesting counterplans for Dev Protocol to be compatible with Layer2.
Although we’re sure that Dev Protocol will be compatible with Layer2, we’d like to hear your opinions on the following points.
[Which Layer2 should Dev Protocol be compatible with?]
- We’re considering zkSync or Optimism as a candidate.
- We’re not considering the combination use of plural L2 solutions.
[How can Dev Protocol be compatible with Layer2?]
- We'll use L2 as the main net instead of adopting the combined use of L1 and L2.
- After the transition to L2, maintenance of the L1 protocol will be suspended.
Currently, Ethereum has a scalability issue: its transaction-processing performance is low.
Since Ethereum can process only a small number of transactions per second, people have to pay higher fees due to the network congestion caused by the growing number of applications operating on Ethereum. Layer2 is one of the solutions for this.
Briefly speaking, Layer2 is a technology to improve transaction processing on Ethereum and to reduce fees.
If Dev Protocol becomes compatible with Layer2, the issue of price jumps in various transactions would be solved.
The following table shows examples of Layer2:
|Name||Rollups||Smart contract||Contract Language||Supporting Tool|
|Optimism||Optimistic Rollups||✓||Solidity||Truffle, Waffle, HardHat|
Rollup is one of the scaling technology used on Ethereum. After executing transactions outside L1 (Ethereum), fast and low cost transactions can be realized by submitting organized transaction data or evidence to L1 as well as verifying at L1, while L1 security is maintained at the same time.
Rollups have ZK-Rollups that verify the validity of transactions by utilizing zero knowledge proof as well as Optimistic Rollups, which functions based on the premise that valid transactions are executed (unauthorized transactions are not executed), without necessarily bringing all the transaction data to L1.
ZK-Rollups have an advantage in taking less time for the completion of transactions compared to Optimistic Rollups, although the difficulty in the implementation of smart contract is one of ZK-Rollups’ shortcomings.
One of the merits of zkSync is its fast withdrawal of tokens from L2. Optimism has an advantage in letting us use Solidity, our existing assets, and the knowledge gained from Truffle and Waffle. Keeping the transition cost low is another strong point of Optimism.
Since character strings cannot be used freely in the current version of Zinc, the contract language for zkSync, there would be some technical barriers when creating Property Tokens. Therefore I think Optimism is the better choice.
By default, L2 cannot reference L1's storage data or call L1 smart contract methods. Strictly speaking, this can be done with Optimism; however, in that case there would be major limitations, e.g. (1) the gas fee costs a lot, (2) the advantages of using L2 are minimized, (3) it takes 7 days for the results to be reflected.
All things considered, we’re thinking about newly creating Dev Protocol at L2, and supporting users’ migration of tokens such as DEV tokens as well as Property tokens. Since the storage information of L1 is reset when moving to L2, staking should be executed again at L2 after migrating DEV tokens and Property tokens from L1. In other words, we would create another Dev Protocol with cheap gas fee.
If you have any comments on it, please let us know.
N / A
|
OPCFW_CODE
|
Definition of "edited" for triggering community-wiki state
Today I encountered an interesting situation. I tend to write with a lot of typos and sometimes use awkward phrasing, which I later edit.
After rewriting an older answer (to bring it up to date with my current understanding of the subject), I made a lot of mistakes. This resulted in at least 5 subsequent edits, and triggered "community wiki" mode.
So here is my question-like suggestion:
Shouldn't there be a separation between "large edits" and "fixing typos", based on the percentage of content that was changed?
This way, fixing "a" to "as" would not count as a full edit, and would avoid triggering wiki mode prematurely.
CW should not occur before 10 edits by the original author; see "How does a post become a Community Wiki post?" in What are “Community Wiki” posts?.
@Arjan, the 5 mentioned edits were fixing typos, bringing the total to/over 10.
Related: Stop auto-cw for self-edited posts and Could we have the ability to mark a change as minor in questions or answers? (declined).
@TimPost , if OP makes major changes to the question, it should be marked for a Review. Such behavior would be quite bad for SO, as it generates answers, which make no sense in context of "current" question. I feel like ability to distinguish between minor and major edits could have a lot of positive use-cases. And ability to stop people from gaming system for "bumps" would be a nice addition.
This no longer applies.
I only encountered that on answers I really poured love into, e.g. coming back after some time and so on. And community wiki does not always apply then.
What you can do in these cases is flag your own answer and ask a moderator to restore to non-community-wiki. This normally works without any problems.
I don't think such things can be reasonably managed automatically; or let's say: they don't need to be, IMHO.
I tend to do a lot of typo and semantic corrections, too, since I'm a bit of a perfectionist. Luckily, there's a five-minute grace period for edits.
If your post goes over 10, you should probably ask a moderator to unwikify it. I'm not sure if you can delete your own question once it becomes a wiki, but if you can, that may be an option for you, too.
Meanwhile, you may also want to consider writing your questions/answers off-line, and giving yourself a cooling-off period before pasting it into SO. I do that myself when I have a really long or complex post, and find that it cuts down on the revisions a lot.
And what if you write your answer late in the evening, post it, and in the morning decide to re-check it? It would be useful to have the ability to create linkable draft versions. You could even drop one in chat and ask some people with more clue to check it out. Though creating a draft system would be a quite large addition to the SO infrastructure... and, I fear, quite abusable.
I guess the bottom line is simple: don't improve your answers. You get punished for it. Post in time for the OP to see it, make it high quality or earn actual rep for it. Pick any two...
|
STACK_EXCHANGE
|
It is crucial that we do not assign functionality to an HTTP method that exceeds the specification-defined boundaries of that method. For example, an HTTP GET on a particular resource should be read-only. It should not change the state of the resource it is invoked on.
Intermediate services like a proxy-cache, a CDN (Akamai), or your browser rely on you to follow the semantics of HTTP strictly so that they can perform built-in tasks like caching effectively.
If you do not follow the definition of each HTTP method strictly, clients and administration tools cannot make assumptions about your services, and your system becomes more complex.
Let’s walk through our object model to determine which URIs and HTTP methods are used to represent each operation. For a better understanding, we will consider an eCommerce example, so there will be multiple Orders, Products, and Customers available in the system.
Browsing All Orders, Customers, or Products
The Order, Customer, and Product objects in our object model are all very similar in how they are accessed and manipulated. One thing our remote clients will want to do is to browse all the Orders, Customers, or Products in the system. These URIs represent these objects as a group:
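The listing is not reproduced in this copy; based on the /orders/222 URI used later in this article, the group URIs would presumably be:

```
/orders
/products
/customers
```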
To get a list of Orders, Products, or Customers, the remote client will call an HTTP GET on the URI of the object group it is interested in. An example request would look like the following:
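A sketch of such a request (the host name is hypothetical):

```
GET /orders HTTP/1.1
Host: example.com
Accept: application/xml
```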
Our service will respond with a data format that represents all Orders, Products, or Customers within our system. Here’s what a response would look like:
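A sketch of such a response, using the XML format implied later in this article (the element names are illustrative):

```
HTTP/1.1 200 OK
Content-Type: application/xml

<orders>
   <order id="222"/>
   <order id="223"/>
   <order id="224"/>
</orders>
```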
One problem with this bulk operation is that we may have thousands of Orders, Customers, or Products in our system and we may overload our client and hurt our response times. To mitigate this problem, we will allow the client to specify query parameters on the URI to limit the size of the dataset returned:
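For example, using the two query parameters the article defines, a request for the first five Orders might look like this (host hypothetical):

```
GET /orders?startIndex=0&size=5 HTTP/1.1
Host: example.com
```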
Here we have defined two query parameters: startIndex and size. The startIndex parameter represents where in our large list of Orders, Products, or Customers we want to start sending objects from. It is a numeric index into the object group being queried. The size parameter specifies how many of those objects in the list we want to return. These parameters will be optional. The client does not have to specify them in its URI when crafting its request to the server.
Obtaining Individual Orders, Customers, or Products
We can use a URI pattern to obtain individual Orders, Customers, or Products:
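The pattern listing is omitted in this copy; based on the examples that follow, it would be:

```
/orders/{id}
/products/{id}
/customers/{id}
```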
We will use the HTTP GET method to retrieve individual objects in our system. Each GET invocation will return a data format that represents the object being obtained:
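A sketch of such a request (host hypothetical):

```
GET /orders/222 HTTP/1.1
Host: example.com
Accept: application/xml
```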
For this request, the client is interested in getting a representation of the Order with an order id of 222. GET requests for Products and Customers would work the same. The HTTP response message would look something like this:
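A sketch of that response; the body is illustrative, since the article's actual Order representation is not shown here:

```
HTTP/1.1 200 OK
Content-Type: application/xml

<order id="222">
   <customer>...</customer>
   <items>...</items>
</order>
```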
The response code is 200, “OK”, indicating that the request was successful. The Content-Type header specifies the format of our message body as XML, and finally we have the actual representation of the Order.
Creating an Order, Customer, or Product
There are two possible ways in which a client could create an Order, Customer, or Product within our order entry system: by using either the HTTP PUT or POST method. Let’s look at both ways.
Creating with PUT
The HTTP definition of PUT states that it can be used to create or update a resource on the server. To create an Order, Customer, or Product with PUT, the client simply sends a representation of the new object it is creating to the exact URI location that represents the object:
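A sketch of such a request (the representation in the body is illustrative):

```
PUT /orders/222 HTTP/1.1
Host: example.com
Content-Type: application/xml

<order id="222">
   <customer>...</customer>
   <items>...</items>
</order>
```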
PUT is required by the specification to send a response code of 201, “Created”, if a new resource was created on the server as a result of the request.
The HTTP specification also states that PUT is idempotent. Our PUT is idempotent, because no matter how many times we tell the server to “create” our Order, the same bits are stored at the /orders/222 location. Sometimes a PUT request will fail and the client won’t know if the request was delivered and processed at the server. Idempotency guarantees that it’s OK for the client to retransmit the PUT operation and not worry about any adverse side effects.
The disadvantage of using PUT to create resources is that the client has to provide the unique ID that represents the object it is creating. While it is usually possible for the client to generate this unique ID, most application designers prefer that their servers (usually through their databases) create this ID. In our hypothetical order entry system, we want our server to control the generation of resource IDs. So what do we do? We can switch to using POST instead of PUT.
Creating with POST
Creating an Order, Customer, or Product using the POST method is a little more complex than using PUT. To create an Order, Customer, or Product with POST, the client sends a representation of the new object it is creating to the parent URI of its representation, leaving out the numeric target ID. For example:
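A sketch of such a request; note that, unlike the PUT example, no ID appears in the URI or the body:

```
POST /orders HTTP/1.1
Host: example.com
Content-Type: application/xml

<order>
   <customer>...</customer>
   <items>...</items>
</order>
```

The server would then respond with 201, "Created", typically including a Location header (e.g. Location: /orders/223, with a server-assigned ID) so the client learns the URI of the new resource.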
Updating an Order, Customer, or Product
We will model updating an Order, Customer, or Product using the HTTP PUT method. The client PUTs a new representation of the object it is updating to the exact URI location that represents the object. For example, let’s say we wanted to change the price of a product from $199.99 to $149.99. Here’s what the request would look like:
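A sketch of such a request; the product ID and representation are illustrative:

```
PUT /products/411 HTTP/1.1
Host: example.com
Content-Type: application/xml

<product id="411">
   <name>...</name>
   <price>149.99</price>
</product>
```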
As we stated earlier in this article, PUT is great because it is idempotent. No matter how many times we transmit this PUT request, the underlying Product will still have the same final state.
When a resource is updated with PUT, the HTTP specification requires that you send a response code of 200, “OK”, and a response message body or a response code of 204, “No Content”, without any response body. In our system, we will send a status of 204 and no response message.
We could use POST to update an individual Order, but then the client would have to assume the update was non-idempotent and we would have to take duplicate message processing into account.
Removing an Order, Customer, or Product
We will model deleting an Order, Customer, or Product using the HTTP DELETE method. The client simply invokes the DELETE method on the exact URI that represents the object we want to remove. Removing an object will wipe its existence from the system.
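A sketch of such a request:

```
DELETE /orders/222 HTTP/1.1
Host: example.com
```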
When a resource is removed with DELETE, the HTTP specification requires that you send a response code of 200, “OK”, and a response message body or a response code of 204, “No Content” without any response body. In our application, we will send a status of 204 and no response message.
In our system, Orders can be cancelled as well as removed. While removing an object wipes it clean from our databases, cancelling only changes the state of the Order and retains it within the system. How should we model such an operation?
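One option, sketched here with the hypothetical Order 222, is to overload DELETE with a query parameter:

```
DELETE /orders/222?cancel=true HTTP/1.1
Host: example.com
```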
Here, the cancel query parameter would tell our service that we don’t really want to remove the Order, but cancel it. In other words, we are overloading the meaning of DELETE.
While we are not going to forbid you from doing this, we will tell you that you shouldn’t. It is not good RESTful design: you are changing the meaning of the uniform interface. Using a query parameter in this way is actually creating a mini-RPC mechanism. HTTP specifically states that DELETE is used to delete a resource from the server, not cancel it.
That’s it for now on assigning HTTP methods in RESTful web services. Keep learning and sharing. 🙂
|
OPCFW_CODE
|
Feature - Security Scan
Is your feature request related to a problem? Please describe.
Should Scorecard run a security scan on repositories? With something like https://github.com/coinbase/salus
Here's a question I would ask: do the raw results of an automated security scan provide meaningful insight into the security of a project? I would argue they do not, due to the high number of false positives in SAST tools.
IMO gosec provides good security defaults.
The whole idea is to make OSS projects aware of security implications that most projects aren’t addressing. The goal is to encourage them to address these.
Most of these linters have a nolint option; if it is used, it means the owner of the repository took the time to triage those findings, even the false positives.
I've thought about this further, and I think this is worth doing.
There are various shades of grey we may consider here (sorry, was OOO yesterday):
1. Check for specific SAST tools. As simple as looking for bot names run on PRs and/or parsing config files (what David said).
2. Classify what guarantees a SAST tool provides. Example: go-kart tries to detect command injection and path traversal; I don't think gosec does. But gosec does a good job at checking for bad crypto APIs (e.g., md5). This is valuable for understanding what we get from a particular tool. All this is time-consuming and the tools may change over time.
3. Surfacing SAST results in Scorecard. The logic for this cannot live in the Scorecard codebase because it requires running 3P tools, would require sandboxing on users' machines, etc. Long-term, however, we can build this as a service. This has the advantages of:
3.1 Letting repo users read the results without installation.
3.2 Letting everyone read the results, which is currently not possible in GitHub scans (they are private to maintainers only)
3.3 Results can be surfaced in Scorecard via a REST API to the service - similar to what we do for OSV.
One big blocker is false positives, as already mentioned. At this point we need to think about how we handle annotations: language-specific vs language-agnostic? Then, to remove FPs, we could proceed in stages:
1. Trust the repo owners to add annotations themselves.
2. Have an army of security consultants who can vet the annotations. Scorecard can check who accepted the change.
With a service, we could, long-long-term:
Automatically file issues on the repo
Wait X days for a fix or an annotation
After X days have passed, file an OSV bug which can easily be surfaced in the Vulnerabilities check.
I like what you've put down here. I just wanted to note that the things you put under long-long-term sound like the things Project Omega from Michael Scovetta aims to do.
do you have a link to the project?
Yes, https://docs.google.com/document/d/1u7Ps18dzu9M-HF7ZHTK6VB5jLaVJvnw6uq3o7qw5yGE/edit?usp=sharing
also related https://ostif.org/google-is-partnering-with-open-source-technology-improvement-fund-inc-to-sponsor-security-reviews-of-critical-open-source-software/
cc @scovetta this thread looks awfully similar to alpha/omega project, as @chrismcgehee pointed out. Let's try to ingest their data in scorecard when it's ready; or even work with them.
|
GITHUB_ARCHIVE
|
OPENTEXT – THE INFORMATION COMPANY
Together, Carbonite (a leader in data protection) and Webroot (a leader in data security) form the SMB and Consumer Division of OpenText. The mission of our joint offering is to make cyber resilience simple, reliable, and accessible in the connected world. We enable comprehensive data protection for companies, consumers, and our vast network of partners around the globe.
Our business requires top talent. We foster a thriving, dynamic environment rich with inventive minds and entrepreneurial spirit. From engineering to sales and marketing, operations and customer support, our employees are empowered and encouraged to build their careers at OpenText.
We pride ourselves on hiring standout candidates who shine in a workplace that encourages collaboration and teamwork. We are growing fast, and looking for talented candidates around the globe. Are you ready to grow with us?
This is an exciting opportunity for a C# Lead Software Developer to work on the very decoupled, scalable architecture of Web APIs and applications that powers the backbone of our endpoint security platform. There is never a dull day, and candidates will find themselves context switching across a number of different applications and projects to meet ever-evolving business needs.
As well as C#, candidates will get to work with and learn exciting cloud-based technologies using Amazon Web Services, NoSQL databases for ‘Big Data’, and Docker containers for deployment, to name a few. We always like to look at what is on the horizon with new technologies and what could be a good fit, so there is always something new to learn.
You are great at:
- Designing, building, and maintaining efficient, reusable, and reliable C# code
- Leading projects in terms of architecture and planning of Development Tasks
- Working side by side with the Software Development Manager in ensuring best practices such as SOLID principles and TDD methods are being followed
- Having a test first mindset on code testing through unit tests
- Ensuring the best possible performance, quality, and responsiveness of applications
- Helping maintain code quality, organization, and automation
- Mentoring of other developers
- Proactively recommending and leading improvements to the Development Lifecycle
What it takes:
- Very strong in C#, with a good knowledge of its ecosystems including RESTful Web API’s and MVC
- 7 + years of commercial software development experience
- Familiarity with the .NET framework (4.5 to 4.8, including .Net Core (up to version 3.1 desirable))
- Strong understanding of object-oriented programming, including SOLID principles (essential)
- Experienced using TDD (essential)
- Familiar with using Agile methodologies, such as SCRUM and Kanban
- Up to date knowledge of OWASP web security risks such as CSRF and XSS
- Understanding fundamental design principles behind a scalable application
- Proficient understanding of code source control versioning tools
- Experience with using cloud-based technologies, e.g. AWS
- Familiar with NoSQL databases (desirable)
- Worked with very high load API’s and applications (desirable)
At OpenText we understand and value diversity in our employees and are proud to be an Equal Opportunity Employer. We hire the best talent regardless of race, creed, color, national origin, ancestry, disability, marital status, sex, age, veteran status or sexual orientation. If you require accommodation at any time during the recruitment process please email [email protected]. Applicants have rights under federal employment laws including but not limited to: the Family and Medical Leave Act (FMLA), Equal Employment Opportunity, and the Employee Polygraph Protection Act.
|
OPCFW_CODE
|
Message from discussion Using NewRelic with Play!
Date: Tue, 26 Jul 2011 09:41:40 -0700 (PDT)
Subject: Using NewRelic with Play!
From: Ryan Neufeld <r...@gushhq.com>
To: play-framework <firstname.lastname@example.org>
Content-Type: text/plain; charset=ISO-8859-1
I'm attempting to set up NewRelic monitoring for production usage and
I'm having trouble getting the newrelic.jar agent to report any
transactions to the service.
* Drop into a Play! project (with play-scala module, in our case)
* Add the necessary newrelic/newrelic.jar and newrelic/newrelic.yml to
the project - enabling monitoring for development, keys, etc.
* $ play run -javaagent:./newrelic/newrelic.jar
The application loads and runs correctly. Inspecting newrelic/logs/
newrelic_agent.log shows activity, and my NewRelic page shows my
application as a host. I submit some requests to the page and wait the
requisite couple minutes. The agent logs show activity and messages
like: "Jul 26, 2011 11:40:07 AM NewRelic FINE: Reported 30
timeslices", however NewRelic doesn't show any transactions in their dashboard.
Has anyone successfully used NewRelic as a standalone agent in Play! before?
|
OPCFW_CODE
|
Assign multiple keyboard shortcuts for one action
I've been wondering how can I assign to multiple keyboard shortcuts to do the same action?
More specifically, I want to be able to change my volume with both my headset buttons and my keyboard.
I'm able to change the volume with just one at the time: with the one I define in the 'Keyboard shortcuts' application.
Is there any way?
Related: https://askubuntu.com/questions/292494/multiple-keyboard-shortcuts-for-same-command
You can assign multiple keyboard shortcuts (keybindings) for the same command using gsettings command line.
One important thing to know is that the Ubuntu 18.04 Settings GUI only shows the first keybinding for a command, so if you have multiple keybindings for a command, the others won't appear in Settings. You can use gsettings to see all of the keybindings.
Let's say I want to add another keybinding for "Switch to Workspace 1". The default for me was Super+Home, but I want to add a second keybinding Ctrl+1.
# list all keybindings
gsettings list-recursively | grep -e org.gnome.desktop.wm.keybindings -e org.gnome.settings-daemon.plugins.media-keys -e org.gnome.settings-daemon.plugins.power | sort
# confirm no other keybinding conflicts
gsettings list-recursively | grep '<Control>1'
# set multiple keybindings for "Switch to Workspace 1"
gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-1 "['<Super>Home', '<Control>1']"
# confirm value is set correctly
gsettings get org.gnome.desktop.wm.keybindings switch-to-workspace-1
Now you can use either Super+Home or Control+1 to Switch to Workspace 1. Remember, you will only see the first one Super+Home in the Settings GUI, but it will work!
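The question was specifically about volume (media) keys, and in this GNOME version the media-keys settings hold a single string rather than an array, so a second binding can instead be added as a custom shortcut. A sketch using gsettings (the slot name custom0 and the chosen key are assumptions; pick a free slot and your own binding):

```shell
KEYPATH=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/

# Register the custom slot (this overwrites the list; merge with any existing entries)
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$KEYPATH']"

# Name, command to run, and the extra key combination
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEYPATH name 'Volume down (extra)'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEYPATH command 'amixer set Master 10%-'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEYPATH binding '<Super>F11'
```

This leaves the stock media key untouched and adds the second trigger alongside it.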
This only works with the window manager hotkeys, not the media keys, which is what the user was looking for. Unfortunately, media-keys only accepts strings, not arrays.
Didn't work for me on Ubuntu 18.04.4 LTS with the Unity desktop. Only the first keybinding in the list seems to be recognized.
This method worked (through dconf-editor), I could add second shortcuts for media commands (Arch Linux).
I do just that with Custom Shortcuts:
I use the command xdotool key --clearmodifiers XF86AudioLowerVolume (and XF86AudioRaiseVolume) instead of amixer set 'Master' 10%+. The only difference/downside I notice is that even if Repeat Keys on the Typing tab is set, it doesn't apply to this custom shortcut.
However, I wasn't able to do some things like use Fn+F7 to turn off my screen (xset dpms force standby). It doesn't detect it as a shortcut event.
You could probably use xmodmap to reassign the buttons on your headset to the same as you use for your keyboard.
Yes, but now I need to know what is the default action of 'Audio raise/lower volume', from the Keyboard shortcuts application. Because, when I use my own command 'amixer channel set opt', it raises/lowers the MBO sound card, while I'm listening with my headset.
@ksemeks : the actions are : amixer set Master 10%+ to raise the volume amixer set Master 10%- to decrease the volume. To get the gauge notification, you could use notify-send, but I am not sure how to set the gauge
@danjjl: well, that's why I'm trying to find the exact action of the Volume up/down, from Setting->Keyboard->Shortcuts
Update for Ubuntu 20
Have to say, these days dconf-editor is very easy to use, really nice.
1. Location input
Folder too deep? Copy & paste the location. Click the title bar, or press Ctrl + F or Ctrl + L, to activate the location bar input, then enter:
/org/gnome/desktop/wm/keybindings/switch-to-workspace-up
/org/gnome/desktop/wm/keybindings/switch-to-workspace-down
2. Search in folder
Many items in the folder? Search it. NOTICE: this is different from Ctrl + F.
3. Bookmark
Mark the locations you often use.
4. Changes
Before applying the modifications, it even shows you the difference.
5. Default value
You can use the default value and still keep your own changes saved in the setting (but not effective).
It's awesome, a perfect user experience.
|
STACK_EXCHANGE
|
ACA Connect is the place for members to discuss camp-related topics with their fellow camp and youth development professionals. The goal is networking, information and idea sharing, and engagement. Here are some community guidelines. Please make sure to review the Code of Conduct for all the community rules. We appreciate your cooperation in making this a place of comfort and achievement.
Participate. Don’t just lurk. Share your information and insights. We all gain when we all participate.
Forum etiquette: There are some standard principles for optimal value in a forum. We ask you to observe them.
- Search before posting. Look through the existing threads to see if your subject exists before starting a new thread.
- Use a simple and meaningful subject line. “I need help” is not useful. Instead: “I need help finding registration software.”
- Stay on topic. If you want to discuss something that is not related to the subject line, start a new thread. (But, search first!)
- Duplicate posting. Avoid posting the same message in your local community and the member open forum. It runs the risk of splitting a conversation into two disconnected threads. If the topic can be best served and answered nationally, please post your message to the member open forum.
- If you feel someone in the Forum is misbehaving, don’t address it in the Forum. Notify ACA by using the Contact Us form.
- Be patient and tolerant. We were all newbies once.
- Be hard on ideas - but easy on people.
- No personal attacks, flame wars, threats, or rants.
- Respect diversity.
- Consider reading your post out loud before posting it.
- Say please and thank you.
- Be brief and clear.
- Don’t shout by using ALL CAPS. It is harder to read and considered rude.
- Use good grammar and check your spelling. Don’t use slang (wassup), shorthand (u r for you are), and explain technical terms, jargon, abbreviations and acronyms.
- Use language that would be appropriate in mixed company. Don’t curse.
- Be careful about humor and sarcasm. They are easily misunderstood online.
- Make sure it is appropriate for the camp community and the discussion group in which you are posting.
- Do not plagiarize or post copyrighted material. Summarize and refer.
- Cite references or post links to supporting material when appropriate.
- No commercial activity, spamming.
- Content is king. Avoid meaningless posts like “ditto”, “Yes”, and “Me, too” unless the original poster has specifically asked for multiple opinions. (A better approach is for the original poster to ask for opinions to be sent directly to them, and then repost with a summary of responses.)
|
OPCFW_CODE
|
Inheritance in Kotlin
A free video tutorial from Tim Buchalka's Learn Programming Academy
Professional Programmers and Teachers - 842K+ students
4.5 instructor rating • 50 courses • 849,025 students
Learn about inheritance
Learn more from the full courseLearn Kotlin and Create Games Using the LibGdx Library
Become a real games programmer. Create Games Using Kotlin with the LibGDX Game Development Framework.
21:05:55 of on-demand video • Updated February 2020
- Learn how to create your own games
- Understand how to write reusable code that can be reused in other games
- Learn how to create your own tools for game development
- Have learned the Kotlin language
- Understand how to use many useful design patterns
English [Auto] In this video you will learn what inheritance is and how to use it. Inheritance is one of the fundamental attributes of object-oriented programming. It lets you define a child class that reuses (in other words inherits), extends, or modifies the behavior of a parent class. The class whose members are inherited is called the base class or superclass. The class that inherits the members of the base class is called the derived class or child class. A class can only inherit from a single class. However, inheritance is transitive, which allows you to define an inheritance hierarchy for a set of types. In other words, for example, type D can inherit from type C, which inherits from type B, which again inherits from the base class type A. Because inheritance is transitive, the members of type A are available to type D. Inheritance is used to express an "is a" relationship between a base class and one or more child classes, where the child classes are specialized versions of the base class; the derived class is a type of the base class. For example, the Enemy class represents any kind of enemy, and the Pikeman and Archer classes represent specific types of enemies. Now, all this can seem a bit confusing, but it is really easy; you'll see through examples. So let's say our game has different enemies, for example a Pikeman and an Archer. They both have some common behavior, for example they can both attack, but they each attack with their own weapon. The common behavior goes into the base class, or parent class. So let's create our base class for our enemies and call it Enemy; the child classes will be Pikeman and Archer. First let's remove our code inside main, and let's remove the other classes that we have inside this file. We declare a class Enemy, and it has a var health of type Int and a var weapon of type String. In this class we will use an init block to print that init was called.
So in the init block we will just print "Enemy init called", and then we will add an attack function. Since both our classes will be able to attack, but not with the same weapon, we print the weapon that the class, or our enemy, is using. To extend a class in Kotlin we need to mark the class with the open keyword; that makes it open for extending. By default every class is final and we can't extend it; only open classes can be extended. So let's create the class Pikeman first. When we create class Pikeman, it takes parameters health of type Int and armor, and now we want to extend Enemy: we use a colon and call the constructor of Enemy, passing on our health parameter and a value for the weapon parameter. Now we add the curly braces, but we have a compilation error: "this type is final, so it cannot be inherited from". That means we can't extend Enemy unless we mark it open, so in Kotlin we need to mark the class open to be able to extend it. Here we are calling the parent constructor of Enemy, which is its primary constructor, and our Pikeman likewise has the primary constructor parameters health and armor. Let's print "Pikeman init called" in its init block, so we can see in the console the flow and what gets called first. Next let's create our Archer class: it will also have health, plus an arrow count, and it will again extend Enemy while at the same time calling the parent constructor with health and with the weapon, which will be a bow. And again we will just print "Archer init called" in its init block.
To see the class hierarchy, we can use the IDE. First, note that when we have an open class there is a little gutter icon saying it is subclassed by our child classes. When we click on this icon we can see that Archer and Pikeman are child classes of the Enemy class, and we can just click on one of them and our cursor will move to that class. The IDE does the work for us. We can also click on a class, go to Navigate, and there is a Type Hierarchy, which you can show with Ctrl+H. You can see that our Archer has a parent Enemy, and Enemy has a parent too; not Object from Java, though, because in Kotlin all classes have a common superclass called Any, and that is the default supertype for a class with no declared supertypes. Let's see the subtypes hierarchy: Archer doesn't have any subtypes, but if we click on Enemy and again open the type hierarchy with Ctrl+H, you can see that Enemy has the children Pikeman and Archer. Now let's close the hierarchy. Inside the main method we create a Pikeman; in the Pikeman constructor we pass health and armor, say 100 and 50, and we get the object so we can call the attack method. Even though this attack method is declared in the Enemy class, since we are extending Enemy with Pikeman we automatically inherit and have access to all the methods and all the attributes of Enemy, including all the properties. Then let's create an Archer with health, and the arrow count, let's say, will be 5, and then we can again call attack. So let's run the code and see the control flow of our code. In the console you will first see "Enemy init called". Why is the Enemy init called first? Well, we are calling the Pikeman constructor.
If you look at the Pikeman constructor, you will see that we are extending Enemy, and that means we are also calling the Enemy constructor. So the Enemy constructor gets called, and when that constructor gets executed its init block is executed; that is why "Enemy init called" comes first, then "Pikeman init called", and when we call the attack method it prints that we are attacking with a pike. It is the same principle for the Archer: it prints "Enemy init called", "Archer init called", and attacking with a bow. So this is inheritance, where we are extending the base class Enemy. You will notice that if we type pikeman. we get access to health, armor, and weapon, but on archer we have access to health and weapon while we don't have armor, because armor is just a property declared inside the Pikeman class, while health and weapon are inherited. Actually they are not shared; they are inherited, in both. That's it. I will see you in the next video.
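The code built up in this lesson is not shown in the captions, so here is a reconstruction of it as described in the narration (the concrete health, armor, and arrow-count values are illustrative):

```kotlin
// Classes are final by default in Kotlin; `open` allows this one to be extended.
open class Enemy(var health: Int, var weapon: String) {
    init {
        println("Enemy init called")
    }

    fun attack() {
        println("Attacking with $weapon")
    }
}

// `: Enemy(health, "pike")` calls the parent's primary constructor.
class Pikeman(health: Int, var armor: Int) : Enemy(health, "pike") {
    init {
        println("Pikeman init called")
    }
}

class Archer(health: Int, var arrowCount: Int) : Enemy(health, "bow") {
    init {
        println("Archer init called")
    }
}

fun main() {
    val pikeman = Pikeman(100, 50)
    pikeman.attack()   // inherited from Enemy; prints "Attacking with pike"

    val archer = Archer(80, 5)
    archer.attack()    // prints "Attacking with bow"
}
```

Running main shows the parent's init block executing before the child's, exactly the order discussed in the video.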
|
OPCFW_CODE
|
Restructure Tablespaces; merge datafiles
I'm very new to this whole DBA thing, so I apologize if this is a redundant question. I am working on 9i databases.
We have a couple of databases which need some restructuring. The structure of the tablespaces are fine, but the issue I have is with the unnecessary amount of datafiles within each tablespace (for example we have one tablespace with 8 datafiles totalling 16GB although only 5GB is used in this tablespace...and each datafile is at most 30% utilized). I'd like to combine the datafiles but I am not completely clear on the best process in which to do this.
From the research I have done, it seems as though I should do a tablespace export, drop the tablespace, recreate the tablespace with the amount of datafiles and the size I am seeking and do an import. This sounds too easy to be honest. I primarily use OEM, and have exported a db through OEM, but I am not sure if doing an export/import using OEM is the right way, or if there are extensive scripts I need to create for this.
Are you sure the datafiles aren't spread across devices to balance out I/O?
No, sorry, I should have been more specific. Both these DBs are on one Linux partition. What has happened is a massive cleanup effort, and too many datafiles were created from the get-go (I did not create these DBs). We'd really prefer to have these down to one or two per tablespace, not to mention some of these tablespaces will not grow, as one of these DBs is more like a staging database for our application configuration.
Another way would be to create a new tablespace with the appropriate datafiles. Then do:
Alter table table_name move tablespace new_ts_name
You'll also need to rebuild your indexes at that point.
Personally, I like this better since you never delete your data.
yep, concur with that - safe and easy (and quicker)
Support to Jodie and Davey, although the imp will do as well.
Just using this method will leave your objects moved to a tablespace with a name different from the initial one, since you will have to create a new tablespace and move the objects to it. The new tablespace will have to have a different name, and you do not have a rename tablespace in 9i. I personally do not see any problem with that, but I still wanted to mention that difference.
Thank you everybody.
I might try out the "alter table" and move it to a temp tablespace, drop the current one, recreate it with the correct number of datafiles and do the "alter table" once again. I need to keep the tablespace(s) with the correct name since we have a great deal of scripts/nightly jobs reliant on this. We would also like to keep the naming consistent with all our databases.
Does this sound like a sound plan? (Can you tell I'm new at this and have no confidence whatsoever!!)
Another quick question... we have a separate tablespace dedicated to our indexes, so if I were to do this on both the tablespaces would I still have to rebuild all indexes? Wouldn't the objects essentially be as if they were not touched?
alter table move will invalidate your indexes, you have to rebuild them
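Pulling the thread's advice together, the approach can be sketched roughly as follows; every tablespace, table, index, datafile name and size below is a placeholder, and as the disclaimer above says, rehearse this on a non-production copy first:

```sql
-- 1. Create a temporary tablespace with the desired datafile layout.
CREATE TABLESPACE users_tmp
  DATAFILE '/u01/oradata/db/users_tmp01.dbf' SIZE 6144M;

-- 2. Move each table out of the old tablespace (repeat per table).
ALTER TABLE my_table MOVE TABLESPACE users_tmp;

-- 3. Drop and recreate the original tablespace, keeping its name
--    but with fewer, larger datafiles.
DROP TABLESPACE users_ts INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE users_ts
  DATAFILE '/u01/oradata/db/users01.dbf' SIZE 6144M;

-- 4. Move the tables back into the recreated tablespace.
ALTER TABLE my_table MOVE TABLESPACE users_ts;

-- 5. ALTER TABLE ... MOVE invalidates indexes, so rebuild each one.
ALTER INDEX my_index REBUILD;
```

The advantage over export/import, as noted above, is that the data is never deleted from the database while being relocated.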
Pablo (Paul) Berzukov
Author of Understanding Database Administration
available at amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test non-production environment advisor should not held liable or responsible for any actions taken based on the given advice.
OK, I tried. )
I will rebuild them.
Again, thank you everyone for your help!
Click Here to Expand Forum to Full Width
|
OPCFW_CODE
|
32,000 page views!!
What do you think of my new Pi2B with its Pibow Coupé Flotilla case from Pimoroni - I think it's even snazzier than the Pi B+ in its red Pibow Coupé case!
This is just a quick description of a means of making a Graphical User Interface (GUI) using Python (with tkinter, a Python library for drawing graphics) to show and control the status of the GPIO ports on the Raspberry Pi.
Here's a screenshot, taken on the Pi using the scrot program with the command:
scrot -cd 10 -u captureKC.jpg
This is showing that I have selected GPIO pins 23, 24 and 25 as outputs, and when I click on the respective tick-boxes to send these Outputs to High or Low, the RGB LED anodes are activated through the three 330Ω resistors (Ideally 3 different values of resistors should have been used to get pure white when all are on).
The link to information about scrot is here:
I have temporarily (only temporarily, I hope) lost the ability to run the Pi headless, (and I can't connect to WinSCP to transfer files from the Pi to the PC!!), so I had to use scrot to capture the window on the Pi, and I exported this file to the PC using a memory stick!
I should probably at this point explain what the above command does: -cd 10 gives a 10 second countdown before the image is captured, to allow time to put the focus on the correct window, and -u tells it to capture the current active window, rather than the whole screen. The file captureKC.jpg is written to the current directory of the Pi, in this case, the Home Folder, where my program is.
The reason for the 10 second delay is to allow time for me to leave the current active window (the LXTerminal) and click on the window I wanted to image.
The Python GUI was written by scotty101 and the code is shown below:
You can see that scotty101 intends to put more functionality on this code. Thanks Scotty!
Here is a picture of the very simple connections to the Pi:
You can just see the 330Ω resistors on the mini breadboard connecting the RGB LED's three anodes to the Cyntech B+ 40-way Paddle Board (previously described in my Post 55 at http://smokespark.blogspot.co.uk/2014/10/55-exploring-pis-gpio-ports.html). The RGB LED (with all three colours illuminated, as indicated in the screen-shot above) has been covered with a light diffuser (Draft Guinness widget - much better than a table tennis ball).
What I would like to do with this is to develop it to include GUI control of the RasPi Camera. That would be really coooool !
|
OPCFW_CODE
|
Just last week Michael Deutch, Mindjet's Chief Evangelist, wrote a piece about some of the updates to MindManager Web which bears looking at. Michael shared that they have added over 100 usability features to MindManager Web. Some of these new features include:
* Use shortcut keys for working more efficiently
* Save maps locally and share with other MindManager users
* Edit and Replace documents in your workspace
* Invite ANYONE to instant meetings (invite other account members or anyone else with their email)
* Print preview improvements
* And 100+ additional usability improvements to enhance your mapping experience
While I usually have access to a computer or laptop with MindManager 8 installed, there have been a couple of times this year when I needed access to MindManager 8 and was not in front of my computer. In this instance, having access to MindManager Web made it easy for me to map out the material from the computer lab at the college that I teach at. It was a great feeling being able to access my mind map from the web and to be able to format it just the way that I needed to. I have found MindManager Web is very responsive and gives you all of the essential tools that you need to get the job done. I could easily add markers, relationships, and format my map as if I was running MindManager 8 from my desktop computer. To say the least, my team of educators was very impressed with the outcome, which we later used as part of our presentation. Having access to my mind maps in my Workspace from virtually any computer connected to the web is really a fantastic feature.
I was really intrigued with the new Instant Meeting feature, which would allow me to invite not only individuals that are part of my account, but anyone with an email address. I decided to give that feature a test drive and invited my wife into an Instant Meeting. Once I started the session, I entered my wife's email address, MindManager Web ran a small viewing application, and I was off and running. My wife opened her email and clicked on the link that was provided, and again a small Instant Meeting applet ran. Once this was complete my wife could see my screen and the map I was working on. MindManager Web worked very smoothly and now gives me the freedom to share my maps and other documents with others who do not have a Mindjet Connect subscription. I think that you too will find MindManager Web to be an extremely handy tool when you are away from your laptop or desktop and need access to your maps. You will find MindManager Web very responsive and an aesthetically pleasing environment to work within. Check out the improved Instant Meeting feature in MindManager Web when you need to share mind maps or other documents with your colleagues. Give the Instant Meeting feature a try and let me know how it goes.
|
OPCFW_CODE
|
Introductory statistics at minimum. p-values, different significance tests, precision vs. accuracy, sampling and bias.
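As a concrete taste of one of these topics, here is a sketch of a two-sample permutation test that produces an empirical p-value for a difference in means; the function name and sample data are our own, not from any particular curriculum.

```python
import random


def permutation_p_value(a, b, n_resamples=2000, seed=0):
    """Empirical p-value for the difference in means between samples a and b."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_resamples):
        # Shuffle the pooled data and re-split it into two groups of the
        # original sizes; count how often the shuffled difference is at
        # least as extreme as the observed one.
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_resamples


# Identical groups should give a large p-value; clearly shifted groups a small one.
print(permutation_p_value([1, 2, 3, 4], [1, 2, 3, 4]))          # 1.0
print(permutation_p_value([1, 2, 3, 4], [11, 12, 13, 14]) < 0.05)  # True
```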
Basic Computer Skills
Speed and use of keyboard shortcuts to the point that a mouse is unnecessary for 70%+ of tasks
Understand filesystem, ability to navigate a unix shell. https://en.wikipedia.org/wiki/List_of_Unix_commands
- ls, cd, pwd, tail, tee, strings, rm, patch, nohup, more, mkdir, man, ln, kill, grep, find, fg, echo, du, diff, df, dd, date, cut, cp, chown, chmod, chgrp, cat, bg, alias, sudo, umount
- extra credit: sed/awk, emacs/vi, curl, which, mkfs, lsblk, blkid
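To show how several of the listed commands combine in practice, here is a small sketch of a session; the directory and file names are arbitrary, and a writable /tmp is assumed.

```shell
mkdir -p /tmp/unix_demo
echo "hello unix" > /tmp/unix_demo/a.txt
cp /tmp/unix_demo/a.txt /tmp/unix_demo/b.txt
ls /tmp/unix_demo | grep '\.txt$' | tail -n 1   # prints b.txt
cut -d' ' -f2 /tmp/unix_demo/b.txt              # prints unix
diff /tmp/unix_demo/a.txt /tmp/unix_demo/b.txt  # no output: files are identical
rm -r /tmp/unix_demo
```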
Any Programming Language
Pick one and learn it. Be able to implement classic algorithms.
- fibonacci, GCD, primes, sorting, summation, …
Hopefully understand the classic data structures too.
- contiguous array, linked list, tree, graph, …
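Sketches of a few of the classic algorithms named above, in Python:

```python
def fib(n):
    """Iterative Fibonacci: fib(0)=0, fib(1)=1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


def gcd(a, b):
    """Euclid's algorithm for the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a


def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]


print(fib(10))           # 55
print(gcd(48, 36))       # 12
print(primes_up_to(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```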
Basic knowledge useful for solving common puzzles.
- Information theory
- rotation ciphers
- substitution ciphers
- Base 16
- Base 2
- JPEG artifacts and image manipulation
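As an example of the cipher puzzles above, here is a sketch of a rotation (Caesar) cipher; ROT13 is the special case shift=13, which is its own inverse.

```python
def rotate(text, shift):
    """Rotate each letter by `shift` positions, preserving case."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave digits and punctuation untouched
    return "".join(result)


print(rotate("Hello, World!", 13))       # Uryyb, Jbeyq!
print(rotate(rotate("secret", 13), 13))  # secret
```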
basic understanding of
Biology and Anthropology
Overview of Biology
AP/University intro level… must be taught by a sentient being that has engaging examples.
Human Behavioral Biology
Strengths and weaknesses of: evolutionary biology, molecular genetics, behavioral genetics, neuroscience, endocrinology (& microbiome) as they apply to human behavior.
esp. leadership structure and processes
Cambodian Genocide Khmer Rouge
Including Anti-Intellectualism and “confessions”
All of these need to include context, of course…
McCarthyism and Red Scare
Witch hunting and demagoguery.
Salem Witch Trials
speaking of witch hunts
when the government accuses you of being a traitor and executes you because it doesn’t like you
Military occupation/invasion means your city is filled with a bunch of rude, tense males that have been told that your country is evil.
Bombing of Hiroshima and Nagasaki
SPECIFICALLY the reasons the president had for it, and looking at US-USSR relations.
Indian Removal Act of 1830
“A pile of American bison skulls – they were hunted almost to extinction in the 1870s. The United States Army encouraged these massive hunts to force Native Americans off their traditional lands and into reservations further west.”
Rwandan Genocide
Note the race tensions as a result of German and Belgian colonialism and preferential treatment of the Hutu.
|
OPCFW_CODE
|
R language packages for anaconda anaconda documentation. Lubuntu is distributed on three types of images described below. Lts stands for longterm support which means five years, until april 2025, of free security and maintenance updates, guaranteed. Download the latest lts version of ubuntu, for desktop pcs and laptops. The aim is that an educator with limited technical knowledge and skill will be able to set up a computer lab, or establish an online learning environment, in an hour or less, and then administer that environment without. You will need at least 384mib of ram to install from this image. The desktop cd allows you to try kubuntu without changing your computer at all, and at your option to install it permanently later. Most powerpc isos are in a ports directory, but for others e.
Show new distribution releases to normal releases, update 14. If a monotonicity direction isotonic or antitonic is not specified for an ordinal. The desktop image allows you to try ubuntu without changing your computer at all, and at your option to install it permanently. This means it is supported for 5 years with critical security, bug and app updates from canonical, the company that makes ubuntu. Jonathan riddell has announced the release of kubuntu 14. I want to download a 64 bit ubuntu version for a toshiba, not a mac. The edubuntu development team will also provide point releases in sync with ubuntu to offer you new installation media containing all the latest. Experience a new ubuntu version named focal fossa with new themes, apps, updated kernel and much lighter and faster, we have screenshots and features to compare. The kde desktop is represented by the plasmadesktop package. It might not work well on certain hardware components, especially proprietary graphics cards. The xubuntu team is pleased to announce the immediate release of xubuntu 14.
At the moment, the kubuntu plasma5 operating system is in heavy development, which means that its not ready for production use. Alternative downloads, torrents, mirrors and checksums. There are several other ways to get kubuntu including torrents, which can potentially mean a quicker download, and links to our regional dvd image mirrors for. Longterm support means that bug fixes and security updates.
Using the list of official cd mirrors and a bit of ingenuity traversing the mirror directory structures for example, click on the parent directory or link you can probably find more mirrors that contain powerpc releases. It includes all the kde packages and applications that are distributed as part of the kde applications 15. It is a standalone graphical application which allows one to search and download youtube videos. Download a bootable image and put it onto a dvd or usb. The r language packages are available to install with conda at. Elsewhere, theres mozilla firefox 28, an all new drivers manager to help get all your hardware set up and. When i download what the download site says is a 64 bit version not the mac version, its properties still read as 64 bit amd. If you dont have ubuntu, i recommend trying out ubuntu 14. The goal is to provide a very lightweight distribution, with all the advantages of the ubuntu world repositories, support, etc. Download apps for ubuntu and linux including apps for project management, photo editing and plenty of software alternatives for popular windows only apps. Service packs are a windows concept, not a linuxunix concept.
Ubuntu is distributed on four types of images described below. Kubuntu is distributed on two types of images described below. Although it is part of the smplayer project, it can be used with any multimedia player such as mplayer, mplayer2, vlc, totem or gnomemplayer. Lubuntu is a flavor of ubuntu based on the lightweight x11 desktop environment lxde, as its default gui. The desktop image allows you to try kubuntu without changing your computer at all, and at your option to install it permanently later. The latest long term support lts version of the kubuntu operating system for desktop pcs and laptops, kubuntu 20. In this post well give you a quick overview of whats new and improved. Download latest eclipse from here download eclipse step 4. Gof test with pvalue calculation based on marsaglias 2004 paper evaluating the. As the second long term support release of the edubuntu, this version will be supported for 5 years, until april 2019. Choose this if you have a computer based on the amd64 or em64t architecture e. Expect to see a more stable desktop and the latest core kde software. Download ubuntu desktop, ubuntu server, ubuntu for raspberry pi and iot devices, ubuntu core and all the ubuntu flavours. Ubuntu is distributed on two types of images described below.
|
OPCFW_CODE
|
How do you access a wrapper class from a Test class?
I have a VisualForce page which uses a controller. Within the controller I have a couple of wrapper classes which hold various form elements on the page. Everything works great, but now it is time to write some test code for it...
I am not able to access the wrapper class from my test class. It errors out saying Invalid type: someWrapper. The wrapper is a public class but I don't think that I have properly instantiated the wrapper class within the controller class.
Here is a general look at what my controller looks like:
public with sharing class someController
{
@TestVisible private List<someWrapper> wrapperList {get; set;}
public class someWrapper {
public Opportunity someOpportunity {get; set;}
public Boolean someBoolean {get; set;}
public someWrapper(Opportunity opp){
someOpportunity = opp;
someBoolean = true;
}
}
}
And here is my test class. It errors out when I am trying to for loop through the wrapperList list.
someController controller = new someController();
for(someWrapper thisListItem : controller.wrapperList){
// Do stuff
}
I realize that I must have to instantiate the wrapper class somehow, but I can't figure out how to do it properly.
I have tried the following:
controller.someWrapper wrapperForUseInTest = new controller.someWrapper();
someWrapper wrapperForUseInTest = new controller.someWrapper();
class wrapperForUseInTest = new controller.someWrapper()
Any help would be greatly appreciated!
Although my wrapper class was defined as public in my controller, I still needed to specify @TestVisible on the controller. Once I did so, it became visible to my Test class.
public with sharing class SomeController
{
@TestVisible private List<SomeWrapper> wrapperList { get; set; }
@TestVisible public class SomeWrapper
{
public Opportunity someOpportunity { get; set; }
public Boolean someBoolean { get; set; }
public SomeWrapper (Opportunity opp) {
someOpportunity = opp;
someBoolean = true;
}
}
}
I also needed to reference the wrapper as a method of my controller. So my for loop had to be modified, like so:
SomeController controller = new SomeController();
for (SomeController.SomeWrapper thisListItem :controller.wrapperList) {
// Do stuff
}
N.B.: @TestVisible is an API v28 (Summer '13) feature; for previous versions, you could simply specify `someController.someWrapper` in your test class.
Thanks for the added note. To confirm this, I changed to 27.0 and removed the @TestVisible.
Can you instantiate a list of wrappers in the same way? I'm having trouble getting this to work in my test class -> List<someExtension.someWrapper> localList;
You should be able to use someWrapper as a list type, just make sure that someExtension is the name of the actual class in which the wrapper resides, NOT the variable name used to hold the instantiated class. I haven't had a chance to test this, but in this case you would use: List<someController.someWrapper> localList;
|
STACK_EXCHANGE
|
How do I open one form at a time under an MDI parent form?
I have an MDI form. Within this MDI form there are multiple buttons that open new forms: btn1, btn2, btn3, btn4, and so on. When I press btn1, form1 loads; when I press btn2, form2 loads. If form1 is already open and I press btn1 again, another copy of form1 opens. Likewise, while form1 is open, pressing btn2 opens form2 as well. I want only one child form open at a time. How do I prevent this?
Did you mean there should be only one instance of form1, one instance of form2, and so on?
All the answers you got are good, so I'm not going to repeat them; I'll just give you an example of the member and method you can use to prevent that from happening:
private Form frm;
private void button1_Clicked(object sender, EventArgs e)
{
if (frm != null)
{
frm.Close();
frm.Dispose();
}
frm = new Form1();
frm.Show();
}
private void button2_Clicked(object sender, EventArgs e)
{
if (frm != null)
{
frm.Close();
frm.Dispose();
}
frm = new Form2();
frm.Show();
}
@AnimeshGhosh anytime
You can read up about mutual exclusion http://msdn.microsoft.com/en-us/library/system.threading.mutex.aspx
It is a general solution to make sure you only have 1 thing (thread, process, form, whatever) of something at the same time. You can even use it inter application.
An example is shown here: http://www.dotnetperls.com/mutex
You can create multiple mutexes, one for each form. Or one for a set of forms, in what ever combination suits you.
Example Scenario:
Form1 creates a mutex with name X
Form2, while being loaded, checks whether mutex X has been created; if so, it closes itself.
Of course you will need to make sure the mutex is Disposed / released when the creator (Form1 in this example) closes, to allow other forms to show.
You can use a flag for this purpose, like this:
bool formOpened;
private void buttons_Click(object sender, EventArgs e){
if(!formOpened){
//Show your form
//..............
formOpened = true;
}
}
//This is the FormClosed event handler used for all your child forms
private void formsClosed(object sender, FormClosedEventArgs e){
formOpened = false;
}
At least this is a simple solution which works.
In the general case, you need an int variable to count the opened forms, like this:
int openedForms = 0;
//suppose we allow maximum 3 forms opened at a time.
private void buttons_Click(object sender, EventArgs e){
if(openedForms < 3){
//Show your form
//..............
openedForms++;
}
}
private void formsClosed(object sender, FormClosedEventArgs e){
openedForms--;
}
Your code is working, but there is a problem: I open form1 using btn1, but when I close it and reopen it, it does not open. So please help me.
@AnimeshGhosh you have to register the formsClosed handler with the FormClosed event of all the child forms, including form1. Something like this: form1.FormClosed += formsClosed;
Does this mean that while you have Form1 open, you still want to be able to open Form2, Form3, and so on?
If you don't want that, you can use form1Instance.ShowDialog() instead of Show()...
But that generally means you can't access the parent form while form1 is open...
King King's answer might be more usable for you, though.
|
STACK_EXCHANGE
|
DO NOT ALLOW PUBLIC ACCESS TO THIS PROGRAMME
DBMA does not require any special permissions and should run with the HTTP server's
user and group credentials. (i.e.: www:www or nobody:www or whatever you are using
-- see your httpd.conf file or ask your Senior Sys Admin or team lead).
Who says, 'We're secured.'?
Perfect intranet admin system security is a worthy but unattainable goal.
Every circumstance has its own unique advantages and deficiencies
including public physical plant access, ex-or-disgruntled employees, pranksters and
penetration from outside.
Securing any system is a living, dynamic process -- checking for and applying operating
system updates, program fixes and patches, scanning programme revisions for desirable
feature additions, reviewing user security and permissions, and generally applying common sense.
Whatever secure environment you use for your enterprise or network management,
using your own privileged user and permission sets, would be a good place to
use DbMail Administrator (DBMA).
If you do not yet have such resources and are installing DBMA into a new or
existing Apache Server on your workstation, the following notes might be helpful.
Think about your database security.
Security of your MySQL or PostgreSQL database is outside the
scope of this article but it should be noted that the username and password
are set in a small flat-file database which is NOT world readable.
Make sure there is no unauthorized access to the namespace in which DBMA resides.
Also, the credentials you use should only have access to the dbmail database
and should not have GRANT permissions, in fact should only have minimal
read/write permissions for the DbMail Db. That's all that is needed.
Some simple "How To's"
1) Configure your HTTP server to listen on localhost or its
non-routable LAN IP address, preferably on a non-standard port. If you run
your system administration and enterprise management on a VPN, see
your Enterprise Manager and get the needed approvals and configuration details
before installing this product.
2) This discussion will assume that:
i. LAN is secure; firewalled; your admin workstation in a controlled environment.
ii. The operating system has been secured and unnecessary services disabled.
iii. The Apache user and group directives are correctly set, appropriate permissions assigned.
iv. The ServerRoot and log directories are protected.
v. User overrides are disabled.
vi. You are using the latest release of your HTTP daemon software (i.e., Apache).
To Alias or Not To Alias? That is the question.
3) CGI access is a somewhat contentious security topic.
Arguably, the isolated (ScriptAlias directive) cgi-bin in Apache is deprecated because of the
demand for dynamic content throughout the modern site.
Many public sites offer dynamic content these days and most admin intranet sites are
entirely dynamic. You can run CGIs from the web root or any directory you may wish to use, with care.
Ideally you would assign the ExecCGI Option to subdirectories of your document tree on an as-needed basis.
This offers a strong measure of security inasmuch as no one but the creator has knowledge of where
scripts will run and where they won't.
Why not "cgi-bin"? There is little reason why, for one thing.
For another, the ubiquitous script directory is easily found.
The default location of Apache server's CGI scripts is an aliased '/cgi-bin/'.
There is no valid reason to use cgi-bin as a script location and many reasons to remove this
and the "/script" directory from your web space so that html
GETS and POSTS from automated web script malware are issued a "Not Found"
error message and thus are stopped dead in their tracks.
Surely this is not going to be an issue on your LAN IP workstation server, you might think.
It could be, in the event of an inadvertency whereby someone makes a mistake on a
firewall IP forward NAT. OK, so that is improbable, but let's just say that it is good practice
to make as secure as possible any information or information technology you are entrusted
with. That includes people's emails and email access credentials.
To accomplish this, delete or comment out (with #) the CGI-BIN ScriptAlias in httpd.conf.
#ScriptAlias /cgi-bin/ "C:/apache2/Apache2/cgi-bin/"
#<Directory "C:/apache2/Apache2/cgi-bin">
#    AllowOverride None
#    Options None
#    Order allow,deny
#    Allow from all
#</Directory>
Add ExecCGI on an as-needed basis.
Sample DBMA Directory Config in httpd.conf file
<Directory "/usr/local/www/dbmailadministrator"> # change for your system
    Options MultiViews ExecCGI
    Order allow,deny
    Allow from 192.168.100 10.10.10 localhost mydomain.net
</Directory>
Don't be humdrum. Get creative with your CGI file extensions.
4) Extensions are dead giveaways.
Although this is an unlikely concern on your LAN workstation, you might
enjoy knowing you have taken one, extra, clever step toward providing
enhanced security for your administrative GUIs.
I could easily argue that .cgi is a deprecated extension by using
widespread common practise as a case in point.
Public site webmasters avoid it like the plague. And why not? Every
'script-kiddie-wannabe-thug' on the planet has tried to exploit
something.cgi at one time or another and there are now a gazillion
computers infected with any one or more of a gazillion malware types
scanning net blocks for something dot pl or dot cgi.
CGI files are most often PERL or C. The more information you give up about your server,
the more vulnerable it becomes.
You don't need to use .cgi as an extension. You can use any non-standard
extension your heart desires as long as you tell your HTTPD daemon how to
handle the file when a request is received. You can also use as many different
extensions as you like. Just tell the server what they are with AddHandler instructions.
Once the HTTP Daemon reads its correctly configured AddHandler instructions,
it will handle files with your non-standard extension accordingly.
The Common Gateway Interface (CGI) is a Noah's Ark-era standard for communication
between a program or script, written in any one of several languages, and a
Web server. The CGI specification is very simple: input from a client is passed to
the program or script on STDIN (standard input). The program then takes that information,
processes it, and returns the result on STDOUT (standard output) to the Web server.
The Web server combines this output with the requested page and returns it to the
client as HTML. CGI applications do not force the server to parse every requested page;
only pages containing CGI-recognized arguments involve further processing. As long
as your server knows which files to process as CGI, it will.
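The CGI contract described above can be sketched in a few lines; this is an illustrative example rather than part of DBMA, and the function name `respond` is our own. The server passes request data via environment variables (and POST bodies on standard input), and the script writes headers plus a body to standard output.

```python
#!/usr/bin/env python3
import os
import sys
from urllib.parse import parse_qs


def respond(environ=os.environ, out=sys.stdout):
    # Read the query string from the environment, as the CGI spec dictates.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    # Emit a header block, a blank line, then the body on stdout; the web
    # server relays this back to the client.
    out.write("Content-Type: text/html\r\n\r\n")
    out.write("<html><body>Hello, %s!</body></html>\n" % name)


if __name__ == "__main__":
    respond()
```

Requesting the script with `?name=dbma` would produce a page greeting "dbma"; with no query string it falls back to "world".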
You may think you are not going to be exposed to attackers inside your LAN.
And you most certainly are not allowing DBMA or any admin application to be
run on a public-access server. So it's a best-practices thing, if you can find
no other reason. Well, you know your situation best and will take the security
measures commensurate with your circumstance.
This is certainly not a "must do" scenario.
To change the file extension of your DBMA CGI executable scripts, use the following or
something like it in your httpd.conf file:
# or whatever extension you wish to use
AddHandler cgi-script .dbma
Next rename all cgi scripts in your package to have a dbma
(or whatever you choose) extension.
> for f in *.cgi; do mv "$f" "${f%.cgi}.dbma"; done
Password Protection Is Important; A "Must Do"
5) Password protection has considerable value.
I suggest that you do password protect DBMA. The inconvenience is inconsequential
and the professionalism of doing so is good for your image among your
peers, and more importantly, the boss. You can password protect your entire
intranet web space or selectively
protect different name spaces with different authentication levels.
Sample htaccess config in Apache's httpd.conf file
<Directory "/usr/local/www/dbmailadministrator"> # change for your system
Options ExecCGI MultiViews
Allow from all
AuthUserFile /usr/local/www/dbmailadministrator/.htpasswd # change for your system
AuthName "Administration Only. Enter username and password."
Change the namespaces to the correct location on your system.
TO CREATE A PASSWORD FILE
>/usr/local/bin/htpasswd -cb .htpasswd user secret
Produces a .htpasswd file containing something like this:
(password is dbmail)
You may have some additional thoughts.
There certainly are additional dimensions.
For example, Apache comes bundled with its own security CGI wrapper
application called suEXEC. suEXEC allows users to run CGI and SSI
programs as the owner of the site as opposed to the owner of the httpd process.
This is not needed for DBMA V2.0, as it can run as the site's user:group with read
permissions on everything except the scripts, which need execute access.
If you are using DBMA V1.1 you may wish to build it into a CGI wrapper.
You can read more about suEXEC at http://httpd.apache.org/docs/suexec.html.
Feel free to ask for help or make your comments.
M. J. [Mike] O'Brien ~ Email
|
OPCFW_CODE
|
#ifndef ACTIONNODE_H
#define ACTIONNODE_H
#include "EventNode.h"
template <typename Event_type>
class ActionNode
{
public:
ActionNode();
ActionNode(Event_type action, std::unique_ptr<EventNode> event);
EventNode* getNextNode();
Event_type getEvent();
// The node owns its successor via unique_ptr, so it is move-only; a "copy"
// constructor taking a non-const reference would silently steal the source's
// next node. Declare explicit move operations instead.
ActionNode(ActionNode&& rhs) noexcept;
ActionNode& operator=(ActionNode&& rhs) noexcept;
protected:
private:
Event_type m_action;
std::unique_ptr<EventNode> m_nextNode;
};
template <typename Event_type>
ActionNode<Event_type>::ActionNode()
{
}
template <typename Event_type>
ActionNode<Event_type>::ActionNode(ActionNode<Event_type>&& rhs) noexcept
    : m_action(std::move(rhs.m_action)), m_nextNode(std::move(rhs.m_nextNode))
{
}
template <typename Event_type>
ActionNode<Event_type>& ActionNode<Event_type>::operator=(ActionNode<Event_type>&& rhs) noexcept
{
    m_nextNode = std::move(rhs.m_nextNode);
    m_action = std::move(rhs.m_action);
    return *this;
}
template <typename Event_type>
ActionNode<Event_type>::ActionNode(Event_type action, std::unique_ptr<EventNode> event) : m_action(action), m_nextNode(std::move(event))
{
}
template <typename Event_type>
EventNode* ActionNode<Event_type>::getNextNode(){
    if(m_nextNode){
        return m_nextNode.get();
    }
    return nullptr; // no successor
}
template <typename Event_type>
Event_type ActionNode<Event_type>::getEvent(){
return m_action;
}
#endif // ACTIONNODE_H
|
STACK_EDU
|
Add the Xpress Optimizer to CVXPy
Allow for solving linear, conic, and quadratic optimization problems, with or without discrete variables, with the FICO Xpress Optimizer. The Xpress problem class also allows for retrieving Irreducible Infeasible Subsystems (IISs) in the event that the problem is infeasible.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Pietro Belotti seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
Looks really good on the whole! I have some comments/questions. Remember to check your code style with flake8.
Thanks for making it flake8 compliant! I'm happy to merge as soon as you get it passing travis-CI. looks like it breaks now because you import xpress in the wrong place.
Coverage decreased (-1.7%) to 87.761% when pulling e0b949880afa2c5e0bb43adf326c69595bfab5ff on merraksh:master into<PHONE_NUMBER>f14e47d2289af8d47938ea5ca70209 on cvxgrp:master.
Looks good! Thanks!
Great! Thanks.
I'm adding an Xpress interface to the very soon to be released 1.0 branch. I'm trying to get a license to test it but you guys could test it/fix it if you like.
After downloading the 1.0 branch and setup, a python setup.py install seems
to run fine, but then upon import cvxpy this is what happens:
File "/home/pietro/code/cvxpy-1.0/cvxpy/CVXcanon/python/CVXcanon.py",
line 247, in LinOp
swig_setmethods["data_ndim"] = _CVXcanon.LinOp_data_ndim_set
AttributeError: 'module' object has no attribute 'LinOp_data_ndim_set'
On cvxpy 0.4.11, import cvxpy works fine.
On Tue, Sep 19, 2017 at 1:23 AM, Steven Diamond <EMAIL_ADDRESS> wrote:
I'm adding an Xpress interface to the very soon to be released 1.0 branch.
I'm trying to get a license to test it but you guys could test it/fix it if
you like.
I second the bug report by @merraksh .
Here's a solution: install CVXcanon via git.
pip3 install --no-use-wheel --verbose -e git+https://github.com/cvxgrp/CVXcanon.git#egg=cvxcanon
Hi guys, I did a major clean up of CVXPY which included migrating all the pre-1.0 style solver interfaces to 1.0 style interfaces, in order to remove lots of deprecated code. I was not able to install the XPRESS solver so I couldn't test the new XPRESS interface. If you're still interested and able to test it (or help with installing XPRESS), I'd really appreciate. I had to simplify the interface in order to fit it into the new format.
Agent can do this and has a few other related features you might like as well.
Edit: Ignore link below. Here is the right app.
Agent Not only can you set when you are "asleep", but it will also quiet your phone in meetings (pulled from your calendar), remember where you parked (automatically when disconnecting from car's BT), and if you ask it to, it will even read your SMS aloud to you while you drive. It is not a free app, but one of my Must Haves!
Agent is a great app for Android
It enables you to allow certain people to call, set different quiet times per day, silence linked to your calendar, and so much more.
Agent, or Trigger.
From the same developer, actually.
(Didn't find those too useful as I have a Moto X 2013 myself)
Try Agent https://play.google.com/store/apps/details?id=com.tryagent or Trigger https://play.google.com/store/apps/details?id=com.jwsoft.nfcactionlauncher
Agent is easier to use; just tell it which calendars. It has lots of other settings too, like days of the week and time ranges (so it doesn't mute your tablet when you actually need to hear it).
Trigger requires a lot more steps to get to the solution, but it can do more.
I used to do this with an app called Agent. I haven't used it in a while but it worked when I used it. Agent
link me: Agent
edit: I don't know what's up with that bot but it's just agent. not airwatch agent
I use an app called Agent because Tasker is way over my head. I've tried to make sense of it several times over the years, but can never get even a single simple thingy to work. Anyway, Agent is dead simple.
Been using the Agent app for a couple years now. It's a terrific alternative. One downside is that it was free until last month. Still, I've gotten a lot of use out of it so I paid up.
The N6P can react to "OK Google" even when the phone is asleep for all of the basic Google Now tasks (setting timers/alarms, setting reminders, asking how old Tom Cruise is, etc.)
For everything else, Agent is what you're looking for. It's much simpler than Tasker, and almost exactly like the Motorola app that I remember from my brief stint with the Moto X. I found it shortly after switching from the X because I missed those features.
If Google Now doesn't work for you, Agent (https://play.google.com/store/apps/details?id=com.tryagent) is another option. Similar concept, but can be triggered with a disconnect from your car's Bluetooth instead of relying on activity detection.
You can pretty much do anything with Tasker. The only limit is your ability to use it or find a good tutorial. It is not very user friendly if you've never used it before.
I can think of two apps that will do some or what you are asking.
Both of these can read messages or any notifications out loud and you can set it up when bluetooth connects. I use Tasker to activate out loud to read notifications when I wear a certain Headset.
Neither of these let you talk back to them, or there might be a better app that does that, but if out loud reads me a text I just say "Ok google" right after its done to respond.
Out loud: https://play.google.com/store/apps/details?id=com.hillman.out_loud
Yep, Google Now. But if you want another app for some reason, Agent does this among other things.
This has perfect settings for this, including being able to text urgent to get through.
That linkme is wrong. Here it is https://play.google.com/store/apps/details?id=com.tryagent
Have you tried Agent which allows certain calls to get through to you at night.
Wow that was not the right answer at all lol
Hah, that wasn't the right app, here you go
this used to work once upon a time, though i'm not sure how it's doing these days
Maybe Agent or Trigger?
EDIT: Bot got it wrong.
This is the app I was referencing.
This is the app that she was using back then.
Agent might be worth checking out.
I use this and it works flawlessly.
Agent comes to mind.
Agent will do the trick
It's this one
Agent does exactly what you want and is really user friendly!
Try Agent - do not disturb & more
It's a week trial then paid.
Or the Android Auto app
Agent does that as well as a few other automatic situational things and it would be MUCH simpler than Tasker.
I would have never found that thanks. Looks like the bot is on a holiday break: https://play.google.com/store/apps/details?id=com.tryagent&hl=en .
I use Agent and it's been pretty good for my needs.
Can I assume you mean Agent? I'll try it out. Thanks!
openstack dns command doesn't work
Hello @gtema,
I'm afraid that the dns enhancement is defective. I believe something is wired up wrong :).
$ openstack dns zone list
'ZoneController' object is not callable
$ openstack dns zone show <zone-id>
'Client' object has no attribute 'find_zone'
$ openstack dns zone create --type private --router_id <router-id> test-zone
'Client' object has no attribute 'create_zone'
Which version do you use? I can not reproduce it currently.
I'm using this one:
openstackclient==4.0.0
openstacksdk==0.51.0
otcextensions==0.10.0
So I renewed the whole virtualenv to update otcextension to 0.10.1:
$ cat requirements.txt
ansible
otcextensions
openstackclient
pip install -r requirements.txt
Successfully installed
Successfully installed Babel-2.8.0 MarkupSafe-1.1.1 PrettyTable-0.7.2 PyYAML-5.3.1 WebOb-1.8.6 ansible-2.10.3 ansible-base-2.10.3 aodhclient-2.1.1 appdirs-1.4.4 attrs-20.3.0 certifi-2020.11.8 cffi-1.14.3 chardet-3.0.4 cliff-3.4.0 cmd2-1.3.11 colorama-0.4.4 cryptography-3.2.1 debtcollector-2.2.0 decorator-4.4.2 docker-4.3.1 dogpile.cache-1.0.2 fasteners-0.15 futurist-2.3.0 gnocchiclient-7.0.7 idna-2.10 iso8601-0.1.13 jinja2-2.11.2 jmespath-0.10.0 jsonpatch-1.26 jsonpointer-2.0 jsonschema-3.2.0 keystoneauth1-4.2.1 monotonic-1.5 msgpack-1.0.0 munch-2.5.0 murano-pkg-check-0.3.0 netaddr-0.8.0 netifaces-0.10.9 networkx-2.5 **openstackclient-4.0.0 openstacksdk-0.51.0** os-client-config-2.1.0 os-service-types-1.7.0 osc-lib-2.2.1 oslo.concurrency-4.3.1 oslo.config-8.3.2 oslo.context-3.1.1 oslo.i18n-5.0.1 oslo.log-4.4.0 oslo.serialization-4.0.1 oslo.utils-4.7.0 osprofiler-3.4.0 o**tcextensions-0.10.1** packaging-20.4 pbr-5.5.1 ply-3.11 pyOpenSSL-19.1.0 pycparser-2.20 pydot-1.4.1 pyinotify-0.9.6 pyparsing-2.4.7 pyperclip-1.8.1 pyrsistent-0.17.3 python-barbicanclient-5.0.1 python-cinderclient-7.2.0 python-congressclient-2.0.1 python-dateutil-2.8.1 python-designateclient-4.1.0 python-glanceclient-3.2.2 python-heatclient-2.2.1 python-ironic-inspector-client-4.4.0 python-ironicclient-4.4.0 python-keystoneclient-4.1.1 python-mistralclient-4.1.1 python-muranoclient-2.1.1 python-neutronclient-7.2.1 python-novaclient-17.2.1 python-octaviaclient-2.2.0 python-openstackclient-5.4.0 python-saharaclient-3.2.1 python-searchlightclient-2.1.1 python-senlinclient-2.1.1 python-swiftclient-3.10.1 python-troveclient-5.1.1 python-vitrageclient-4.1.1 python-watcherclient-3.1.1 python-zaqarclient-2.0.1 python-zunclient-4.1.1 pytz-2020.4 requests-2.24.0 requestsexceptions-1.4.0 rfc3986-1.4.0 semantic-version-2.8.5 simplejson-3.17.2 six-1.15.0 stevedore-3.2.2 ujson-4.0.1 urllib3-1.25.11 warlock-1.3.3 wcwidth-0.2.5 websocket-client-0.57.0 wrapt-1.12.1 yaql-1.1.3
but unfortunately:
$ openstack --version
openstack 5.4.0
$ openstack help | grep otc | grep dns
dns ptr record list List PTR records (otcextensions)
dns ptr record set Set PTR record (otcextensions)
dns ptr record show Show the PTR record details (otcextensions)
dns ptr record unset Delete (restore) PTR record (otcextensions)
dns recordset create Create recordset (otcextensions)
dns recordset delete Delete Recordset (otcextensions)
dns recordset list List recordsets. (otcextensions)
dns recordset set Update a Recordset (otcextensions)
dns recordset show Show the recordset details (otcextensions)
dns zone create Create zone (otcextensions)
dns zone delete Delete zone (otcextensions)
dns zone list List DNS zones (otcextensions)
dns zone nameserver list List DNS zone nameservers (otcextensions)
dns zone router add Associate router with a private zone (otcextensions)
dns zone router remove Disassociate router with a private zone (otcextensions)
dns zone set Update a Zone (otcextensions)
dns zone show Show the zone details (otcextensions)
$ openstack --debug dns zone list
'ZoneController' object is not callable
Traceback (most recent call last):
File "/home/.virtualenvs/ansible-openstack/lib/python3.8/site-packages/cliff/app.py", line 400, in run_subcommand
result = cmd.run(parsed_args)
File "/home/.virtualenvs/ansible-openstack/lib/python3.8/site-packages/osc_lib/command/command.py", line 39, in run
return super(Command, self).run(parsed_args)
File "/home/.virtualenvs/ansible-openstack/lib/python3.8/site-packages/cliff/display.py", line 117, in run
column_names, data = self.take_action(parsed_args)
File "/home/.virtualenvs/ansible-openstack/lib/python3.8/site-packages/otcextensions/osclient/dns/v2/zone.py", line 68, in take_action
data = client.zones(**query)
TypeError: 'ZoneController' object is not callable
clean_up ListZone: 'ZoneController' object is not callable
END return value: 1
hmm, still no problems on my side, but I see a few potential problems:
please replace "openstackclient" with "python-openstackclient" in your requirements.txt file. "openstackclient" installs lots of additional dependencies (plugins) for OSC that nobody on OTC really needs.
the conflict might be caused by a clash of otce and python-designateclient, since they use the same namespace/target (openstack.dns.v2). Even with it installed I do not have the problem, but this could explain it. Please try dropping it.
additionally you might want to check whether there are any entries for older versions in your "/home/.virtualenvs/ansible-openstack/lib/python3.8/site-packages/". Every OSC plugin brings its own "entry_points.txt" file which contains those commands. It might be that in your case designateclient somehow takes precedence regarding the client file, while the commands coming from OTCE fail due to the incompatibility. If e.g. some cache is still lying around from an old version you might have a problem.
If dropping python-designateclient helps, we will try to set up different environments to reproduce it on our side.
P.S. some colleagues are also seeing this; trying to get details
In addition there is ~/.cache/python-entrypoints/ that is keeping the cache
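The namespace-clash diagnosis above can be checked mechanically. Here is a small, hedged sketch (the function name is mine, not part of otcextensions) that scans installed distributions for entry points registered in the same group by more than one package — exactly the openstack.dns.v2 situation suspected here:

```python
from collections import defaultdict
from importlib.metadata import distributions  # stdlib since Python 3.8

def find_entry_point_clashes(group_prefix):
    """Return {(group, name): [distribution names]} for entry points
    registered by more than one installed distribution."""
    owners = defaultdict(list)
    for dist in distributions():
        for ep in dist.entry_points:
            if ep.group.startswith(group_prefix):
                owners[(ep.group, ep.name)].append(dist.metadata["Name"])
    # keep only entry points that more than one distribution registers
    return {key: dists for key, dists in owners.items() if len(dists) > 1}

# e.g. both python-designateclient and otcextensions register commands
# under the openstack.dns groups, which would show up here as a clash
print(find_entry_point_clashes("openstack.dns"))
```

Running this inside the affected virtualenv should print the clashing commands if both plugins are installed, and an empty dict after python-designateclient is dropped.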
Hi @gtema,
the conflict might be caused by a clash of otce and python-designateclient, since they use same namespace/target
(openstack.dns.v2). Even with it installed I do not have a problem, but this can explain that. Please try dropping it
thanks, the old python-designateclient package was the problem. After I changed the requirements to python-openstackclient all problems were gone because there is no python-designateclient anymore.
A subset of the Pre-PLCO Phase II Dataset from the SPORE/Early Detection Network/Prostate, Lung, Colon, and Ovarian Cancer Ovarian Validation Study. This data deals with epithelial ovarian cancer (EOC).
A data frame with 278 observations on the following 6 variables.
a factor with 3 levels of disease status, 1, 2, 3. The levels correspond to benign disease, early stage (I and II) and late stage (III and IV).
a binary vector containing the verification status. 1 or 0 indicates verified or non verified subject.
a copy of D.full with missing values. NA values correspond to non-verified subjects.
a numeric vector of biomarker CA125 (used as diagnostic test).
a numeric vector of biomarker CA153 (used as covariate).
a numeric vector containing the age of patients.
The Pre-PLCO datasets contain some demographic variables (Age, Race, etc.) and 59 markers measured by 4 sites (Harvard, FHCRC, MD Anderson, and Pittsburgh). Some biomarkers of interest are: CA125, CA153, CA19–9, CA72–4, Kallikrein 6 (KLK6), HE4 and Chitinase (YKL40). The original data set consists of control groups and three classes of EOC: benign disease, early stage (I and II) and late stage (III and IV). In the sub data set, the biomarkers CA125 and CA153 (measured at the Harvard laboratories), the age of patients, and the three classes of EOC are collected. In addition, the verification status and the missing disease status are also added.
The verification status V is generated by using the following selection process:
P(V = 1) = 0.05 + 0.35 I(CA125 > 0.87) + 0.25 I(CA153 > 0.3) + 0.35 I(Age > 45).
This process leads to 63.4% patients selected to undergo disease verification.
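The selection process above can be simulated as a sanity check. This is only a sketch under assumed marker distributions — the lognormal and uniform choices below are placeholders of mine, not values from the study — so the simulated fraction will only loosely resemble the reported 63.4%:

```python
import random

random.seed(0)
n = 278  # sample size of the sub-dataset
selected = 0
for _ in range(n):
    # hypothetical marker values; the real ones come from the Pre-PLCO subset
    ca125 = random.lognormvariate(0, 1)
    ca153 = random.lognormvariate(-1, 1)
    age = random.randint(30, 80)
    # P(V = 1) = 0.05 + 0.35 I(CA125 > 0.87) + 0.25 I(CA153 > 0.3) + 0.35 I(Age > 45)
    p = 0.05 + 0.35 * (ca125 > 0.87) + 0.25 * (ca153 > 0.3) + 0.35 * (age > 45)
    if random.random() < p:
        selected += 1
print(selected / n)  # fraction selected to undergo disease verification
```

Note that the formula bounds the selection probability between 0.05 and 1.0, so every subject has at least a 5% chance of verification.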
The missing disease status D is a copy of the full disease status D.full, but the values corresponding to V = 0 are deleted (referred to as
SPORE/EDRN/PRE-PLCO Ovarian Phase II Validation Study: https://edrn-labcas.jpl.nasa.gov/labcas-ui/c/index.html?collection_id=Pre-PLCO_Phase_II_Dataset.
"RequestAuthentication" doesnt get applied to ingressgateway in istio 1.6.2
@itsmurugappan
"RequestAuthentication" doesnt get applied to ingressgateway in istio 1.6.2. Same policy works when applied to service pod with side car enabled.
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "jwt-example"
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://authserver.com:443/oauth2"
    jwksUri: "http://authserver.com/oauth2/rest/security"
---
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "jwt-example"
  namespace: default
spec:
  selector:
    matchLabels:
      app: bookinfo
  jwtRules:
  - issuer: "https://authserver.com:443/oauth2"
    jwksUri: "http://authserver.com/oauth2/rest/security"
[ ] Security
Expected behavior
RequestAuthentication, when applied to the ingressgateway, won't validate the JWT token.
The same policy, when applied to a service pod with the sidecar, works fine and validates the JWT token.
Steps to reproduce the bug
Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
istio 1.6.2
How was Istio installed?
istioctl
Environment where bug was observed (cloud vendor, OS, etc)
RequestAuthentication needs to be accompanied by a AuthorizationPolicy. https://istio.io/latest/docs/reference/config/security/request_authentication/
thanks for the reply.
Heres the complete policy applied. authentication and authorization policy.
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: reqauthn-ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://oauthserverhost/oauth2"
    jwksUri: http://oauthserverhost/oauth2/rest/security
---
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: reqauthz-ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]
        hosts: ["example.com"]
        paths: ["/productpage"]
    when:
    - key: request.auth.claims[SSO_GROUPS]
      notValues: ["GRP_SP_SF", "*:GRP_SP_SF", "*:GRP_SP_SF:*", "GRP_SP_SF:*"]
The above policy works as expected in istio 1.5.8 when applied to the ingress gateway.
It also works as expected in istio 1.6.2 when applied at the application pod level; however, when applied to the ingress gateway in 1.6.2 it shows deny for all users, including ones who have GRP_SP_SF in the SSO_GROUPS claim.
thanks for the report, so from what you're seeing it looks like the RequestAuthN is not working correctly on ingress gateway, I will update once I reproduced this.
@yangminzhu
Yes, you are correct. thanks
@yangminzhu
Is there an update on this?
Maybe it is related to bug 25578 ?
Sorry for the late update, @r-kotagudem , I tested this in 1.6.5 and it just works as expected for me (I used a different token and issuer but the similar policies)
Would you mind trying again? Also a note on the value "*:GRP_SP_SF:*": you may expect it to match a string that contains :GRP_SP_SF:, but actually it will not work as you expected (instead the behavior is undefined; it could end up doing a suffix match of :GRP_SP_SF:*). Currently we only support prefix, suffix or existence matching with the wildcard character (*).
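To illustrate the supported matching modes, here is a hedged sketch of a claim condition (the values are illustrative, not from this thread) that stays within the documented prefix/suffix/existence wildcard forms:

```yaml
when:
- key: request.auth.claims[SSO_GROUPS]
  notValues:
  - "GRP_SP_SF"     # exact match
  - "GRP_SP_SF:*"   # prefix match
  - "*:GRP_SP_SF"   # suffix match
  # "*:GRP_SP_SF:*" (contains) is NOT supported; its behavior is undefined
```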
Closing given the above; we didn't see unexpected behavior.
Anyone running into this again, feel free to comment and reopen.
My header would be like x-auth-request-groups: 'CN=EMSUDEMY,OU=Groups,O=cco.xxx.com, CN=all-xxxx-people,OU=xxxx Groups,O=cco.xxxx.com, CN=xxxxxxxxx,OU=xxxx Groups,O=cco.xxxx.com'. Now I would like to match *all-xxxx-people*
The mechanism for picking guards in Tor suffers from security problems like guard fingerprinting and from performance issues. To address these issues, Hayes and Danezis proposed the use of guard sets, in which the Tor system groups all guards into sets, and each client picks one of these sets and uses its guards. Unfortunately, guard sets frequently need nodes added or they are broken up due to fluctuations in network bandwidth. In this paper, we first show that these breakups create opportunities for malicious guards to join many guard sets by merely tuning the bandwidth they make available to Tor, and this greatly increases the number of clients exposed to malicious guards. To address this problem, we propose a new method for forming guard sets based on Internet location. We construct a hierarchy that keeps clients and guards together more reliably and prevents guards from easily joining arbitrary guard sets. This approach also has the advantage of confining an attacker with access to limited locations on the Internet to a small number of guard sets. We simulate this guard set design using historical Tor data in the presence of both relay-level adversaries and network-level adversaries, and we find that our approach is good at confining the adversary into few guard sets, thus limiting the impact of attacks.
A. Barton and M. Wright. Denasa: Destination-naive asawareness in anonymous communications. In Proceedings on Privacy Enhancing Technologies, 2016.
A. Biryukov, I. Pustogarov, and R.-P. Weinmann. Trawling for Tor hidden services: Detection, measurement, deanonymization. In Proceedings of the 2013 IEEE Symposium on Security and Privacy, May 2013.
X. Dimitropoulos, D. Krioukov, M. Fomenkov, B. Huffaker, Y. Hyun, and kc claffy. AS relationships: Inference and validation. In CCR, 2007.
R. Dingledine and G. Kadianakis. One fast guard for life (or 9 months).
M. Edman and P. F. Syverson. AS-awareness in Tor path selection. In E. Al-Shaer, S. Jha, and A. D. Keromytis, editors, Proceedings of the 2009 ACM Conference on Computer and Communications Security, CCS 2009, pages 380–389. ACM, November 2009.
T. Elahi, K. Bauer, M. AlSabah, R. Dingledine, and I. Goldberg. Changing of the guards: A framework for understanding and improving entry guard selection in tor. In Proceedings of the 2012 ACM Workshop on Privacy in the Electronic Society, WPES ’12, pages 43–54, New York, NY, USA, 2012. ACM.
N. S. Evans, R. Dingledine, and C. Grothoff. A practical congestion attack on Tor using long paths. In USENIX Security, 2009.
L. Gao. On inferring autonomous system relationships in the Internet. ACM/IEEE Transactions on Networks (TON), 9(6), 2001.
J. Hayes and G. Danezis. Guard sets for onion routing. In Proceedings on Privacy Enhancing Technologies, 2015.
N. Hopper, E. Y. Vasserman, and E. Chan-Tin. How much anonymity does network latency leak? ACM Transactions on Information and System Security, 13(2), February 2010.
R. Jansen, K. Bauer, N. Hopper, and R. Dingledine. Methodically modeling the Tor network. In Proceedings of the USENIX Workshop on Cyber Security Experimentation and Test (CSET 2012), August 2012.
R. Jansen, J. Geddes, C. Wacek, M. Sherr, and P. Syverson. Never been KIST: Tor’s congestion management blossoms with kernel-informed socket transport. In 23rd USENIX Security Symposium (USENIX Security 14), pages 127–142, San Diego, CA, Aug. 2014. USENIX Association.
R. Jansen and N. Hopper. Shadow: Running tor in a box for accurate and efficient experimentation. In Proceedings of the 19th Symposium on Network and Distributed System Security (NDSS). Internet Society, February 2012.
A. Johnson, R. Jansen, A. Jaggard, J. Feigenbaum, and P. Syverson. Avoiding the man on the wire: Improving tor’s security with trust-aware path selection. In 24th Symposium on Network and Distributed System Security (NDSS 2017).
A. Johnson, C. Wacek, R. Jansen, M. Sherr, and P. Syverson. Users get routed: Traffic correlation on tor by realistic adversaries. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, CCS ’13, pages 337–348, New York, NY, USA, 2013. ACM.
Joshua Juen, Aaron Johnson, Anupam Das, Nikita Borisov, and Matthew Caesar. Defending tor from network adversaries: A case study of network path prediction. In Proceedings on Privacy Enhancing Technologies, 2015.
B. N. Levine, M. Reiter, C. Wang, and M. Wright. Timing analysis in low-latency mix systems. In Proc. Financial Cryptography (FC), Feb. 2004.
M. Luckie, B. Huffaker, k. claffy, A. Dhamdhere, and V. Giotsas. AS relationships, customer cones, and validation. In Internet Measurement Conference (IMC), pages 243–256, Oct 2013.
N. Mathewson and R. Dingledine. Practical traffic analysis: Extending and resisting statistical disclosure. In Proc. Privacy Enhancing Technologies workshop (PET), May 2004.
P. Mittal, A. Khurshid, J. Juen, M. Caesar, and N. Borisov. Stealthy traffic analysis of low-latency anonymous communication using throughput fingerprinting. In Proceedings of the 18th ACM conference on Computer and Communications Security (CCS 2011), October 2011.
S. J. Murdoch and G. Danezis. Low-cost traffic analysis of Tor. In Proceedings of the 2005 IEEE Symposium on Security and Privacy. IEEE CS, May 2005.
L. Overlier and P. Syverson. Locating hidden servers. In IEEE S&P, 2006.
J. Qiu and L. Gao. AS path inference by exploiting known AS paths. 2005.
Y. Sun, A. Edmundson, L. Vanbever, O. Li, J. Rexford, M. Chiang, and P. Mittal. RAPTOR: Routing attacks on privacy in Tor. In 24th USENIX Security Symposium (USENIX Security 15), pages 271–286, Washington, D.C., Aug. 2015. USENIX Association.
Fabulousnovel Fey Evolution Merchant webnovel – Chapter 434 handy precious -p1
Novel–Fey Evolution Merchant–Fey Evolution Merchant
Chapter 434 nod doll
Although Red-colored Drinking water Blood Snake was very rare, few character qi industry experts could be happy to agreement them. Just after looking after the crooks to come to be much stronger, the surface with their scales would have a layer of sturdy blood vessels-clotting toxin.
The Mom of Bloodbath clearly sounded critical.
Following listening to that, an unusual imagined suddenly flashed through his intellect!
“If he sprinkles these types of a substantial amount of realgar triple, I’m afraid these three little Red Normal water Blood stream Snakes will expire.”
Lin Yuan frowned. The realgar would type in through the Green H2o Blood flow Snakes’ scales within their bloodstream. They had been feys that existed on blood vessels, as well as the combination with the realgar into their blood might simply have a suppressive influence on other snake feys, but it surely was deadly towards the Reddish colored H2o Bloodstream Snake.
It will not be dangerous to the licensed contractor, but it would modify the other contracted feys. Therefore, the soul qi experts who contracted the Reddish colored Drinking water Our blood Snake couldn’t arrangement other types of feys.
Lin Yuan hurriedly needed a peppermint leaf from the Diamonds fey storage containers box, made use of his finger to whisk out its veggie juice, and put it on Genius’ nostrils.
Ahead of time every morning from the Indigo Azure Sea Market…
The seller on the three bloodstream-green snake-kinds feys hurriedly acquired the gemstone pan in the aspect and sprinkled the discolored powder in it around the feys.
“This affects the heart qi pros who obtain, promote, and market within the Indigo Azure Water Market and permit them to not odour any stink.
Translator: Atlas Studios Editor: Atlas Studios
Upon ability to hear that, a strange imagined suddenly flashed through his head!
npc town-building game anime
When the Mom of Bloodbath needed to bust right through to Myth III, it undoubtedly would require a large amount of blood stream qi power.
Just after playing Listen’s clarification, Lin Yuan noticed the fact that Indigo Azure City’s authorities have been really qualified.
Liu Jie reported that has a sigh, “He actually employed realgar to spread on these three Reddish colored Liquid Blood stream Snakes. These kinds of feys like them that rely on bloodstream energy to live are most frightened of realgar.
The Mom of Bloodbath wasn’t moving to execute the program of looking after three bad husbands while doing so, proper!?
Lin Yuan checked from the audio course and saw three blood-reddish snake-group feys kept in an steel cage. These were furiously hitting the metal cage with regards to their figures.
Naturally, this seller had not been a Creation Become an expert in and didn’t know much about these kinds of feys such as Reddish H2o Blood stream Snake.
If the Mommy of Bloodbath want to bust to Fairy tale III, it undoubtedly will need plenty of our blood qi vigor.
When Lin Yuan obtained previously offered Hu Quan the completely jade-textured divine materials, that very first batch had a long strip of bright white sandalwood that had been simply not big enough. Hu Quan could only do some chopsticks. Thus, this completely jade-textured bright white sandalwood have been put aside.
The vendor experienced probably just occured to trap them within the wild and was just over time for that Indigo Azure Sea Marketplace. Consequently, he offered for sale them for the Indigo Azure Sea Current market.
It might not be dangerous to the professional, but it surely would affect the other contracted feys. Hence, the mindset qi experts who contracted the Green H2o Bloodstream Snake couldn’t commitment other types of feys.
Lin Yuan didn’t respond to the Mother of Bloodbath’s interpretation as he noticed that. If these three Reddish Standard water Blood vessels Snakes obtained the hornless dragon bloodline, acquiring them has got to be great bargain.
Lin Yuan was using a set of moon-bright mindset qi garments that checked quite easy, with just one or two delicate layouts showcasing its uniqueness.
On the other hand, when conducting so, even though tone was loud, the iron cage was not broken in any way.
Below a closer inspection, he located this moon-bright heart qi apparel strung with most moon-white-colored rice beads. These rice beads were coiled jointly to create five kinds of fortune forms.
“A massive amount realgar powder can vaguely damage the Reddish colored Water Bloodstream Snakes’ beginnings in a moment.
“Lin Yuan, these three Red Liquid Bloodstream Snakes have a faint track down of hornless dragon bloodline. Promptly assist me buy them!”
Android distinguish between tap and double tap
I am using the onTouch method to catch a touch with ACTION_UP and a GestureDetector to capture the double tap. My issue is that a double tap results in a tap, then a double tap, then a tap. Is there a way to have a double tap block a tap, or something like that? I know that logically what it's doing is correct, so if you advise I find another way, just comment; please don't down vote. Thanks!
It's hard to tell what you might be doing wrong from just a description. Please show your code.
It's not a code issue. The way a tap works is an ACTION_DOWN and an ACTION_UP, so on a double tap you get two action downs and ups. It's working correctly, but I was wondering if there is another way to distinguish single and double taps that does not result in two taps and a double tap at the same time.
I would suggest that you switch to the SimpleOnGestureListener and use the onDoubleTap() and onSingleTapConfirmed() methods.
Thats what I was looking for, ok I will look into that now! Thanks! I looked though the docs cant believe I didn't see that!
To be precise, it's the GestureDetector that does all the work; a SimpleOnGestureListener is just an implementation of OnGestureListener that always returns false.
To be more precise than what britzl said, the GestureDetector does the actual work of determining when something is a single tap, double tap, long press, etc. The SimpleOnGestureListener is just a "listener" the GestureDetector uses to report what it recognized. It implements OnGestureListener and OnDoubleTapListener just to always return false. Check out a snippet from onTouchEvent(MotionEvent) in GestureDetector:
case MotionEvent.ACTION_DOWN:
if (mDoubleTapListener != null) {
boolean hadTapMessage = mHandler.hasMessages(TAP);
if (hadTapMessage) mHandler.removeMessages(TAP);
if ((mCurrentDownEvent != null) && (mPreviousUpEvent != null) && hadTapMessage &&
isConsideredDoubleTap(mCurrentDownEvent, mPreviousUpEvent, ev)) {
// This is a second tap
mIsDoubleTapping = true;
// Give a callback with the first tap of the double-tap
handled |= mDoubleTapListener.onDoubleTap(mCurrentDownEvent);
// Give a callback with down event of the double-tap
handled |= mDoubleTapListener.onDoubleTapEvent(ev);
} else {
// This is a first tap
mHandler.sendEmptyMessageDelayed(TAP, DOUBLE_TAP_TIMEOUT);
}
}
mDownFocusX = mLastFocusX = focusX;
mDownFocusY = mLastFocusY = focusY;
if (mCurrentDownEvent != null) {
mCurrentDownEvent.recycle();
}
mCurrentDownEvent = MotionEvent.obtain(ev);
mAlwaysInTapRegion = true;
mAlwaysInBiggerTapRegion = true;
mStillDown = true;
mInLongPress = false;
mDeferConfirmSingleTap = false;
if (mIsLongpressEnabled) {
mHandler.removeMessages(LONG_PRESS);
mHandler.sendEmptyMessageAtTime(LONG_PRESS, mCurrentDownEvent.getDownTime()
+ TAP_TIMEOUT + LONGPRESS_TIMEOUT);
}
mHandler.sendEmptyMessageAtTime(SHOW_PRESS, mCurrentDownEvent.getDownTime() + TAP_TIMEOUT);
handled |= mListener.onDown(ev);
break;
Then the desired result can be obtained by creating a GestureDetector with the appropriate listener:
final View.OnTouchListener touch_listener = new View.OnTouchListener() {
@Override public boolean onTouch(View view, MotionEvent event) {
return _gesture_detector.onTouchEvent(event);
}
private final GestureDetector _gesture_detector = new GestureDetector(getContext()
, new GestureDetector.SimpleOnGestureListener() {
@Override public boolean onSingleTapConfirmed(MotionEvent event) {
// TODO: implement single tap behavior
// NOTE: returning true indicates that the gesture was handled
return true;
}
@Override public boolean onDoubleTap(MotionEvent event) {
// TODO: implement double tap behavior
// NOTE: returning true indicates that the gesture was handled
return true;
}
});
};
And from there, this OnTouchListener can be set to the View that wants the behavior.
It works by using the default GestureHandler (which is a Handler):
private class GestureHandler extends Handler {
GestureHandler() {
super();
}
GestureHandler(Handler handler) {
super(handler.getLooper());
}
@Override
public void handleMessage(Message msg) {
switch (msg.what) {
case SHOW_PRESS:
mListener.onShowPress(mCurrentDownEvent);
break;
case LONG_PRESS:
dispatchLongPress();
break;
case TAP:
// If the user's finger is still down, do not count it as a tap
if (mDoubleTapListener != null) {
if (!mStillDown) {
mDoubleTapListener.onSingleTapConfirmed(mCurrentDownEvent);
} else {
mDeferConfirmSingleTap = true;
}
}
break;
default:
throw new RuntimeException("Unknown message " + msg); //never
}
}
}
Recall the line mHandler.sendEmptyMessageDelayed(TAP, DOUBLE_TAP_TIMEOUT); from the GestureDetector. It delays notification of the tap by the timeout period for a valid double-tap gesture, and the line if (hadTapMessage) mHandler.removeMessages(TAP); removes that notification when a valid double tap occurs. The GestureHandler receives the tap notification after the delay and uses the callback mDoubleTapListener.onSingleTapConfirmed(mCurrentDownEvent); to notify the GestureListener. If the user's finger is still down when the tap notification arrives, the GestureHandler defers that callback until MotionEvent.ACTION_UP (also handled by the GestureDetector).
Besides what britzl suggests, you may want to reconsider your logic for a second.
I don't think a double tap results in multiple taps; it simply results in 4 events, as you somewhat mention. While the gesture libraries are (I think) the best choice, you could also do it manually:
Store the MotionEvent's timestamp (it has a method for this) on ACTION_UP, then compare it with the timestamp of the next ACTION_UP. Given a timeout, you will know whether it was a single tap or a double tap.
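The timestamp comparison described above can be sketched in plain Java. This is a minimal illustration, not Android code: TapClassifier and the 300 ms constant are made-up names; on Android you would use ViewConfiguration.getDoubleTapTimeout() and the event's getEventTime():

```java
// Minimal sketch of manual double-tap detection via ACTION_UP timestamps.
// TapClassifier and the 300 ms timeout are illustrative, not Android APIs.
public class TapClassifier {
    static final long DOUBLE_TAP_TIMEOUT_MS = 300;
    private long lastUpTime = -1;

    /** Call on each ACTION_UP; returns true when this up completes a double tap. */
    public boolean onUp(long eventTimeMs) {
        boolean isDouble = lastUpTime >= 0
                && eventTimeMs - lastUpTime <= DOUBLE_TAP_TIMEOUT_MS;
        // After a double tap, reset so a third quick tap starts a fresh sequence.
        lastUpTime = isDouble ? -1 : eventTimeMs;
        return isDouble;
    }
}
```

Note that, just like the GestureDetector, this cannot confirm a *single* tap until the timeout has elapsed without a second up event.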
That is what the gesture listeners do
|
STACK_EXCHANGE
|
How to Eject a Stuck CD or DVD from an iMac or Apple Macbook Computer - Comprehensive Tips and Tricks to Remove a Disc
I love my iMac. I was an instant fan when my PC gave me the fatal blue screen of death and required a complete operating system installation - AGAIN! For me, the Mac just works. Each and every time. It is as reliable as Old Faithful!
Well, except for the day I could not eject a DVD. I finished watching a movie, hit the Eject button and nothing happened. Weird, I thought. I rebooted the computer and tried it again, but it still failed. I ran to find a paper clip and stick it in the manual eject slot only to find out that there isn't one on the slot-loading iMacs. Urgh!
After some research, I found several ways to manually eject a stuck disk. Listed below is a comprehensive list of options using simple techniques; and if those fail, I included a few videos showing how folks were able to physically remove the CD.
Push the Eject Button - Odds are that if you are reading this, you have already tried this one. You can also hold down the eject button for a few seconds to see if that will work.
Many times people simply push the key just like any other key on the keyboard. That will work often, but try to hold it down for one to two seconds first.
Right Click the icon on the desktop and choose the Eject option from the menu.
Drag the Disk Icon on the screen to the trash can. Sometimes, this will do the trick.
Select the Disk Icon on the desktop, hold down the Command key and type the letter "E."
Instead of holding down the mouse button during reboot, hold down the eject key. This can work just as well!
Turn off the computer. As you are turning the computer back on, hold down the left mouse button. The computer should eject the CD/DVD early in the process.
From the Terminal (look at the picture to the right to see how to open it), type in:
drutil tray eject
And then hit "Return."
From Disk Utilities, highlight the CD/DVD drive and choose Eject at the top of the screen. It may or may not work if you select the disk instead of the drive itself.
This option works particularly well for disks that your computer doesn't recognize at all.
Click on the picture to the right to see how it works.
Use a software program called Disk Eject. It is free, but the author welcomes donations.
Use Tweezers or Cardboard
Well, if all of the previous options failed, you are probably stuck with physically removing the disk.
The next series of techniques require touching the disk itself and any time you do that, you run the risk of scraping it. Of course, it does you no good if the disk is stuck in the computer, right?
The Two-Credit Card Trick
Leverage Using a Screwdriver
Don't use non-standard disks
Apple Computer recommends that you do not use odd-shaped disks in a slot-loading CD/DVD player. It is usually fine to use those disks in a tray-loader, but be wary of the slot-loading drives on most of the Macintosh lineup.
|
OPCFW_CODE
|
Holly Boothroyd from Computing and Information Technology was named FEPS Placement Student of the Year. She tells us about her year at Microsoft.
Rewind to the start of second year and placement was the main thing on my mind. It would be my first “real” job, thus I had many questions on how to best present myself during the application process. I was a bit lost, but I did have a goal to drive this process: work as a Software Engineer at Microsoft.
Microsoft has always been my dream company. I am inspired and motivated by Microsoft’s altruism in all matters. They create impact globally and put mechanisms in place to provide growth opportunities for their community and employees. However, Microsoft was never going to be an easy place to join. International recognition meant thousands of people would be applying for only a handful of software engineering placement roles. My chances were small, but I knew I would never get the job if I never applied. To be as prepared as possible, I immediately went to the Employability and Careers centre to review my CV, application questions, and attended every careers event that covered telephone, video, and in-person/assessment centre interviews. I wanted to be equipped to take on any challenge that would lead me to my goal. Microsoft was the first company I applied for and was the last company I heard from.
In the meantime, I looked elsewhere. It is important to remember that you are interviewing the company just as much as they are interviewing you. They need to win you over and provide you with the opportunities to succeed in a method that suits you.
Like many developers, video games triggered my interest in programming. I love the idea of playing a game with my friends that I made. To pursue that dream, I researched different games companies in England and specifically around Guildford. There are quite a few! However, many did not offer placements or were not development offices. Despite this, I delivered a CV and personalised cover letter to each of the main game companies in Guildford and emailed those abroad. It did not amount to much, but the experience was worthwhile. Dreams do not come easy and each step is a step in the right direction.
I managed to progress to the final stages of my Microsoft application. The process included an initial CV and application question screening, a situational assessment, video interview, telephone interview, and finally an assessment centre. At each stage I was very thankful to have attended the careers service workshops. I felt more prepared which made me more relaxed and personable with the interviewers. Additionally, interviews with other companies gave me the experience of interviewing.
The day I had been waiting for
Then the day came. After my assessment centre, I received an email from Microsoft that said “Congratulations! You’ve been offered a position as a software engineer”. I was absolutely ecstatic! I couldn’t believe that my first job would be at my dream company as a software engineer.
When my start date finally came around, I was so nervous. I found out that I would be working on Paint 3D the day I started! The first couple days were lots of fun. We met the leaders from each team, demoed Paint 3D, and got to try out HoloLens, Microsoft’s holographic headset.
I was assigned a mentor to look out for me, help me when I needed it, and teach me how to be successful at Microsoft. He was also my manager who I had performance reviews and weekly one-to-one meetings with.
Over the next twelve months, I built up skills in new technologies and languages. I was given and volunteered for critical path and high-pressure tasks to establish myself not as an intern, but as a core member of the team. However, this is not to say that I did not experience a bit of imposter syndrome. It is only natural when you’re surrounded by the top developers with years of experience at Microsoft and in the broader industry. It became clear that I had a lot to learn. However, as the year went on, I became more independent and started to be the “owner” of different feature sets. I finished the year feeling confident in my abilities and proud of what I had achieved.
At the end of my placement, I interviewed and received an internship offer on the Xbox team at Microsoft HQ in Redmond, Washington in America for the summer. It was phenomenal, and I only had positive experiences.
So, what have I learnt on placement?
It’s all about a growth mindset. This is a Microsoft company pillar to success. It sounds cheesy but has real substance and changed the way I think about my work. I started my placement worried that I wouldn’t be able to perform to Microsoft’s standards. I didn’t want to ask too many questions to hide that I didn’t know something. This is obviously ridiculous because the point of being an intern is to learn. To embrace a growth mindset, I shifted the way I looked at asking questions. I realized without asking questions I wouldn’t learn the answers and continue to be confused. I needed to put this silly worry behind to learn. In doing so, I have learnt enormous amounts over the 15 months. The problems I face now aren’t as difficult and I have the confidence to keep pushing towards an answer knowing that I have proved to myself that I can do it. Ask questions, learn, grow, and you’re forever smarter.
Prioritisation and flexibility is key. While at university, I have a set number of assignments due on a specific date and time. While there were deadlines at work, there was a lot more flexibility about when things are due, but also fundamentally what work needed to be done. This means that I worked on something one day and had to put it to the side temporarily to do something more important the next. The ability to adapt to a changing schedule and prioritise multiple pieces of work to complete them all on-time and to a satisfactory quality is a skill I used daily.
Workplace preferences. I have shifted my workplace environment preference. I originally wanted an office thinking that it was a sign of success and that I would want my own space to work. At my first internship it was an open plan office that encouraged collaboration with different teams. At the internship with Xbox, the office was still fairly open plan, but there are offices around and we are separated into “pods” or large cubicles for sub-teams. It is a different experience that has given me the opportunity to compare different workplace environments.
Plenty of subject-specific knowledge. Wowza! I learnt loads about programming and software engineering. I learnt three new languages and learnt advanced techniques of one I had learnt previously in university. On top of this, I gained much more experience with the development environment Visual Studio, learned how to develop a Universal Windows Platform (UWP) application, and applied a new engineering design pattern (MVVM). My manager and team were amazing teachers. They loved to help me when I needed it and push me to be a better developer. The knowledge I learnt was applicable to my daily work and will always be useful to build upon when technology inevitably changes. I got practical experience that will stay with me longer than if I had just read it in a textbook. I feel much more confident entering my final year and ready to tackle my final year project with this experience.
Reaffirmed my beliefs that you get out as much as you put in. Over the last 15 months, I worked hard to get involved in my workplace and the industry. My efforts to try new things, be involved, and stand out gave me the opportunity for some pretty cool experiences.
For those seeking placement opportunities, do not give up. You only need one company to send you that acceptance letter. Utilise the resources within your department and especially the careers service. If you are unsure as to what you would like to do, apply broadly and see what openings present themselves or speak to the careers service or your department for advice. Placement year is a great opportunity to explore careers and find out first-hand what you like and what you don’t, so when it comes to looking for that first graduate job, you know what to look for. There are many avenues to reaching your dream and a whole support system at the university to help you reach them.
|
OPCFW_CODE
|
If you plant deer resistant plants with plants that deer like, will the deer resistance work?
Is there a "bubble" created by plants that are deer resistant to protect deer prone plants from being attacked?
short answer, no, not unless there's 6 foot deer fence all the way round
6ft? Sorry @Bamboo but 6 ft isn't enough. If a deer wants in, it will jump that with little problem. The only true deer barriers are double walled fences where the deer can see both fences and is unable to negotiate a jump over both of them. They'll still sometimes try and get caught between the fences.
White tailed deer can jump eight feet for sure, that's true, but that assumes the fencing is upright see here (just for interest's sake) http://pss.uvm.edu/ppp/articles/deerfences.html
No. Deer resistance is based on taste, not smell. You are confusing mammal pests with insect pests. If you look at the odor-based deer repellents they are super strong, and sometimes don't work.
"Deer resistant" plants are plants the deer do not prefer, but they will still eat them if it's a harsh winter and there is no other food. We've had deer eat hosta and daffodils, which are supposed to be deer resistant.
Now thorny plants will be even more deer resistant.
The harshest time of the year is the spring before many shoots and berries are available. So that's when the deer are the hungriest.
If you value your plants, put hot pepper powder on them and replace after every rain. Mammals can taste the hot pepper juice.
But your best protection is a 6 foot high fence. But even then SOME occasional desperate/stupid deer have been known to jump those and get trapped in the garden. lol.
I completely agree with bulrush. I will add something else to try, though. I saw a video of a guy on youtube who put in stakes and ran a length of 12lb or 15lb test around his garden at chest high on a deer. He said it was light enough that they couldn't see it, but heavy enough for them to feel before it snapped. It "grabbed" their chest and freaked them out. He showed deer tracks all around the garden, but no damage in the garden. That's all he used. It would look better than deer fencing. I think it would be worth a try for you.
I have a field of grass around the yard with the largest population of deer in the area that the DNR actually has thinning hunts for. When I planted an apple tree the next morning the deer ate off the top.
Hosta is a preferred deer delectable on Long Island.
I have three solutions:
thick plants that physically keep out deer.
Thorny plants (maybe a climbing rose)
Enclosure (possibly with wood) or wire mesh (this could be small and enclose also a single plant).
And possibly keep some ground for them, so that they will stop where you care less.
|
STACK_EXCHANGE
|
A software architect is a software expert who makes high-level design choices and dictates technical standards, including software coding standards, tools, and platforms. The leading expert is referred to as the chief architect.
The software architect concept began to take hold when object-oriented programming or OOP, was coming into more widespread use (in the late 1990s and early years of the 21st century). OOP allowed ever-larger and more complex applications to be built, which in turn required increased high-level application and system oversight.
The role of software architect generally has certain common traits:
Architects make high-level design choices much more often than low-level choices. In addition, the architect may sometimes dictate technical standards, including coding standards, tools, or platforms.
Software architects may also be engaged in the design of the architecture of the hardware environment, or may focus entirely on the design methodology of the code.
Architects can use various software architectural models that specialize in communicating architecture.
The enterprise architect handles the interaction between the business and IT sides of an organization and is principally involved with determining the AS-IS and TO-BE states from a business and IT process perspective. Many organizations are bundling the software architect duties within the role of enterprise architecture. This is primarily done as an effort to "up-sell" the role of a software architect and/or to merge two disparate business-related disciplines to avoid overhead.
An application architect works with a single software application.
Other similar titles in use, but without consensus on their exact meaning, include:
- Solution architect, which may refer to a person directly involved in advancing a particular business solution needing interactions between multiple applications. May also refer to an application architect.
- System architect (singular), which is often used as a synonym for application architect. However, if one subscribes to Systems theory and the idea that an enterprise can be a system, then System Architect could also mean Enterprise Architect.
- Systems architect (plural), which is often used as a synonym for enterprise architect or solution architect.
The table below indicates many of the differences between various kinds of software architects:
| Architect type | Strategic thinking | System interactions | Communication | Design |
| --- | --- | --- | --- | --- |
| enterprise architect | across projects | highly abstracted | across organization | minimal, high level |
| solutions architect | focused on solution | very detailed | multiple teams | detailed |
| application architect | component re-use, maintainability | centered on single application | single project | very detailed |
In the software industry, as the table above suggests, the various versions of architect do not always have the same goals.
- Systems architecture / systems architect
- Software architectural model
- Software architecture
- Hardware architecture / hardware architect
- Systems engineering / systems engineer
- Software engineering / software engineer
- Requirements analysis / requirements engineer
- Systems design
- Electrical engineering
- Electronics engineering
|
OPCFW_CODE
|
As most of you might know, when you update an application which is already in your ROM, the application is not really updated in ROM (as it is supposed to be Read-Only Memory) but newly installed among the user applications. This causes some people like me to manually move the updated APK from user to system applications, or to request "updated packages" to flash through recovery.
There are at least 2 reasons to that need:
- We do not have a lot of memory in our devices, so why waste space with a duplicated APK?
- People using Samdroid's Apps2SD might want some applications to be in ROM instead of on SD because they load faster.
With this problem in mind, I decided to develop an application that would let you select the applications that should automatically be moved from user memory (/data/app/) to System Memory (/system/app/).
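The move itself boils down to a copy plus a permission fix. Here is a minimal, hypothetical sketch of that step (the function name, arguments, and the remount comment are illustrative; this is not the app's actual init.d script):

```shell
#!/bin/sh
# Hypothetical sketch of the "move to system" step an init.d script could run.
# On a real device data_app=/data/app and system_app=/system/app, and /system
# must first be remounted read-write:  mount -o remount,rw /system
move_to_system() {
  apk="$1"; data_app="$2"; system_app="$3"
  cp "$data_app/$apk" "$system_app/$apk" || return 1
  chmod 644 "$system_app/$apk"   # system APKs are world-readable
  rm "$data_app/$apk"            # drop the duplicate user copy
}
```

A real script would also need to handle the Dalvik-cache duplicates and remount /system read-only afterwards.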
User interface description:
On top you have:
- Your Rom free space and estimated space after reboot (AR)
- Your User free space and estimated space after reboot (AR) (This is not accurate if you use Apps2SD)
After loading, the application shows you the list of all your applications.
On the right side, you will see 2 lights and a toggle button.
- The first light indicates if the application is in system memory (green = exists, red = not exists)
- The second light indicates if the application is in user memory (green = exists, red = not exists, yellow = will be moved on next reboot).
- A toggle button that activates auto-move of the application.
If you click on an application icon, you will have some information about the application.
Select the application you wish to automatically move, save the configuration and enable Apps2ROM on the bottom.
Use your phone normally, next time you restart your phone, applications will be moved.
In some cases, mapping between system application and user application might not be done automatically. In that case, a popup will prompt you to map it by manually selecting a system APK. If you made a mistake selecting the wrong APK, you can unmap it through the application information popup window.
- You need to be root to enable the automatic app move.
- I tested it on my Spica on Android 2.2 with and without SamdroidTool's Apps2SD.
- Haven't tested it with Link2SD (might work or not, I'll be glad to have some feedback about it)
- Always have a look at the estimated ROM free space After Reboot (AR) and never let it drop too low, as a future application update might be bigger than the currently installed one. I don't know how the ROM behaves when it has too little space!
- If for any reason you wish to uninstall the application, don't forget to disable it before! (bottom right toggle button).
Anyway, as always, BACKUP YOUR SYSTEM in case you accidentally overwrite the wrong application (this should only happen if you manually mapped the wrong APK).
Feel free to give me some feedback, I hope you enjoy it
Version 1.0.0 is now on Android Market
Download link: Download on Market
If updating from any version before 1.x.x, please READ THIS:
Before installing the final version, you have to disable Apps2ROM and uninstall it, as I changed the package name (or you will have 2 versions of Apps2ROM in your system and it might not disable itself properly). Doing so, you will lose all your previous selections.
If you wish to back up your current Apps2ROM configuration, after disabling Apps2ROM but before uninstalling it, do the following:
- copy /data/data/com.lrfrog.apps2rom/files/userSystemAppInfo.xml to your sdcard
- uninstall apps2rom
- install new version from market
- copy userSystemAppInfo.xml from your sdcard to /data/data/com.lrenault.tools.apps2rom/files/userSystemAppInfo.xml
- Launch new version and you should have your configuration back
- Enable Apps2ROM
If the directory /data/data/com.lrenault.tools.apps2rom/ does not exist, launch Apps2ROM and close it. The directory structure will be created.
Updated to version 1.2.3 on market
Change log for 1.2.3
- Fixed crash while exiting application.
- Optimized startup app loading.
- Added cleaning of Dalvik-cache duplicates after moving an application.
Change log for 1.2.0
- New "Move now" button
- New fully automatic system app detection
- Added version info for both user and system app
- Improved compatibility
- Improved error detection and management
Change log for 1.1.3
- Compatible with more devices
- Less "fake enabled"
People having problems with previous version should Disable, save and then re-enable after updating to newest version. Most problems should be fixed.
Change log for 1.1.2
- Added sort by selection
- Fixed Enable button not working on some devices (/system/etc/init.d/ does not always exists so it is created if not found)
Change log for 1.1.1
- Bug fix: on some devices Apps2ROM might not be enabled properly (should be rare).
- Added control: configuration must be saved at least once before enabling Apps2ROM
Change log for 1.1.0
- Added sort menu. (sorted by name, duplicates, system or user applications).
- Added save on exit option in preference menu.
- Added version number in application information popup.
- Default minimum free space in ROM set to 10MB.
- Added translations (French and Italian)
- Minor code change on Enable button.
Change log for 1.0.0:
- Added control to prevent user to fill all ROM memory
- New preference menu that lets you manually override the value of minimum free space to leave in ROM
- Better management of screen orientation change
- Fix bug on enable button: if you didn't granted Super user permissions it let you think that it's activated but it wasn't (can't be done without ROOT).
- Android (And Cyanogen6+ ?) "Move application to SD" detection (it will prompt you to manually move back application to internal memory if you want Apps2ROM to move them in your ROM).
Change log for 0.9.4:
- Fixed not moving selected applications when Apps2ROM is Enabled or remaining active when disabled
Change log for 0.9.3:
- Removed Busybox call
Change log for 0.9.2:
- Fixed some interface alignments
- Replaced Toggle buttons by Check boxes in application list (should be less confusing)
- Added Help menu with info and imaged tutorial
|
OPCFW_CODE
|
The IMesh Toolkit
Annotation enables users of subject gateways to make and read digital comments and guidance in connection with a gateway's original resources or, at one remove, the gateway's descriptions of those resources. By their nature, annotations do not have to be embedded alongside or stored with the base documents, a considerable advantage over their non-digital counterparts. Initial work concentrated on functionality in annotation applications across a variety of contexts, including Virtual Learning Environments (VLEs), where a multiplicity of users would have different uses and expectations of an annotation system. In response to a move towards co-operation with the RDN, work then turned to how an annotation system might work in conjunction with an existing resource discovery service, without requiring the latter to make any major changes in its operation in order to provide a Web-based annotation capability.
As a means to providing a design which might readily be accessed by developers working in a variety of programming languages, the UML (Unified Modelling Language) design approach was adopted and a set of use cases pertaining to a straight-forward set of functionality was produced. These cases were validated using a further UML method and subsequently developed into data which could be used by developers. These initial use cases provided the basis upon which work could be carried forward on considering the viability of an extension to functionality which could prove useful to gateway developers, namely automated annotation moderation.
The design of a basic annotation system is described in user view terms and in a detailed design. There is also an extension to the basic design.
The approach to the design is discussed.
The documents containing the UML use case and sequence diagrams and screen prototyping are also available.
The annotation design was followed by an implementation phase. The software developed in this phase is available for download.
Annotation Article in Ariadne
Functionality in Digital Annotation: Imitating and supporting
real-world annotation looks at both pre-digital and digital concepts of annotation, with a view to how annotation tools might be used in the subject-gateway environment.
Some Annotation Links
Annotator from Annotation Technology, USC Brain Project http://www-hbp.usc.edu/Projects/annotati.htm
Annotea Project and Annotations in Amaya http://www.w3.org/2001/Annotea/
Denoue, L, Vignollet, L. "An annotation tool for Web browsers and its applications to information retrieval" (2000) http://citeseer.nj.nec.com/denoue00annotation.html
Marshall, C. "Annotation: from paper books to the digital library" in Proceedings of the ACM Digital Libraries '97 Conference, Philadelphia, PA (July 23-26, 1997). http://www.csdl.tamu.edu/~marshall/dl97.pdf
Wilensky, R. "Digital Library Resources as a Basis for Collaborative Work" JASIS Volume 51, No. 3, February, 2000 Robert http://citeseer.nj.nec.com/wilensky00digital.html
|
OPCFW_CODE
|
Should text ever be focusable for accessibility? I'm specifically thinking about key-value pairs
Regarding making something like the following accessible, currently I have the following:
<div tabindex="0" aria-labelledby="key1" aria-describedby="value1">
<label id="key1">Current User:</label>
<span id="value1">BBRENNAN</span>
</div>
Is it necessary for this block to be focusable with tabindex="0"? Or can screen readers infer this relationship more naturally? I understand screen readers can usually find and read text, but it's not clear to me how to ensure that I convey the relationship between Current User and BBRENNAN.
EDIT:
I was just looking into definition lists, which seem closer to what I need. You could also perhaps argue that this is tabular data, and should use a table. Totally fine if these solutions are indeed the best practices, but one thing I like about aria-labelledby and aria-describedby is that focusing the outer div reads the whole thing nicely in NVDA. So the above would read as "Current User, BBRENNAN."
Definition lists for some reason read as "List with 2 items, current user."
Tables just let me use arrow keys to move the reader between cells, which also doesn't achieve what I describe above. Link for a 5 year old thread on this very topic: https://webaim.org/discussion/mail_thread?thread=7089
The general rule is only interactive elements should be tabbable. So unless your user list item is clickable, then you should remove the tabindex. Having too many things tabbable can make navigating your site unnecessarily difficult. Here's a guide that has some good recommendations for keyboard navigation:
https://webaim.org/techniques/keyboard/
Sighted mouse users are able to visually scan a web page and directly click on any item. Keyboard users must press the Tab key or other navigation keys to navigate through the interactive elements that precede the item the user wants to activate. Tabbing through lengthy navigation may be particularly demanding for users with motor disabilities.
Thanks! This makes sense to me, but I'm still left wondering about how to communicate this key-value relationship. Any suggestions for best practices there?
Hmm, not sure, but maybe check out this question here: https://stackoverflow.com/questions/29305534/proper-aria-attribute-for-field-value-pairs-outside-of-a-table
Web accessibility is about more than people with visual disabilities that have to rely on screen readers like NVDA.
Web accessibility should encompass all disabilities that affect access to the Web and there are a lot of possible disabilities – and having one disability does not necessarily mean a person has no other disabilities.
Sadly, screen readers do not always behave the same way (as this example on required input fields shows and as it is mentioned in the email list you linked) and some browsers work better with certain screen readers (here and here is more information about screen readers and what you should keep in mind).
Even though the definition list (<dl>) does not seem to work properly, using <label> is not the proper way to do this, as labels are intended for labeling input fields.
Text does not need to be focusable as frodo2975 already answered correctly "[t]he general rule is [that] only interactive elements should be tabbable". However, that does not necessarily mean it should be clickable – interactive can also mean scrollable, for example.
I think the best solution would be to simply use a generic element like a <div> or a <span>:
<div>Current user: <span class="current-user">BBRENNAN</span></div>
Separating the actual user name in its own <span> is not necessary but would allow you to easily identify the element using JavaScript or CSS, if needed.
However this really depends on the concrete use case: What is the intended use of this? Is it a hint for users so they know with which user they are currently logged in? Or something like a marker that tells users who is currently working on a task or something like that?
Update:
Regarding your comment that you want a reusable component for displaying "key value pairs, usually in a row" I would suggest using a <table>:
<table>
<tr>
<th>Current user:</th>
<td>BBRENNAN</td>
</tr>
</table>
It is the natural choice, as tables are intended to represent data.
To comment on the use-case, this is for a reusable component that displays some high level key value pairs, usually in a row. Most usages have no more than 2 or 3 such "data wells," which is part of why I want them to be highly discoverable.
okay, I thought it was just this one element. How about a <table>? It can have <th> elements describing labels like "Current user" and <td> elements for the data. If you are displaying data, tables are the natural choice, as representing data is what they're supposed to do.
|
STACK_EXCHANGE
|
Kernel-Log: New stable and developer kernel, Mesa 7.1 and X-Server 1.5 released.
The Linux stable series managers have released kernel versions 184.108.40.206, 220.127.116.11 and 18.104.22.168, bringing numerous fixes and improvements over their preceding versions in the 2.6.25 and 2.6.26 series. 22.214.171.124 was released to fix a problem that was only introduced in 126.96.36.199. Whether the new versions fix any security problems was not disclosed, but the releases did carry the note "Any users of the 2.6.25/2.6.26 kernel series should upgrade to this version.", addressed to those who compile their own kernel rather than those who receive their kernel from their Linux distributor.
Willy Tarreau has published 188.8.131.52, with minor fixes and improvements to the now quite old 2.4 series. In the email announcing the release, Tarreau happily gives more information than the 2.6 stable series managers do, and refers to security fixes for CVE-2008-2826 and CVE-2008-3525. Tarreau also announced 2.4.37-rc1, the start of development on 2.4.37. In the announcement of this preliminary version, Tarreau details the new features, including improved storage drivers that allow the 2.4 kernel to work with the PATA and SATA controllers in some newer chipsets.
The development of 2.6.27 moves forward; currently still at 2.6.27-rc5, a sixth release candidate is expected soon. It has been suggested that 2.6.27 will be released in late September or early October, but Torvalds has not set a date; the current list of issues introduced since 2.6.26 still includes 33 unresolved problems.
In the middle of August, Torvalds had asked developers to concentrate on fixing bugs after the end of the merge window rather than sending in more patches for integration, which could introduce more problems. Torvalds has highlighted this in recent weeks in his responses to other requests for patch integration (1,2,3) and reprimanded the respective maintainers. Some kernel developers queried whether a new policy was in place; Torvalds finally explained that the current policy had been in place for some time but in practice had not been strictly enforced. He explained in further mails (1,2) what he thought the policy should be.
After Mesa's developers published Mesa 7.1 at the end of August, X.org's developers were able to release X-Server 1.5; depending on your point of view, it's either six months or a year late. The new X server is a central part of X.org 7.4 which is expected in the next week.
X.org 7.4 drivers have also been updated by package administrators; the Intel driver is at version 2.4.2 and the X driver for VMWare guests is at version 10.16.5. Version 2.1.11 of the open source Nvidia driver, nv added support for some new NVidia GPUs, but contained a bug which was then fixed in version 2.1.12. The proprietary AMD Catalyst drivers, known as fglrx, are not compatible with X-Server 1.5, along with drivers for older Nvidia graphics cards with version numbers 71.86.xx and 96.43.xx. This problem was noticed in May by users of Fedora 9, which uses a preliminary version of X-Server 1.5.
Intel developer Keith Packard has taken on the role of release manager for X Server 1.6. After the completion of X Server 1.5 was long delayed, partly by Mesa, he is already planning to publish 1.6 in a few months. Present plans include the DRI2 infrastructure, RandR 1.3 and a revised input framework, according to a report on Phoronix about the X Developer Summit 2008. X.org 7.5 would arrive after the next major X server revision.
- Independently of X.org, Nvidia released a beta version 177.70 of their proprietary Nvidia graphics driver for x86-32 and x86-64 systems.
- Jeremy Fitzhardinge presented patches for review to the Linux Kernel Mailing List that are the basis for Linux operating as a privileged Xen domain (Dom0).
- 17 drivers for infrared devices from Lirc that have lived outside the main kernel branch have been submitted for inclusion in the main branch by Red Hat developer Jarod Wilson.
- Dave Airlie proposed a new model for the DRM development process to make the developers' work easier and optimise the flow of patches between DRM development and the main development branch.
- After Git 1.6.0 removed commands in the form of "git-foo" which upset some users, Git developers are holding a user survey to obtain feedback.
- Harald Welte published a mini FAQ on the recently released open source VIA graphics drivers. It explains that VIA wants to work with existing developer communities to incorporate this code, and code from the existing open source VIA drivers Chrome and Unichrome, into future X.org releases.
Further background and information about developments in the Linux kernel and its environment can also be found in previous issues of the kernel log at heise open:
- Kernel Log: New stable and pre-release kernels, Ubuntu 8.10 with 2.6.27?
- Kernel Log: New video drivers for AMD, Intel, Nvidia and VIA hardware
- Kernel Log: Kernel development explained, new Synaptics driver, Linux 2.6.27-rc3 published
- Kernel Log: Ath9k driver for Atheros Wifi in 2.6.27; reading material and videos for Linux experts
- Kernel Log: Btrfs 0.16 released, new stable kernels released, Wifi drivers for 2.6.27 merged
- Kernel Log: New Stable kernel, DRI2 postponed, Xgl removed from X.org
|
OPCFW_CODE
|
Difficult to debug extractors
I know a little bit of Python, so I decided to try to debug the bug I just filed, issue #6699. I cloned the repo and did the following in a terminal (which, by the way, was very non-obvious, because doing what CONTRIBUTING.md suggested (python -m youtube_dl) loaded my distro's out-of-date module, not the one in the current directory):
$ python
>>> from youtube_dl.extractor.youtube import YoutubeUserIE
>>> e = YoutubeUserIE()
>>> e.extract("https://www.youtube.com/user/rhettandlink2")
The output I get is this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "youtube_dl/extractor/common.py", line 287, in extract
return self._real_extract(url)
File "youtube_dl/extractor/youtube.py", line 1617, in _real_extract
'Downloading channel page', fatal=False)
File "youtube_dl/extractor/common.py", line 438, in _download_webpage
res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding)
File "youtube_dl/extractor/common.py", line 345, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal)
File "youtube_dl/extractor/common.py", line 324, in _request_webpage
self.to_screen('%s: %s' % (video_id, note))
File "youtube_dl/extractor/common.py", line 495, in to_screen
self._downloader.to_screen('[%s] %s' % (self.IE_NAME, msg))
AttributeError: 'NoneType' object has no attribute 'to_screen'
This doesn't seem to make sense, because the docstring says this:
>>> help(YoutubeUserIE)
...
| extract(self, url)
| Extracts URL information and returns it in list of dicts.
It would seem that the only thing this method should do is to return a list. But instead, it also generates output to the screen, and fails if not run from...the executable script, I guess?
So, since I had run youtube-dl with --dump-pages earlier, I loaded one of the pages (which was a JSON playlist segment) into Python and tried to extract directly from the file:
>>> with open('/tmp/yt/test') as f:
>>> testpage = f.read()
>>> e.extract_videos_from_page(testpage)
[]
This makes no sense, because, having looked at YoutubeChannelIE.extract_videos_from_page, it looks like it should parse out the videos from testpage, which looks like this:
>>> testpage
'{"content_html": " \\n\\n\\n\\u003ctr class=\\"pl-video yt-uix-tile \\" data-set-video-id=\\"\\" data-title=\\"Awkward Elevator Situation (Wheel of Mythicality - Ep. 30)\\" data-video-id=\\"2DpaTtjq1II\\"\\u003e\\u003ctd class=\\"pl-video-handle \\"\\u003e\\u003c\\/td\\u003e\\u003ctd class=...
Now I see that there are double-escaped quotes in there, which will mess with the regexp in YoutubeChannelIE.extract_videos_from_page. So I try to follow the chain of functions that download pages and parse them and decode them to find out how the corrected HTML gets to extract_videos_from_page()...but I am lost in a maze of functions calling functions calling functions, from one file to another, across directories...
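To make the double-escaping concrete: the dumped page is a JSON document, so decoding it with json.loads() restores the plain quotes before the regexp runs. A minimal sketch (the data below is a trimmed, made-up stand-in for the real dump file):

```python
import json
import re

# Trimmed stand-in for a --dump-pages file: a JSON document whose
# "content_html" field carries the escaped HTML fragment.
raw = '{"content_html": "\\u003ctr data-video-id=\\"2DpaTtjq1II\\"\\u003e"}'

# json.loads() undoes the \uXXXX and \" escaping, so a regexp written
# for plain HTML matches again.
html = json.loads(raw)['content_html']
video_ids = re.findall(r'data-video-id="([^"]+)"', html)
print(video_ids)  # ['2DpaTtjq1II']
```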
I will try to summarize:
The instructions in CONTRIBUTING.md are not helpful for trying to debug extractors from a current, cloned repo.
The aforementioned extract() method should do only what it says (return a list), not also output to the screen, which fails if not correctly initialized (for which there is no documentation).
Maybe I'm just a noob, but after about the 5th link in a method-that-calls-another-method-in-another-file-in-another-directory chain, I get lost. All I want to do is import the appropriate module that contains the appropriate extractor, pass it a) URL, or b) some raw HTML, and see what the result is so I can figure out why its regexp isn't working. This seems like it's harder than it should be.
If these issues could be addressed, I would imagine that more people would be able to contribute by fixing the inevitable broken extractors that happen when sites change.
Thanks for any help and for making youtube-dl. I don't mean this to be rude or harsh criticism; I'm just trying to document how I tried to debug it and got stuck so that perhaps the process can be improved.
python -m youtube_dl works perfectly for me, make sure you are running it from the correct directory.
It's not too clear, but extractors only work if they have the .downloader correctly set (via set_downloader or the initialization). So if you really want to use directly the extractor you have to do something like:
from youtube_dl import YoutubeDL
from youtube_dl.extractor import YoutubeUserIE
ydl = YoutubeDL()
ie = YoutubeUserIE(ydl)
info = ie.extract("https://www.youtube.com/user/rhettandlink2")
But in general you shouldn't use the extractors directly:
from youtube_dl import YoutubeDL
ydl = YoutubeDL()
# this resolves redirects and extracts info from playlist entries
info = ydl.extract_info("https://www.youtube.com/user/rhettandlink2", download=False)
Instead of writing python, the method I use (and probably other developers do the same) is to call the program with the correct parameters (python -m youtube_dl URL OTHER_ARGS) and if necessary I put some print(...) call in the extractors to debug them.
About the problem with all the function calls, most functions do a relatively simple thing (download a webpage, extract some value with a regex ...) which simplifies the process and others are used to reduce the complexity of some extractors (in the case of extract_videos_from_page it's called in two of the possible branches of the extraction). I don't think there's a better alternative.
Thanks for your kind answer. That helps me understand it a lot better. I'll see what I can do.
If I may, I suggest that some of this info be added to the CONTRIBUTING.md file. :)
Thanks to your help here, I was able to fix the bug!
|
GITHUB_ARCHIVE
|
Implement cross-server (federated) merge requests
By this, I mean, that one could clone a repository managed by GitLab, make it publicly available somewhere and then manually file a merge request for a particular branch on the forked repository.
This would play along awesomely with git's distributed nature, since it would allow collaboration across different GitLab servers or even to other services such as GitHub. You could e.g. fork a repository hosted on a company GitLab server to GitLab Cloud and still do merge requests.
We like the idea of Edward Bopp: "I think, the first step is to allow users to create merge requests from repositories with custom URIs. This would not require a lot of work in terms of creating a protocol and dealing with spam issues, since one does not really have to deal with the “other” server except for pulling from it." and are accepting merge requests for this.
Hi Eduard, thank you for proposing this. I've edited your suggestion to replace GitHub with GitLab Cloud to make it easier to achieve. Ideally this would depend on an open protocol that everyone including BitBucket and GitHub could adopt. We look forward to a world of federated git hosting.
Personally (Sytse) I think that it would be great if this protocol worked without having to configure the servers explicitly. It might be needed to have a user account on both servers so that SPAM can be prevented.
I think, the first step is to allow users to create merge requests from repositories with custom URIs. This would not require a lot of work in terms of creating a protocol and dealing with spam issues, since one does not really have to deal with the "other" server except for pulling from it.
A next step could be to implement automated forking to other servers. This is where one has to think about protocols and accounts.
Actually I would be interested in trying to do this myself but I have zero experience with both Ruby and the GitLab codebase. Let's see if I can change that…
Eduard, I like your idea of starting small by creating merge requests coming from repositories that are on a different GitLab server. Maybe an even smaller step is allowing you to fork projects from a different Gitlab server, but that can also be added later. Good luck with the implementation.
People https://twitter.com/Carols10cents/status/459753771807932416 suggested using OStatus http://status.net/wiki/OStatus for this.
The git way of doing a cross-server merge request is emailing the output of request-pull http://git-scm.com/docs/git-request-pull
Should we use some kind of DHT for self-discovery of repositories?
Mitar, for now we want to keep it simple with cross-server merge requests. A peer-to-peer GitLab with distributed hash table would be another feature request.
I agree that users should be allowed to submit a merge request from an arbitrary git url and branch. Git is meant to be distributed, it should not be bounded by a centralized service (such as GitHub, and the current GitLab). git-request-pull is great, but integrated with GitLab is even greater!
Well, the minimal solution would be a parser for git-request-pull(1) output, which would check every comment and, if it recognized the comment as a pull request, would add a button for the owner of the repo (e.g., [Create Pull Request]). After pressing that (so it stays constantly under the control of the repo owner), GitLab would set up a new remote (if one doesn't exist yet), fetch it, and create a merge request.
Unfortunately, I cannot code Ruby to save my life (Pythonista here), so I will have to leave it to somebody else
Federation is SO important; one of the biggest problems with the modern Internet is that the web has become totally centralized; federation allows us to really be peer-to-peer, the way the Internet was built and the way Tim Berners-Lee envisioned the web. Bring the power back to individuals! As it stands now, people use GH because "everyone is on it." If GitLab supported federation, then everyone on a (public) GitLab instance would be "on it" as well.
GH's killer feature, facebook's killer feature? "Everyone's on it." Seriously, it's that important.
Re: spam, might be worth seeing how Google Wave & XMPP in general deal with this & federation. GNU Social (managed via GitLab, in fact) also supports federation; I suspect diaspora does too. There's a plethora of open source stuff out there to borrow the model from, and probably even some code. I haven't seen any ruby apps already using federation in my quick search, but here's a supporting library for the WS-Federation protocol: https://github.com/kbeckman/omniauth-wsfed
Or maybe it's as "simple" as building from an OAuth side or an OpenID side (or both), depending on what the architecture should be like.
I really think we should be thinking about this like XMPP.
The comment about OStatus is interesting; Laconica->Status.NET->GNU Social is one of the avenues of design that might be worth looking at, to me.
Erik van Zijst
Is there anyone who has a concrete plan for this and is ready/available to work on it?
In particular, some folks on this thread talk about the "simple" approach where one can input the URL to a remote repo when creating PRs. What would the full workflow here look like?
Federation PRs would be great if they worked across different vendors (e.g. GitHub <--> GitLab), but you could start by just federating between different independent GitLab servers. What is the current status around this idea?
Full workflow with the simple approach would be:
- You have an account on the destination web server
- Go to https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/new
- In the first input field of the source branch field put in the url of a public git repo
- Click compare branches
Erik van Zijst
Some of the questions that I have around this proposal is how it would deal with permissioning. Would this only be supported for source repos that are public?
What would the PR page show? In order to show a diff, the dest server would have to go and fetch from the source. This can be very expensive and so it should anticipate not being able to immediately show anything on the create-pr page.
Would the dest server periodically re-fetch from the source to stay up to date?
I'm not too familiar with GitLab's PR workflow, but on GitHub and Bitbucket PRs are created from within the source repo. This proposal reverses that.
Would the fetched branch actually be brought into the dest repo? What if it clashes with an existing branch (e.g. pull "develop" from source, while the dest already has a "develop"), or would it create a temporary ref under something other than refs/heads?
This proposal would only be for public repos.
I agree the fetch from the source would be slow.
Refetching periodically is not doable I agree.
Maybe we should explore a push alternative (instead of the pull we discussed above).
- Click create PR/MR
- Select a repo on another server (a server that you have OAuth access to)
- Source server posts a JSON file to destination server
- User is redirected to destination server and finishes the PR/MR
- Source server reports the JSON file to the destination server on updates
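Purely as an illustration of the push idea above (every field name here is an assumption, not part of any real GitLab API or agreed protocol), the JSON file posted in step 3 might look like:

```python
import json

# Hypothetical payload a source server could POST when opening a
# federated merge request; the field names are invented for illustration.
payload = {
    "source_repo": "https://gitlab.example.org/alice/project.git",
    "source_branch": "feature-x",
    "target_project": "gitlab-org/gitlab-ce",
    "target_branch": "master",
    "title": "Example federated merge request",
}
body = json.dumps(payload, indent=2)
print(body)
```

The destination server would then fetch the named branch from source_repo and render the merge request; later updates would be reported with the same payload shape.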
|
OPCFW_CODE
|
In database terminology, a cell is a part of a table where a row and column intersect. A cell is designed to hold a specified portion of the data within a record. A cell is sometimes referred to as a field (although a column is also often referred to as a field).
A table row is made up of one or more cells running next to each other horizontally. A column is made up of one or more cells running below each other vertically.
A database table is a structure that organises data into rows and columns – forming a grid.
Tables are similar to worksheets in spreadsheet applications. The rows run horizontally and represent each record. The columns run vertically and represent a specific field. The rows and columns intersect, forming a grid. The intersection of the rows and columns defines each cell in the table.
A database is a collection of data, stored in a logical and structured manner.
The way in which data is organised, allows for efficient retrieval of the data. Data can be viewed, inserted, updated, and deleted as required.
Most modern databases are built with database software such as Microsoft Access, SQL Server, MySQL, etc. But strictly speaking, a database could be as simple as an Excel spreadsheet or even a text file.
In the world of databases, a view is a query that’s stored on a database.
The term can also be used to refer to the result set of a stored query.
To create a view, you write a query, then save it as a view.
To run a view, you query it, just like you’d query a table. The difference is that the view itself is a query. So when you query the view, you’re effectively querying a query. This enables you to save complex queries as views, then run simple queries against those views.
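The query-the-query idea can be sketched with Python's built-in sqlite3 module (the table, view, and column names below are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 50.0, 'open'), (2, 120.0, 'closed'), (3, 80.0, 'open')])

# Save a (potentially complex) query as a view...
conn.execute("CREATE VIEW open_orders AS "
             "SELECT id, amount FROM orders WHERE status = 'open'")

# ...then run a simple query against the view, just like a table.
rows = conn.execute("SELECT id, amount FROM open_orders ORDER BY id").fetchall()
print(rows)  # [(1, 50.0), (3, 80.0)]
```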
In relational database design, a relationship is where two or more tables are linked together because they contain related data. This enables users to run queries for related data across multiple tables.
Relationships are a key element in relational database design.
Here’s an example:
In the above example, the City table has a relationship with the Customer table. Each customer is assigned a city. This is done by using a CityId field in the Customer table that matches a CityId in the City table.
While it’s certainly possible to store the full city name in the Customer table, it’s better to have a separate table that stores the city details. You can easily use a query to look up the CityName by using the CityId that’s stored for that customer.
A foreign key is a field that is linked to another table‘s primary key field in a relationship between two tables.
In relational database management systems, a relationship links two or more tables: the data in one table is related to the data in the other. One table contains the primary key and the other table contains the foreign key.
When we establish a relationship between the tables, we link the foreign key with the primary key. From that point on, any value in the foreign key field should match a value from the primary key field in the other table.
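A runnable sketch of the City/Customer example using Python's sqlite3 module (note that SQLite only enforces foreign keys once the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is per-connection
conn.execute("CREATE TABLE City (CityId INTEGER PRIMARY KEY, CityName TEXT)")
conn.execute("""CREATE TABLE Customer (
    CustomerId INTEGER PRIMARY KEY,
    Name TEXT,
    CityId INTEGER REFERENCES City(CityId))""")

conn.execute("INSERT INTO City VALUES (1, 'Berlin')")
conn.execute("INSERT INTO Customer VALUES (10, 'Alice', 1)")

# Look up the CityName via the stored CityId, as described above.
row = conn.execute("""SELECT c.Name, ci.CityName
                      FROM Customer c JOIN City ci ON c.CityId = ci.CityId
                      """).fetchone()
print(row)  # ('Alice', 'Berlin')

# A CityId with no matching primary key in City is rejected.
try:
    conn.execute("INSERT INTO Customer VALUES (11, 'Bob', 999)")
except sqlite3.IntegrityError as e:
    print('rejected:', e)
```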
|
OPCFW_CODE
|
/*
Write a function called doubleOddNumbers which accepts an array and returns a new array with all of the odd numbers doubled
(HINT - you can use map and filter to double and then filter the odd numbers).
Examples:
doubleOddNumbers([1,2,3,4,5]) // [2,6,10]
doubleOddNumbers([4,4,4,4,4]) // []
*/
const doubleOddNumbers = arr => {
return arr.filter((el) => el % 2 !== 0).map(el => el * 2);
}
console.log(doubleOddNumbers([1,2,3,4,5]));
console.log(doubleOddNumbers([4,4,4,4,4]));
console.log(doubleOddNumbers([1,2,3,4,5,6,7,8,9,10]));
|
STACK_EDU
|
HDDS-11162. Improve Disk Usage page UI
What changes were proposed in this pull request?
Improve Disk Usage page UI.
Please describe your PR in detail:
This PR adds a new DU Page with improved layout and design.
The new page would help to eventually improve the overall look and feel for Recon
Currently the page is disabled to the user so as to not affect any existing functionality
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-11162
How was this patch tested?
Patch was tested manually
Pls explain the above behavior, everything went grayed out when we clicked on legends.
Yes, this is expected behaviour.
Currently the legends also acts as a filter to select/deselect the paths that are shown as a part of the pie-chart.
So if you click on a legend item, it will be removed/added to the pie-chart.
https://github.com/user-attachments/assets/de333c3e-f210-4616-adc9-ca642bf38167
If we deselect all of the paths then it will show a grey chart, which will again be enabled after paths are selected.
https://github.com/user-attachments/assets/f9494445-38df-4f6c-81ba-0c4b304e833c
Thanks for asking this @devmadhuu.
Thanks @devabhishekpal for providing the detailed video on how legend filters are working. However IMO, the displayed size values should update based on the filter and the entity the user has selected, but currently, even when we change the legend selection for filtering, the values do not change in your video either. Pls let me know your opinion about it.
Attaching screenshot for your reference.
Hi @devmadhuu, yup just verified your scenario. As a part of the latest commit a342780 I have fixed this issue. Now it will adjust the occupied space on the basis of the entity selected.
i.e say total space is 500GB
We are having the following tree:
/vol1 -> 128GB
|-> buck 1 -> 20GB
|-> buck2 -> 108GB
/s3v -> 200GB
|-> buck1 -> 200GB
|-> buck2 -> 0GB
On Volume level it will show 328 GB / 500 GB
On /vol1/buck1 level it will show 20 GB / 500 GB and so on depending on the selected entity
I pulled latest code, but still don't see the issue resolved. Pls see below screenshot:
Latest changes address this issue.
Attached screen recording of the same.
https://github.com/user-attachments/assets/0062a19c-0767-499b-8ef3-f4da07db4532
Thanks for updating the patch @devabhishekpal
After picking up the latest changes it seems as we drill down deeper into the namespace the /summary and /quota endpoint seems to be not called.
Even though I am at the bucket level, the Entity Type is shown to be at ROOT level.
After doing inspect element I see that the /summary and the /quota endpoints are not called at all after the ROOT level.
Thanks for the changes @devabhishekpal a few more minor comments and I think we should be good to go :-
The things that I mention here are also present in the old UI and we should have fixed them there :-
For bucket level can we convert the Used Bytes to KB, MB, GB ...
For entity type KEY I believe we could show the creation and modification time also.
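The unit-conversion ask in the first point is a small display helper; this is only a sketch (the function name byteToSize is made up, not Recon's actual utility):

```javascript
// Hypothetical helper for the review comment above: scale raw bytes
// into B / KB / MB / GB / TB for display.
const byteToSize = (bytes) => {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  let value = bytes;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i += 1;
  }
  // Show one decimal for small scaled values, e.g. "1.5 KB".
  return `${value.toFixed(value < 10 && i > 0 ? 1 : 0)} ${units[i]}`;
};

console.log(byteToSize(20 * 1024 ** 3)); // "20 GB"
console.log(byteToSize(1536));           // "1.5 KB"
```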
Thanks for the patch @devabhishekpal thanks for the review @devmadhuu.
@devabhishekpal I forgot to mention HDDS-11495, which I found earlier. Pls check if this was resolved.
|
GITHUB_ARCHIVE
|