Thursday, December 29, 2016

Updated Favorite Subdomains List

This is a small addition prompted by the reworking of the SSL strategy I am using on my projects-- StartSSL is no longer a choice for me due to some poor, unrelated decisions by the staff. That said, I had a good experience with them before the kerfuffle and wish them well as we part ways, now that they no longer fulfill my needs from a cert issuer.

On GitHub -- https://github.com/bradchesney79/subdomains

I added a few more to the list:

admin
alpha
app
api
beta
blog
css
dev
feed
files
forum
ftp
help
image
images
imap
img
info
js
lists
live
m
mail
media
mobile
mysql
news
photos
pic
pop
search
secure
smtp
static
status
store
support
test
videos
vpn
webmail
wiki
www

as a comma separated value (CSV) list:

admin,alpha,app,api,beta,blog,css,dev,feed,files,forum,ftp,help,image,images,imap,img,info,js,lists,live,m,mail,media,mobile,mysql,news,photos,pic,pop,search,secure,smtp,static,status,store,support,test,videos,vpn,webmail,wiki,www

bash script ready:

#!/bin/bash

# The original subshell version '(echo ... && exit 2)' only exited the subshell;
# a brace group makes the failure actually stop the script.
arraytest[0]='test' || { echo 'Failure: arrays not supported in this version of bash.'; exit 2; }

unset 'arraytest[0]'

SUBDOMAINS=(
  admin
  alpha
  app
  api
  beta
  blog
  css
  dev
  feed
  files
  forum
  ftp
  help
  image
  images
  imap
  img
  info
  js
  lists
  live
  m
  mail
  media
  mobile
  mysql
  news
  photos
  pic
  pop
  search
  secure
  smtp
  static
  status
  store
  support
  test
  videos
  vpn
  webmail
  wiki
  www
)

for i in "${SUBDOMAINS[@]}"
  do
    printf '%s\n' "$i"
done

exit 0
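Since this list exists to feed the SSL strategy above, one handy use is building a subjectAltName string for a certificate request. A minimal sketch-- example.com is a placeholder domain, and three names stand in for the full list:

```shell
# Turn subdomain names into an openssl-style subjectAltName string.
# example.com is a placeholder; swap in the full SUBDOMAINS list in practice.
printf '%s\n' admin api www | sed 's/^/DNS:/; s/$/.example.com/' | paste -sd, -
# prints: DNS:admin.example.com,DNS:api.example.com,DNS:www.example.com
```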

Friday, December 9, 2016

I am worried that I do not like leading high speed software teams

I take away from this uncomfortable situation that I need to start thinking about how to channel all this 'productivity'. I hope to come up with a process that ensures forward progress from it, so that in the days after this eye-opening experience I can guide everyone's experience better.

I have experienced my first complete breakdown of working with people not like myself. It is disheartening to give up nights and weekends-- my own aspirations and extra time with the people I love most. I have chosen to make a sacrifice in the short term by investing in a project. I've given up some things dear to me out of duty to an obligation. The motivation for accepting was that I believe the investment will yield a comfortable schedule, much autonomy in my private and professional life, and a rewarding end result that makes the world a better place to live in-- because that matters to me.

Today I had to take time to be a better person than I was when I woke up. I am angry that people did not listen to my request to better themselves and know the subject material when directly asked to study. The pseudo code I received in the morning as part of a question made it clear that no effort was made to acquaint themselves with the concepts of this task.
 
Remember the movie The Matrix? I felt like the spoon kid in the Oracle's apartment. The whole time I'm thinking to myself, "Login, *creepy stare and mystical hand movements*, there is no login..."

The following is the resulting response e-mail regarding a 'login issue' that is actually an authentication & access issue:

This is a conceptual change in thinking. We keep saying login, but in reality this is merely a tool to persistently accept something given that we trust. (They get the trusted thing by supplying credentials but never actually 'log in'. They get the token and we essentially forget about them until they come back asking for something with the token.)

The token is a tool to verify authentication on an ongoing basis. There is no 'state' preserved during the life of the token.

It is okay to send as a header item as we are because SSL does encrypt it there-- we have maintained the user's privacy.
By sending the token in a cookie as 'httpOnly' and 'secure', client side javascript has no access through common channels-- we have matched the security level of a session cookie.
What makes us trust them is that we have baked in a secret that cannot be distilled from the whole. Since we made it and we know the secret, we can compare what we received to a newly generated one (just like with comparing a password hash to the hash created with a submitted password) -- we can trust a token because we can verify that it has not been tampered with.
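The compare-to-a-freshly-generated-one idea can be sketched with openssl and an HS256-style signature. The secret, header, and payload below are made-up example values, not our real ones:

```shell
#!/bin/bash
# Hypothetical sketch of signature verification: regenerate the signature
# from the received token's first two segments and compare. All values here
# are illustrative placeholders.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

SECRET='example-signing-secret'
HEADER=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '%s' '{"userId":42}' | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
TOKEN="$HEADER.$PAYLOAD.$SIG"

# Trust check: recompute the signature over 'header.payload' with our secret.
FRESH=$(printf '%s' "${TOKEN%.*}" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
if [ "$FRESH" = "${TOKEN##*.}" ]; then
  echo '{"validToken":"true"}'
else
  echo '{"validToken":"false"}'
fi
```

An untampered token recomputes to the same signature-- just like comparing a stored password hash to the hash of a submitted password.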

As a bonus we can include very basic arbitrary data like a user id, their name in text, and any organizations they belong to (eventually theming colors)-- this saves us some database lookup overhead.
However, we must be careful with what data we decide to put in the token as this increases the size of every request of a protected resource received and many responses.
We are using PHP sessions with AWS Redis-flavored ElastiCache. At this time we are using it as a cache to lighten the load on the database-- not as any part of authentication or 'login'. We need to shift our thinking from the idea of 'logged in' as an active user and more to the idea that the person is authenticated and has access.
That changes the question from "is this person logged in?" to "does this person have a valid thing we gave them?"

Please adjust your code to match these concepts. If I were doing it, this is a method I might write.
validateRequest() {
  if (isset(cookie('token')) && tokenIsValid()) {
    if (!associatedCache()) {
      get userObject by userId from token
      store generatedUserObject
      get orgObjects by associatedOrg[] from userObject
      store generatedOrgObject[]
    }
    return '{"validToken":"true"}';
  }
  else {
    // regardless of what is wrong with the token, they need a new one
    return '{"requestedResource":"' . __FILE__ . '","validToken":"false"}';
  }
}


Since we are not using routing, __FILE__ is meaningful data.


*You might see some fibbing or incomplete things-- no logging, no cache invalidation, and a session token is more secure if we don't add any of our own arbitrary data in the second block of the JWT. That the info is available at all makes it 'less secure'. I've balanced the security against the convenience-- this is where the chips lie; I have some data in my JWT.

Friday, October 7, 2016

sources.list Generators, Yaaay!

This stuff is important, so definitely pay attention to where you are sourcing the parts of your system. That said, these generators save a lot of time when dealing with configuration that isn't often touched. As always, this post is more for me than you, but if you can benefit from it, good.

For Debian:

https://debgen.simplylinux.ch/index.php

 And what I usually want for Debian Jessie is...

#------------------------------------------------------------------------------#
#                             OFFICIAL DEBIAN REPOS                            #
#------------------------------------------------------------------------------#

###### Debian Main Repos
deb http://ftp.us.debian.org/debian/ jessie main contrib non-free

###### Debian Update Repos
deb http://security.debian.org/ jessie/updates main contrib non-free

#------------------------------------------------------------------------------#
#                               UNOFFICIAL REPOS                               #
#------------------------------------------------------------------------------#

###### 3rd Party Binary Repos

#### Dotdeb - http://www.dotdeb.org
## Run this command: wget -q -O - http://www.dotdeb.org/dotdeb.gpg | apt-key add -
deb http://packages.dotdeb.org jessie all



For Ubuntu/Kubuntu:

https://repogen.simplylinux.ch/

And similarly as above...

#------------------------------------------------------------------------------#
#                            OFFICIAL UBUNTU REPOS                             #
#------------------------------------------------------------------------------#


###### Ubuntu Main Repos
deb http://us.archive.ubuntu.com/ubuntu/ xenial main restricted universe multiverse

###### Ubuntu Update Repos
deb http://us.archive.ubuntu.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu/ xenial-updates main restricted universe multiverse

###### Ubuntu Partner Repo
deb http://archive.canonical.com/ubuntu xenial partner

#------------------------------------------------------------------------------#
#                           UNOFFICIAL UBUNTU REPOS                            #
#------------------------------------------------------------------------------#


###### 3rd Party Binary Repos

#### Gimp PPA - https://launchpad.net/~otto-kesselgulasch/+archive/gimp
## Run this command: sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 614C4B38
deb http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu xenial main

#### Google Chrome Browser - http://www.google.com/linuxrepositories/
## Run this command: wget -q https://dl.google.com/linux/linux_signing_key.pub -O- | sudo apt-key add -
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main

#### Opera - http://www.opera.com/
## Run this command: sudo wget -O - http://deb.opera.com/archive.key | sudo apt-key add -
deb http://deb.opera.com/opera/ stable non-free

#### Oracle Java (JDK) Installer PPA - http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
deb http://ppa.launchpad.net/webupd8team/java/ubuntu vivid main

#### Samsung Unified Linux Driver Repository (SULDR) - http://www.bchemnet.com/suldr/index.html
## Run this command: wget -O - http://www.bchemnet.com/suldr/suldr.gpg | sudo apt-key add -
deb http://www.bchemnet.com/suldr/ debian extra

#### SimpleScreenRecorder PPA - http://www.maartenbaert.be/simplescreenrecorder/
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 283EC8CD
deb http://ppa.launchpad.net/maarten-baert/simplescreenrecorder/ubuntu xenial main

#### VirtualBox - http://www.virtualbox.org
## Run this command: wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox_2016.asc -O- | sudo apt-key add -
deb http://download.virtualbox.org/virtualbox/debian xenial contrib

#### Wine PPA - https://launchpad.net/~ubuntu-wine/+archive/ppa/
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 883E8688397576B6C509DF495A9A06AEF9CB8DB0
deb http://ppa.launchpad.net/ubuntu-wine/ppa/ubuntu xenial main


## Run this aggregated command: sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 614C4B38; wget -q https://dl.google.com/linux/linux_signing_key.pub -O- | sudo apt-key add -; sudo wget -O - http://deb.opera.com/archive.key | sudo apt-key add -; sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886; wget -O - http://www.bchemnet.com/suldr/suldr.gpg | sudo apt-key add -; sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 283EC8CD; wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox_2016.asc -O- | sudo apt-key add -; sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 883E8688397576B6C509DF495A9A06AEF9CB8DB0
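An aggregated command like the one above can be regenerated from the "## Run this command:" comments themselves. A sketch-- sample.list is a throwaway stand-in for the real sources.list:

```shell
# Build a single semicolon-joined key-import command from the
# "## Run this command:" comment lines in a sources.list-style file.
# sample.list is a throwaway illustration file.
cat > sample.list <<'EOF'
## Run this command: sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 614C4B38
deb http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu xenial main
## Run this command: sudo wget -O - http://deb.opera.com/archive.key | sudo apt-key add -
deb http://deb.opera.com/opera/ stable non-free
EOF

grep '^## Run this command: ' sample.list | sed 's/^## Run this command: //' | paste -sd ';' -
rm sample.list
```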


Wednesday, October 5, 2016

How to Avoid Interactive SSH Prompts for Git Clone and SSH in General with bitbucket and Good for github as Well

So, I'm searching for a mundane way to bypass the unknown-host manual interaction when cloning a git repo, as shown below:

brad@computer:~$ git clone git@bitbucket.org:viperks/viperks-api.git
Cloning into 'viperks-api'...
The authenticity of host 'bitbucket.org (104.192.143.3)' can't be established.
RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40.
Are you sure you want to continue connecting (yes/no)?


Note the RSA key fingerprint...

So, this is an SSH thing-- it will work for git over SSH and SSH-related things in general...

brad@computer:~$ nmap bitbucket.org --script ssh-hostkey

Starting Nmap 7.01 ( https://nmap.org ) at 2016-10-05 10:21 EDT
Nmap scan report for bitbucket.org (104.192.143.3)
Host is up (0.032s latency).
Other addresses for bitbucket.org (not scanned): 104.192.143.2 104.192.143.1 2401:1d80:1010::150
Not shown: 997 filtered ports
PORT    STATE SERVICE
22/tcp  open  ssh
| ssh-hostkey:
|   1024 35:ee:d7:b8:ef:d7:79:e2:c6:43:9e:ab:40:6f:50:74 (DSA)
|_  2048 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40 (RSA)
80/tcp  open  http
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 42.42 seconds


First, install nmap. nmap is highly helpful for certain things, like this-- manually verifying SSH fingerprints. But, back to what we are doing.

Good. Either I'm compromised on the multiple machines and networks where I've checked it-- or the more plausible explanation, that everything is hunky dory, is what is happening.

That 'fingerprint' is just a string shortened with a one-way algorithm for our human convenience, at the risk of more than one string resolving to the same fingerprint. It happens; they are called collisions.

Regardless, back to the original string which we can see in context below.

brad@computer:~$ ssh-keyscan bitbucket.org
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-128
no hostkey alg
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-129
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-123
no hostkey alg


So, ahead of time, we have a way of asking for a form of identification from the original host.

At this point, checking manually is as vulnerable as checking automatically-- the strings match, we have the base data that creates the fingerprint, and we could ask for that base data (preventing collisions) in the future.
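As a sanity check, the fingerprint can be recomputed from the keyscan output by hand-- the old-style OpenSSH fingerprint is just the MD5 hash of the base64-decoded key blob (the third field). Run against the key captured above, this should reproduce the 97:8c:1b:f2:... fingerprint nmap reported:

```shell
# Recompute an old-style (MD5) SSH host key fingerprint from a keyscan line.
# KEYLINE is the bitbucket.org RSA key captured earlier in this post.
KEYLINE='bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw=='

# Take the base64 blob, decode it, and hash it with MD5 in colon notation.
printf '%s' "$KEYLINE" | awk '{print $3}' | tr -d '\n' | openssl base64 -d -A | openssl md5 -c
```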

Now to use that string in a way that prevents asking about a host's authenticity...

The known_hosts file in this case does not use plaintext entries. You'll know hashed entries when you see them, they look like hashes with random characters instead of xyz.com or 123.45.67.89.

brad@computer:~$ ssh-keyscan -t rsa -H bitbucket.org
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-128
|1|yr6p7i8doyLhDtrrnWDk7m9QVXk=|LuKNg9gypeDhfRo/AvLTAlxnyQw= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==


The first comment line infuriatingly shows up-- but it is written to stderr, so a simple ">" or ">>" redirect of stdout will keep it out of your file.

As I've done my best to obtain untainted data identifying a "host" to trust, I will add this identification to the known_hosts file in my ~/.ssh directory. Since the host will now be recognized as known, I will not get the prompt shown at the top of this post.

Thanks for sticking with me; here you go. I'm adding the bitbucket RSA key so that I can interact with my git repositories there in a non-interactive way as part of a CI workflow-- but you do whatever you want.

#!/bin/bash
cp ~/.ssh/known_hosts ~/.ssh/known_hosts.old && echo "|1|yr6p7i8doyLhDtrrnWDk7m9QVXk=|LuKNg9gypeDhfRo/AvLTAlxnyQw= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==" >> ~/.ssh/known_hosts

So, that's how you stay a virgin for today. You can do the same with github by following similar directions on your own time.

I saw so many stack overflow posts telling you to programmatically add the key blindly without any kind of checking. The more you check the key from different machines on different networks, the more trust you can have that the host is the one it says it is-- and that is the best you can hope from this layer of security.

WRONG
ssh -oStrictHostKeyChecking=no hostname [command]

WRONG
ssh-keyscan -t rsa -H hostname >> ~/.ssh/known_hosts


Don't do either of these things, please. You're given the opportunity to increase your chances of avoiding someone eavesdropping on your data transfers via a man-in-the-middle attack-- take that opportunity. The difference is literally verifying that the RSA key you have belongs to the bona fide server, and now you know how to get that information and compare it so you can trust the connection. Just remember: more comparisons from different computers and networks will usually increase your ability to trust the connection.

Friday, September 23, 2016

Stupid Concatenated SSL .pem Maker

I seem to forget which order concatenated .pem certificates go in. I only do this kind of thing every so often.

The short answer is:

Private key on top.
Certificate from the signing authority in the middle.
Certificate Authority Chain file next.
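The ordering above is just concatenation. A runnable sketch with stand-in files-- in practice these would be your real key, signed certificate, and CA chain:

```shell
# Stand-in files so the sketch is runnable; substitute your real
# private key, signed certificate, and CA chain files.
printf -- '-----PRIVATE KEY-----\n'  > example.com.key
printf -- '-----CERTIFICATE-----\n'  > example.com.crt
printf -- '-----CA CHAIN-----\n'     > ca-chain.crt

# Order matters: private key on top, then the signed cert, then the chain.
cat example.com.key example.com.crt ca-chain.crt > example.com.pem
cat example.com.pem

rm example.com.key example.com.crt ca-chain.crt example.com.pem
```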

https://github.com/bradchesney79/make-concatenated-ssl-pem

I have seen people refer to both the Certificate Authority Chain file and the file my script creates as a bundle. That is wrong-- the bundle is all two or three things (sometimes the chain file is optional).

If you want to know more about why and how certificates work, keep reading.

You start by generating a private key: a string that can be used mathematically to modify data in a way that is not easily read by conventional means. This key is generally used to make a certificate signing request (CSR)-- a block of text based upon the private key that allows a Certificate Authority to issue an SSL certificate without you giving them your private key.

A Certificate Authority (CA) signs and distributes SSL certificates. An SSL certificate is used in a similar manner to a public key and has metadata about your site, such as the domain name secured, where the 'owner' of the site says the site is based, what level of 'reversible entropy (chaos)' is used to encrypt the data, what type of encryption to use, and sometimes less commonly used factoids. You will see 2048-bit or SHA256 for current levels and types of encryption, for instance.

There is one root Certificate Authority in a chain-- which is conceptually shaped more like a tree in reality. The next top most Certificate Authorities are all verified as trusted by the root Certificate Authority. And layer upon layer so on and so forth so that the demonstration CA "Deering" and CA "Dunkirk" are trusted by CA "Catskills" who in turn is trusted by CA "Bridgeport". CA "Bridgeport" is trusted by none other than the root CA, CA "Alpha".



The Certificate Authority Chain file allows the browser to identify which signing authority said the site in question can be trusted. Due to it being a chain of trust, there may be more than one block. Some CA Chain files have more blocks to facilitate verifying trust for web 'agents' (usually a browser) in case the parent entity issuing the certificate isn't one your browser inherently trusts.

Certificates that are inherently trusted happen because the entities publishing the browser software may decide to include portions of the chain to bypass certain lookups and speed up verification of trust. Say a website is being provided by CA Eastwood. The browser asks Dunkirk about Eastwood; Dunkirk says yes, Eastwood is good. Your browser asks Catskills if Dunkirk is trusted; Catskills affirms Dunkirk is trusted. Your browser then asks Bridgeport if Catskills is trusted, which it is. Finally, your browser asks Alpha if Bridgeport is on the up & up, and it is-- so the webpage you asked for shows up on the screen instead of an SSL warning screen.

Much of this would be silly to repeat on every request, so the publisher decides Catskills is as far down as it will be asking. Instead of going to the Internet and verifying every little detail up and down the chain, many of the bits and pieces are installed right in the browser, which reduces a five-entity request process by three requests, lessening load times and network traffic.

On a sort of related note, these preinstalled certifications are a good reason to keep your browser updated. Changes in which certs are sanctioned for good or bad happen with browser application updates.

And at a high level, that is how it works.

Protip for getting to the bottom: after installing the browser, you can add the certificates your favorite sites use alongside the publisher-installed certs, and it can help load times-- more so for smaller sites that may be paying bottom dollar for SSL certificates from a CA in the nosebleed seats of the chain of trust.

Tuesday, July 12, 2016

Fun Stuff with MySQL and Forks Giving Analyse() a Whirl

Go ahead, try it on your tables...

mysql> select * from users procedure ANALYSE();

Yeah, it's neat. It tells you all kinds of stuff, like what the optimal column data types or lengths might be. Curious about the longest value in a particular column of a table? This "procedure ANALYSE()" will tell you.

Thursday, June 16, 2016

Docker Push Trials and Tribulations...

So, we've got a little bit of an *buntu, docker, & dockerhub SNAFU.

I cannot push to my little corner of the default and standard repo. I kept seeing docker.io when I was hoping to see hub.docker.com. I mean, that is how the URL works for GitHub-- why wouldn't Docker Hub work the same way? Right? Wrong.

So, below is the problem:

[brad@T540p testContainer]$ ls
Dockerfile
[brad@T540p testContainer]$ docker build -t bradchesney79/test:latest .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM bradchesney79/ubuntu:20160615
 ---> 9cde4627020a
Step 2 : MAINTAINER Brad Chesney <bradchesney79@gmail.com>
 ---> Running in aa3bc6c8be24
 ---> 2a8f3a7e4d36
Removing intermediate container aa3bc6c8be24
Step 3 : RUN apt-get -y update && apt-get -y upgrade
 ---> Running in 39c0d05faae7
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [84.3 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [268 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [117 kB]
Fetched 659 kB in 1s (586 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 ---> c97c8d3d88be
Removing intermediate container 39c0d05faae7
Successfully built c97c8d3d88be
[brad@T540p testContainer]$ docker login -u bradchesney79 -p password hub.docker.com && docker push bradchesney79/test:latest
Login Succeeded
The push refers to a repository [docker.io/bradchesney79/test]
bcf9ff8e307f: Preparing
75ea94f4fa07: Preparing
3c584ff3f03d: Preparing
5f70bf18a086: Preparing
737f40e80b7f: Preparing
82b57dbc5385: Waiting
19429b698a22: Waiting
9436069b92a3: Waiting
unauthorized: authentication required



The first part of the last command was the problem. It needs --email or -e as part of the login credentials, as shown below:


[brad@T540p testContainer]$ docker login -u bradchesney79 -e bradchesney79@gmail.com -p password hub.docker.com && docker push bradchesney79/test:latest
Warning: '-e' is deprecated, it will be removed soon. See usage.
Login Succeeded
The push refers to a repository [docker.io/bradchesney79/test]
bcf9ff8e307f: Pushed
75ea94f4fa07: Pushed
3c584ff3f03d: Pushed
5f70bf18a086: Pushed
737f40e80b7f: Pushed
82b57dbc5385: Pushed
19429b698a22: Pushed
9436069b92a3: Pushed
latest: digest: sha256:4f3069d3833c4519f5aa0fc5b805bde9963c705ef8c66aa1e472f451053f4bd9 size: 1985


Note the deprecated -e flag... fun stuffs. I wonder what I am supposed to use instead when -e stops working?

Wednesday, April 13, 2016

Quick Login Woes

So, off the top of my head I had a slower than I would like login to three sites today.

I don't mean to say they are bad sites or whatever. What I do mean to say is that it would be nice to be able to log in from the initial landing page.

The three I will name and the different ways they could be more efficient to use:

gMail

Come on Google, I get that allowing me to fill in my email or account name, clicking, having the account textbox fade out, the password textbox fade in, and then letting me give my password before submitting is nice, smooth, and fancy. Well done for the goals attempted.

But, I'd rather both textboxes and the submit button be available right on the index page.

Funimation

I don't want to dump on this company, they have provided me much entertainment for the very reasonable price of $8 a month. However, my first visit is redirected to http://www.funimation.com/welcome which has a login link. At least when you click that link from the welcome page, the login fields are showing by default at http://www.funimation.com/home. I want them to know that improvement is noticed and appreciated as formerly that was not the case.

But, I'd rather both textboxes and the submit button be available right on the index page.

Evernote

Again, well done for their purposeful execution of the designed functionality. But, after I land on the index page I have to click, then I can log in. This one is the most frustrating as I use this site so often. Aside from the initial unnecessary, annoying click, I like using it and it is very helpful in my day to day.

But, I'd rather both textboxes and the submit button be available right on the index page.

I would love to hear why these designs are unavoidable, and not any excuses about brute force bots-- which won't have any trouble navigating to Step 2, where the logging in happens.

Sunday, March 6, 2016

Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

So, I was getting an error when updating due to my laptop automatically wanting to install the 32-bit Chrome browser package from the Google deb repository.

The error verbatim:
Failed to fetch http://dl.google.com/linux/chrome/deb/dists/stable/Release  Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

The solution is to specify you want the package for 64 bit architecture in your sources. You can do it in your /etc/apt/sources.list or via files within /etc/apt/sources.list.d/ (which is where Google installs its PPA for you).

Notice the architecture specification in the deb PPA line:
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
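If you are unsure which architecture your box actually is before pinning one in the sources line, a quick check (a sketch: dpkg reports apt's native architecture on Debian-flavored systems, with uname as a generic fallback):

```shell
# Print the architecture apt should be requesting.
# dpkg knows apt's native arch; uname -m is the kernel-level fallback.
dpkg --print-architecture 2>/dev/null || uname -m
```

On a 64-bit install this prints amd64 (or x86_64 from the fallback), matching the [arch=amd64] specification.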

This is my whole sources.list file, with the exception of the Ubuntu Make entry for installing Android Studio-- because why not:

#------------------------------------------------------------------------------#
#                            OFFICIAL UBUNTU REPOS                             #
#------------------------------------------------------------------------------#


###### Ubuntu Main Repos
deb http://us.archive.ubuntu.com/ubuntu/ wily main restricted universe multiverse

###### Ubuntu Update Repos
deb http://us.archive.ubuntu.com/ubuntu/ wily-security main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu/ wily-updates main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu/ wily-backports main restricted universe multiverse

###### Ubuntu Partner Repo
deb http://archive.canonical.com/ubuntu wily partner

#------------------------------------------------------------------------------#
#                           UNOFFICIAL UBUNTU REPOS                            #
#------------------------------------------------------------------------------#


###### 3rd Party Binary Repos

#### Gimp PPA - https://launchpad.net/~otto-kesselgulasch/+archive/gimp
## Run this command: sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 614C4B38
deb http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu wily main

#### Google Chrome Browser - http://www.google.com/linuxrepositories/
## Run this command: wget -q https://dl-ssl.google.com/linux/linux_signing_key.pub -O- | sudo apt-key add -
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main

#### Kubuntu Backports PPA - https://edge.launchpad.net/~kubuntu-ppa/+archive/backports
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8AC93F7A
deb http://ppa.launchpad.net/kubuntu-ppa/backports/ubuntu wily main

#### LibreOffice PPA - http://www.documentfoundation.org/download/
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1378B444
deb http://ppa.launchpad.net/libreoffice/ppa/ubuntu wily main

#### Opera - http://www.opera.com/
## Run this command: sudo wget -O - http://deb.opera.com/archive.key | sudo apt-key add -
deb http://deb.opera.com/opera/ stable non-free

#### Oracle Java (JDK) Installer PPA - http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html
## Run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
deb http://ppa.launchpad.net/webupd8team/java/ubuntu wily main

#### VirtualBox - http://www.virtualbox.org
## Run this command: wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -
deb http://download.virtualbox.org/virtualbox/debian wily contrib

#### Linode CLI
## Run this command: wget -O- https://apt.linode.com/linode.gpg | sudo apt-key add -
deb http://apt.linode.com/ stable main

Friday, February 5, 2016

Getting basic relevant Macbook specs from the command line

So, this is a command to show you the specs, then dump the information on your Desktop in an XML file. Yaaay!

system_profiler SPHardwareDataType SPSerialATADataType SPAirPortDataType SPDisplaysDataType -detailLevel mini && system_profiler SPHardwareDataType SPSerialATADataType SPAirPortDataType SPDisplaysDataType -detailLevel mini -xml > "$HOME/Desktop/$USER-system-profile.xml"

"Man, you could do it with this one super obscure convention you would hardly ever use without the &&..."
'Yeah, probably.'

Sunday, January 31, 2016

Lenovo s10-3 Netflix fix

So, when you offload video from the Intel Atom N45x series processor, things get better six ways from Sunday. But then you've filled up your regular mini PCIe slot, and you have to go with USB Wifi one way or another. They make USB Wifi modules that use the USB wiring in the other little mini PCIe slot (since there is no PCIe connection in the extra slot).

The parts are linked below:

Broadcom BCM70015 Crystal HD PCI Express Mini Card Video/Audio Hardware Decoder for Apple TV 1080p (TBS7015)
A discrete mini PCIE video card

SparkLAN WPER-172GN / 802.11nbg 1Tx2R MIMO / USB Half-Size MiniCard (Ralink RT5390U)
An 802.11n/g/b Wifi card with a mini PCIe connection-- but wired for USB

People also have reported luck with the Qcom Q802XKG.
Qcom QCM-Q802XKG Wireless WiFi WLAN Mini PCI-E Card 54Mbps 802.11b/g 60-6229-1A
Qcom QCM-Q802XKG
