Posts Tagged ‘security’
WordPress 5.x Hardening Guide for CentOS 7.6
This document explains the process of installing, configuring and hardening a WordPress 5.x server (Apache, MariaDB and PHP 7.3) on a default CentOS 7.6 installation (Linux firewall and SELinux enabled by default), including OpenSSL built from source for TLS v1.2 support.
- Prerequisites
- Linux server installed with CentOS 7.6 (64bit)
- policycoreutils-python-* package installed
- setools-libs-* package installed
- libcgroup-* package installed
- audit-libs-python-* package installed
- libsemanage-python-* package installed
- gcc* package installed
- gcc-c++* package installed
- autoconf* package installed
- automake* package installed
- libtool* package installed
- perl-core package installed
- zlib-devel package installed
- expat-devel package installed
- yum-utils package installed
- OpenSSL upgrade phase
- Login using privileged account
- Run the commands below to download the latest build of OpenSSL:
cd /usr/local/src
wget https://www.openssl.org/source/openssl-1.1.1.tar.gz
tar -xvzf openssl-1.1.1.tar.gz
- Run the commands below to compile the latest build of OpenSSL:
cd openssl-1.1.1
./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl shared zlib
make
make test
make install
- Edit using VI the file /etc/ld.so.conf.d/openssl-1.1.1.conf and add the following string to the file:
/usr/local/ssl/lib
- Run the command below to reload the dynamic linker cache:
ldconfig -v
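- Optional check: confirm that the dynamic linker now knows about the new libraries (assuming the default prefix used above):
ldconfig -p | grep /usr/local/ssl
Note: The output should list libssl.so.1.1 and libcrypto.so.1.1 under /usr/local/ssl/lib.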
- Backup the original OpenSSL binary:
mv /usr/bin/openssl /usr/bin/openssl.BACKUP
- Create using VI the file /etc/profile.d/openssl.sh and add the following content:
#Set OPENSSL_PATH
OPENSSL_PATH=/usr/local/ssl/bin
export OPENSSL_PATH
PATH=$PATH:$OPENSSL_PATH
export PATH
- Run the commands below to complete the configuration of OpenSSL:
chmod +x /etc/profile.d/openssl.sh
source /etc/profile.d/openssl.sh
echo $PATH
which openssl
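- Optional check: confirm that the shell now resolves the newly built binary (assuming the default prefix used above, which openssl should return /usr/local/ssl/bin/openssl):
openssl version
Note: The output should report OpenSSL 1.1.1.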
- Apache 2.4.6 installation phase
- Login using privileged account
- Run the command below to install Apache 2.4.6:
yum install httpd -y
- Updating Ownership and Permissions on Apache folders:
chown root:root /usr/sbin/apachectl
chown root:root /usr/sbin/httpd
chmod 770 /usr/sbin/apachectl
chmod 770 /usr/sbin/httpd
chown -R root:root /etc/httpd
chmod -R go-r /etc/httpd
chown -R root:root /etc/httpd/logs
chmod -R 700 /etc/httpd/logs
- Create folder for the web content:
mkdir -p /www
- Updating Ownership and Permissions on the web content folder:
chown -R root /www
chmod -R 775 /www
- Fix the SELinux security context on the new web folder:
semanage fcontext -a -t httpd_sys_content_t "/www(/.*)?"
restorecon -F -R -v /www
chcon -R -t httpd_sys_content_t /www
- Create folder for the first WordPress site:
mkdir /www/WebSiteA
Note: Replace WebSiteA with the relevant name
- Create folder for the second WordPress site:
mkdir /www/WebSiteB
Note: Replace WebSiteB with the relevant name
- Create logs folder for the first WordPress site:
mkdir /www/WebSiteA/logs
Note: Replace WebSiteA with the relevant name
- Create logs folder for the second WordPress site:
mkdir /www/WebSiteB/logs
Note: Replace WebSiteB with the relevant name
- Configure permissions on the logs folder for the first WordPress site:
chown -R apache:apache /www/WebSiteA/logs
chmod -R 700 /www/WebSiteA/logs
Note: Replace WebSiteA with the relevant name
- Configure permissions on the logs folder for the second WordPress site:
chown -R apache:apache /www/WebSiteB/logs
chmod -R 700 /www/WebSiteB/logs
Note: Replace WebSiteB with the relevant name
- Fix the SELinux security context on the logs folder for the first WordPress site:
semanage fcontext -a -t httpd_log_t "/www/WebSiteA/logs(/.*)?"
restorecon -F -R -v /www/WebSiteA/logs
chcon -R -t httpd_log_t /www/WebSiteA/logs
Note: Replace WebSiteA with the relevant name
- Fix the SELinux security context on the logs folder for the second WordPress site:
semanage fcontext -a -t httpd_log_t "/www/WebSiteB/logs(/.*)?"
restorecon -F -R -v /www/WebSiteB/logs
chcon -R -t httpd_log_t /www/WebSiteB/logs
Note: Replace WebSiteB with the relevant name
- Create the following folders:
mkdir /etc/httpd/sites-available
mkdir /etc/httpd/sites-enabled
- Edit using VI the file /etc/httpd/conf/httpd.conf and change the following strings:
From:
LogLevel warn
To:
LogLevel notice
From:
DocumentRoot "/var/www/html"
To:
# DocumentRoot "/var/www/html"
From:
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
To:
# ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
- Comment out the entire sections below inside the /etc/httpd/conf/httpd.conf file (an example of a commented-out section follows the list):
<Directory />
<Directory "/var/www">
<Directory "/var/www/html">
<Directory "/var/www/cgi-bin">
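For reference, each commented-out section should end up looking similar to the example below (shown for the cgi-bin block; the exact directives inside each block may differ slightly on your system):
# <Directory "/var/www/cgi-bin">
#     AllowOverride None
#     Options None
#     Require all granted
# </Directory>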
- Add the following sections to the end of the /etc/httpd/conf/httpd.conf file:
IncludeOptional sites-enabled/*.conf
# Configure custom error message:
ErrorDocument 400 "The requested URL was not found on this server."
ErrorDocument 401 "The requested URL was not found on this server."
ErrorDocument 403 "The requested URL was not found on this server."
ErrorDocument 404 "The requested URL was not found on this server."
ErrorDocument 405 "The requested URL was not found on this server."
ErrorDocument 408 "The requested URL was not found on this server."
ErrorDocument 410 "The requested URL was not found on this server."
ErrorDocument 411 "The requested URL was not found on this server."
ErrorDocument 412 "The requested URL was not found on this server."
ErrorDocument 413 "The requested URL was not found on this server."
ErrorDocument 414 "The requested URL was not found on this server."
ErrorDocument 415 "The requested URL was not found on this server."
ErrorDocument 500 "The requested URL was not found on this server."
# Configure Server Tokens
ServerTokens Prod
# Disable Server Signature
ServerSignature Off
# Disable Tracing
TraceEnable Off
# Maximum size of the request body.
LimitRequestBody 4000000
# Maximum number of request headers in a request.
LimitRequestFields 40
# Maximum size of request header lines.
LimitRequestFieldSize 4000
# Maximum size of the request line.
LimitRequestLine 4000
MaxRequestsPerChild 10000
# Configure clickjacking protection
Header always append X-Frame-Options SAMEORIGIN
- Disable the default configuration files below by renaming them:
mv /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/autoindex.conf.bak
mv /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/userdir.conf.bak
- Comment out the lines below inside the /etc/httpd/conf.modules.d/00-base.conf file to disable default modules:
LoadModule status_module modules/mod_status.so
LoadModule info_module modules/mod_info.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule include_module modules/mod_include.so
LoadModule userdir_module modules/mod_userdir.so
LoadModule env_module modules/mod_env.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule actions_module modules/mod_actions.so
- Comment out the line below inside the /etc/httpd/conf.modules.d/01-cgi.conf file to disable the CGI module (a commented example and a quick check follow):
LoadModule cgi_module modules/mod_cgi.so
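For reference, each disabled line should now start with a # character, for example:
# LoadModule cgi_module modules/mod_cgi.so
Optionally, verify that none of the disabled modules are still loaded (assuming the changes above have been saved):
httpd -M | grep -E 'status_|info_|autoindex_|include_|userdir_|env_|negotiation_|actions_|cgi_'
Note: The command should return no output once all of the listed modules are commented out.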
- Using VI, create configuration file for the first WordPress site called /etc/httpd/sites-available/websitea.com.conf with the following content:
<VirtualHost *:80>
ServerAdmin admin@websitea.com
ServerName www.websitea.com
ServerAlias websitea.com
DocumentRoot /www/WebSiteA
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /www/WebSiteA>
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Require all granted
Order allow,deny
Allow from all
<LimitExcept GET POST>
deny from all
</LimitExcept>
</Directory>
ErrorLog /www/WebSiteA/logs/error.log
CustomLog /www/WebSiteA/logs/access.log combined
</VirtualHost>
Note: Replace WebSiteA with the relevant name
- Using VI, create a configuration file for the second WordPress site called /etc/httpd/sites-available/websiteb.com.conf with the following content:
<VirtualHost *:80>
ServerAdmin admin@websiteb.com
ServerName www.websiteb.com
ServerAlias websiteb.com
DocumentRoot /www/WebSiteB
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /www/WebSiteB>
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Require all granted
Order allow,deny
Allow from all
<LimitExcept GET POST>
deny from all
</LimitExcept>
</Directory>
ErrorLog /www/WebSiteB/logs/error.log
CustomLog /www/WebSiteB/logs/access.log combined
</VirtualHost>
Note: Replace WebSiteB with the relevant name
- Run the commands below to enable the new virtual host files:
ln -s /etc/httpd/sites-available/websitea.com.conf /etc/httpd/sites-enabled/websitea.com.conf
ln -s /etc/httpd/sites-available/websiteb.com.conf /etc/httpd/sites-enabled/websiteb.com.conf
Note 1: Replace WebSiteA with the relevant name
Note 2: Replace WebSiteB with the relevant name
- Run the command below to configure Apache to load at startup:
systemctl enable httpd
- To start the Apache service, run the command below:
systemctl start httpd
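- Optionally, verify the configuration syntax and the service state before moving on:
apachectl configtest
systemctl status httpd
Note: configtest should report "Syntax OK", and the service should be shown as active (running).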
- Run the commands below to enable HTTPD rule on the firewall:
firewall-cmd --zone=public --add-service=http --permanent
systemctl restart firewalld
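- Optionally, confirm that the firewall rule was applied:
firewall-cmd --zone=public --list-services
Note: The output should now include http.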
- MariaDB installation phase
- Login using privileged account
- Install MariaDB:
yum install -y mariadb-server mariadb-client
- Enable the MariaDB service:
systemctl enable mariadb.service
- Start the MariaDB service:
systemctl start mariadb.service
- Run the command below to set ownership and permissions for the /etc/my.cnf file:
chown root /etc/my.cnf
chmod 644 /etc/my.cnf
- Edit using VI the file /etc/my.cnf and add the line below under the [mysqld] section:
bind-address = 127.0.0.1
- Run the command below to secure the MySQL:
mysql_secure_installation
- Specify the MySQL root account password (leave blank) -> Press Y to set the Root password -> specify new complex password (at least 14 characters, upper case, lower case, number, special characters) and document it -> Press Y to remove anonymous users -> Press Y to disallow root login remotely -> Press Y to remove test database -> Press Y to reload privilege tables and exit the script.
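Note: For reference, the interactive script performs roughly the equivalent of the statements below (a sketch only; the exact statements vary between MariaDB versions, and 'NewComplexPassword' is a placeholder for the documented root password):
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('NewComplexPassword');
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost','127.0.0.1','::1');
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;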
- Restart the MariaDB service:
systemctl restart mariadb.service
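- Optionally, confirm that MariaDB is now listening on the loopback address only (ss is part of the default CentOS 7 install):
ss -tlnp | grep 3306
Note: The output should show 127.0.0.1:3306 and no public addresses.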
- PHP 7.3 installation phase
- Login using privileged account
- Run the commands below to install PHP 7.3:
yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm -y
yum-config-manager --enable remi-php73
yum install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo -y
- Change the permissions on the php.ini file:
chmod 640 /etc/php.ini
- Edit using VI the file /etc/php.ini and change the following values:
From:
mysqli.default_host =
To:
mysqli.default_host = 127.0.0.1:3306
From:
allow_url_fopen = On
To:
allow_url_fopen = Off
From:
expose_php = On
To:
expose_php = Off
From:
memory_limit = 128M
To:
memory_limit = 8M
From:
post_max_size = 8M
To:
post_max_size = 2M
From:
upload_max_filesize = 2M
To:
upload_max_filesize = 1M
From:
disable_functions =
To:
disable_functions = fpassthru,crack_check,crack_closedict,crack_getlastmessage,crack_opendict,psockopen,php_ini_scanned_files,shell_exec,chown,dl,ctrl_dir,phpini,tmp,safe_mode,systemroot,server_software,get_current_user,HTTP_HOST,ini_restore,popen,pclose,exec,suExec,passthru,proc_open,proc_nice,proc_terminate,proc_get_status,proc_close,pfsockopen,leak,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellcmd,escapeshellarg,posix_ctermid,posix_getcwd,posix_getegid,posix_geteuid,posix_getgid,posix_getgrgid,posix_getgrnam,posix_getgroups,posix_getlogin,posix_getpgid,posix_getpgrp,posix_getpid,posix_getppid,posix_getpwnam,posix_getpwuid,posix_getrlimit,system,posix_getsid,posix_getuid,posix_isatty,posix_setegid,posix_seteuid,posix_setgid,posix_times,posix_ttyname,posix_uname,posix_access,posix_get_last_error,posix_mknod,posix_strerror,posix_initgroups
- Restart the Apache service:
systemctl restart httpd.service
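- Optionally, confirm that the hardened settings were picked up from /etc/php.ini (on this setup the CLI and Apache read the same file):
php -v
php -i | grep -E 'expose_php|memory_limit|allow_url_fopen'
Note: The values should match the changes made above.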
- WordPress 5.x installation phase
- Login using privileged account.
- Run the command below to log in to MariaDB:
/usr/bin/mysql -uroot -p
Note: When prompted, specify the password for the MariaDB root account.
- Run the following commands from the MariaDB prompt:
CREATE USER 'blgusr'@'localhost' IDENTIFIED BY 'A3fg1j7x!s2gEq';
CREATE USER 'hswjm'@'localhost' IDENTIFIED BY 'hj5fa1fnu@zw0p';
CREATE DATABASE m6gf42s;
CREATE DATABASE b7mf3aq;
GRANT ALL PRIVILEGES ON m6gf42s.* TO "blgusr"@"localhost" IDENTIFIED BY "A3fg1j7x!s2gEq";
GRANT ALL PRIVILEGES ON b7mf3aq.* TO "hswjm"@"localhost" IDENTIFIED BY "hj5fa1fnu@zw0p";
FLUSH PRIVILEGES;
quit
Note 1: Replace “blgusr” with a username to access the first database.
Note 2: Replace “A3fg1j7x!s2gEq” with a complex password for the account that will access the first database (at least 14 characters, upper case, lower case, number, special characters).
Note 3: Replace “hswjm” with a username to access the second database.
Note 4: Replace “hj5fa1fnu@zw0p” with a complex password for the account that will access the second database (at least 14 characters, upper case, lower case, number, special characters).
Note 5: Replace “m6gf42s” with the first WordPress database name.
Note 6: Replace “b7mf3aq” with the second WordPress database name.
- Run the commands below to download the latest build of WordPress:
cd /usr/local/src
wget https://wordpress.org/latest.zip
unzip latest.zip -d /www/WebSiteA
unzip latest.zip -d /www/WebSiteB
Note 1: Replace WebSiteA with the relevant name
Note 2: Replace WebSiteB with the relevant name
- Fix the SELinux security context on the new web folder for the first WordPress site:
semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteA(/.*)?"
restorecon -F -R -v /www/WebSiteA
chcon -R -t httpd_sys_content_t /www/WebSiteA
semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wp-content(/.*)?"
restorecon -F -R -v /www/WebSiteA/wp-content
chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wp-content
Note: Replace WebSiteA with the relevant name
- Fix the SELinux security context on the new web folder for the second WordPress site:
semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteB(/.*)?"
restorecon -F -R -v /www/WebSiteB
chcon -R -t httpd_sys_content_t /www/WebSiteB
semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteB/wp-content(/.*)?"
restorecon -F -R -v /www/WebSiteB/wp-content
chcon -R -t httpd_sys_rw_content_t /www/WebSiteB/wp-content
Note: Replace WebSiteB with the relevant name
- Create using VI the file /www/WebSiteA/config.php with the following content:
<?php
define('DB_NAME', 'm6gf42s');
define('DB_USER', 'blgusr');
define('DB_PASSWORD', 'A3fg1j7x!s2gEq');
define('DB_HOST', 'localhost');
$table_prefix = 'm6gf42s_';
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');
define('FS_METHOD', 'direct');
?>
Note 1: Make sure there are no spaces, newlines, or other characters before an opening ‘<?php’ tag or after a closing ‘?>’ tag.
Note 2: Replace “blgusr” with MariaDB account to access the first database.
Note 3: Replace “A3fg1j7x!s2gEq” with complex password (at least 14 characters).
Note 4: Replace “m6gf42s” with the first WordPress database name.
Note 5: In order to generate random values for the AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY, use the web site below:
http://api.wordpress.org/secret-key/1.1/
- Create using VI the file /www/WebSiteB/config.php with the following content:
<?php
define('DB_NAME', 'b7mf3aq');
define('DB_USER', 'hswjm');
define('DB_PASSWORD', 'hj5fa1fnu@zw0p');
define('DB_HOST', 'localhost');
$table_prefix = 'b7mf3aq_';
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');
define('FS_METHOD', 'direct');
?>
Note 1: Make sure there are no spaces, newlines, or other characters before an opening ‘<?php’ tag or after a closing ‘?>’ tag.
Note 2: Replace “hswjm” with MariaDB account to access the second database.
Note 3: Replace “hj5fa1fnu@zw0p” with complex password (at least 14 characters).
Note 4: Replace “b7mf3aq” with the second WordPress database name.
Note 5: In order to generate random values for the AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY and NONCE_KEY, use the web site below:
http://api.wordpress.org/secret-key/1.1/
- Copy the wp-config.php file:
cp /www/WebSiteA/wordpress/wp-config-sample.php /www/WebSiteA/wordpress/wp-config.php
cp /www/WebSiteB/wordpress/wp-config-sample.php /www/WebSiteB/wordpress/wp-config.php
Note 1: Replace WebSiteA with the relevant name
Note 2: Replace WebSiteB with the relevant name
- Edit using VI the file /www/WebSiteA/wordpress/wp-config.php
Add the following lines before the string “That’s all, stop editing! Happy blogging”:
/* Multisite */
define('WP_ALLOW_MULTISITE', true);
include('/www/WebSiteA/config.php');
Remove or comment the following sections:
define('DB_NAME', 'putyourdbnamehere');
define('DB_USER', 'usernamehere');
define('DB_PASSWORD', 'yourpasswordhere');
define('DB_HOST', 'localhost');
$table_prefix = 'wp_';
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');
Note: Replace WebSiteA with the relevant name
- Edit using VI the file /www/WebSiteB/wordpress/wp-config.php
Add the following lines before the string “That’s all, stop editing! Happy blogging”:
/* Multisite */
define('WP_ALLOW_MULTISITE', true);
include('/www/WebSiteB/config.php');
Remove or comment the following sections:
define('DB_NAME', 'putyourdbnamehere');
define('DB_USER', 'usernamehere');
define('DB_PASSWORD', 'yourpasswordhere');
define('DB_HOST', 'localhost');
$table_prefix = 'wp_';
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');
Note: Replace WebSiteB with the relevant name
- Create using VI the file /www/WebSiteA/wordpress/.htaccess and add the following content:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Header set X-XSS-Protection "1; mode=block"
Header set X-Content-Type-Options nosniff
Header set Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:"
Note: Replace WebSiteA with the relevant name
- Create using VI the file /www/WebSiteA/wordpress/wp-content/.htaccess and add the following content:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Note: Replace WebSiteA with the relevant name
- Create using VI the file /www/WebSiteA/wordpress/wp-includes/.htaccess and add the following content:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Note: Replace WebSiteA with the relevant name
- Set ownership and permissions on the .htaccess files below:
chown apache:apache /www/WebSiteA/wordpress/.htaccess
chown apache:apache /www/WebSiteA/wordpress/wp-content/.htaccess
chown apache:apache /www/WebSiteA/wordpress/wp-includes/.htaccess
chmod 644 /www/WebSiteA/wordpress/.htaccess
chmod 644 /www/WebSiteA/wordpress/wp-content/.htaccess
chmod 644 /www/WebSiteA/wordpress/wp-includes/.htaccess
Note: Replace WebSiteA with the relevant name
- Create using VI the file /www/WebSiteB/wordpress/.htaccess and add the following content:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Header set X-XSS-Protection "1; mode=block"
Header set X-Content-Type-Options nosniff
Header set Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:"
Note: Replace WebSiteB with the relevant name
- Create using VI the file /www/WebSiteB/wordpress/wp-content/.htaccess and add the following content:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Note: Replace WebSiteB with the relevant name
- Create using VI the file /www/WebSiteB/wordpress/wp-includes/.htaccess and add the following content:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Note: Replace WebSiteB with the relevant name
- Set ownership and permissions on the .htaccess files below:
chown apache:apache /www/WebSiteB/wordpress/.htaccess
chown apache:apache /www/WebSiteB/wordpress/wp-content/.htaccess
chown apache:apache /www/WebSiteB/wordpress/wp-includes/.htaccess
chmod 644 /www/WebSiteB/wordpress/.htaccess
chmod 644 /www/WebSiteB/wordpress/wp-content/.htaccess
chmod 644 /www/WebSiteB/wordpress/wp-includes/.htaccess
Note: Replace WebSiteB with the relevant name
- Remove default content from the first WordPress site:
rm -f /www/WebSiteA/wordpress/license.txt
rm -f /www/WebSiteA/wordpress/readme.html
rm -f /www/WebSiteA/wordpress/wp-config-sample.php
rm -f /www/WebSiteA/wordpress/wp-content/plugins/hello.php
- Remove default content from the second WordPress site:
rm -f /www/WebSiteB/wordpress/license.txt
rm -f /www/WebSiteB/wordpress/readme.html
rm -f /www/WebSiteB/wordpress/wp-config-sample.php
rm -f /www/WebSiteB/wordpress/wp-content/plugins/hello.php
- Edit using VI the file /etc/httpd/sites-available/websitea.com.conf
Replace the value of the string, from:
DocumentRoot /www/WebSiteA
To:
DocumentRoot /www/WebSiteA/wordpress
Replace the value of the string, from:
<Directory /www/WebSiteA>
To:
<Directory /www/WebSiteA/wordpress>
Note: Replace WebSiteA with the relevant name
- Edit using VI the file /etc/httpd/sites-available/websiteb.com.conf
Replace the value of the string, from:
DocumentRoot /www/WebSiteB
To:
DocumentRoot /www/WebSiteB/wordpress
Replace the value of the string, from:
<Directory /www/WebSiteB>
To:
<Directory /www/WebSiteB/wordpress>
Note: Replace WebSiteB with the relevant name
- Restart the Apache service:
systemctl restart httpd.service
- Open a web browser from a client machine, and enter the URL below:
http://Server_FQDN/wp-admin/install.php
Note: Replace Server_FQDN with the relevant DNS name
- Select language and click Continue
- Specify the following information:
- Site Title
- Username – replace the default “admin”
- Password
- E-mail
- Click on “Install WordPress” button, and close the web browser.
- Change ownership and permissions on the files and folders below:
chown -R apache:apache /www/WebSiteA/wordpress
find /www/WebSiteA/wordpress/ -type d -exec chmod 755 {} \;
find /www/WebSiteA/wordpress/ -type f -exec chmod 644 {} \;
chmod 400 /www/WebSiteA/wordpress/wp-config.php
chown apache:apache /www/WebSiteA/config.php
chmod 644 /www/WebSiteA/config.php
Note: Replace WebSiteA with the relevant name
- Change ownership and permissions on the files and folders below:
chown -R apache:apache /www/WebSiteB/wordpress
find /www/WebSiteB/wordpress/ -type d -exec chmod 755 {} \;
find /www/WebSiteB/wordpress/ -type f -exec chmod 644 {} \;
chmod 400 /www/WebSiteB/wordpress/wp-config.php
chown apache:apache /www/WebSiteB/config.php
chmod 644 /www/WebSiteB/config.php
Note: Replace WebSiteB with the relevant name
- Download “WordPress Firewall” plugin from:
http://www.seoegghead.com/software/wordpress-firewall.seo
- Copy the “WordPress Firewall” plugin file “wordpress-firewall.php” using PSCP (or SCP) into /www/WebSiteA/wordpress/wp-content/plugins
Note: Replace WebSiteA with the relevant name
- Copy the “WordPress Firewall” plugin file “wordpress-firewall.php” using PSCP (or SCP) into /www/WebSiteB/wordpress/wp-content/plugins
- Open a web browser from a client machine, and enter the URL below:
http://Server_FQDN/wp-login.php
Note: Replace Server_FQDN with the relevant DNS name
- From the WordPress dashboard, click on “Settings” -> make sure that “Anyone can register” is left unchecked -> put a new value inside the “Tagline” field -> click on “Save Changes”.
- From the left pane, click on Plugins -> Add New -> search, install and activate the following plugins:
- Acunetix WP Security
- Antispam Bee
- WP Limit Login Attempts
- Login LockDown
- WP Security Audit Log
- From the list of installed plugins, locate and activate the Firewall plugin
- From the upper pane, click on “Log Out”.
- Delete the file /wp-admin/install.php
- SSL Configuration Phase
- Login using privileged account
- To add support for SSL certificates, run the command below:
yum install mod_ssl -y
- Run the command below to change the permissions on the certificates folder:
chmod 700 /etc/pki/CA/private
- Run the command below to generate a key pair for the first WordPress site:
openssl genrsa -des3 -out /etc/pki/CA/private/websitea-server.key 2048
Note 1: Specify a complex pass phrase for the private key (and document it)
Note 2: Replace websitea with the relevant name
- Run the command below to generate a key pair for the second WordPress site:
openssl genrsa -des3 -out /etc/pki/CA/private/websiteb-server.key 2048
Note 1: Specify a complex pass phrase for the private key (and document it)
Note 2: Replace websiteb with the relevant name
- Run the command below to generate the CSR for the first WordPress site:
openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout /etc/pki/CA/private/websitea-server.key -out /tmp/websitea-apache.csr
Note 1: The command above should be written as one line.
Note 2: Replace websitea with the relevant name
- Run the command below to generate the CSR for the second WordPress site:
openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout /etc/pki/CA/private/websiteb-server.key -out /tmp/websiteb-apache.csr
Note 1: The command above should be written as one line.
Note 2: Replace websiteb with the relevant name
- Edit using VI the file /etc/httpd/sites-available/websitea.com.conf and add the following:
<VirtualHost *:443>
ServerAdmin admin@websitea.com
ServerName www.websitea.com
ServerAlias websitea.com
DocumentRoot /www/WebSiteA/wordpress
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /www/WebSiteA/wordpress>
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Require all granted
Order allow,deny
Allow from all
<LimitExcept GET POST>
deny from all
</LimitExcept>
</Directory>
SSLCertificateFile /etc/ssl/certs/websitea.crt
SSLCertificateKeyFile /etc/pki/CA/private/websitea-server.key
SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS:!aNULL:!EDH:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
SSLHonorCipherOrder On
# Disable SSLv2 and SSLv3
SSLProtocol ALL -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
# Disable SSL Compression
SSLCompression Off
SSLEngine on
ErrorLog /www/WebSiteA/logs/ssl_error.log
CustomLog /www/WebSiteA/logs/ssl_access.log combined
</VirtualHost>
Note: Replace WebSiteA with the relevant name
- Edit using VI the file /etc/httpd/sites-available/websiteb.com.conf and add the following:
<VirtualHost *:443>
ServerAdmin admin@websiteb.com
ServerName www.websiteb.com
ServerAlias websiteb.com
DocumentRoot /www/WebSiteB/wordpress
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /www/WebSiteB/wordpress>
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Require all granted
Order allow,deny
Allow from all
<LimitExcept GET POST>
deny from all
</LimitExcept>
</Directory>
SSLCertificateFile /etc/ssl/certs/websiteb.crt
SSLCertificateKeyFile /etc/pki/CA/private/websiteb-server.key
SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS:!aNULL:!EDH:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
SSLHonorCipherOrder On
# Disable SSLv2 and SSLv3
SSLProtocol ALL -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
# Disable SSL Compression
SSLCompression Off
SSLEngine on
ErrorLog /www/WebSiteB/logs/ssl_error.log
CustomLog /www/WebSiteB/logs/ssl_access.log combined
</VirtualHost>
Note: Replace WebSiteB with the relevant name
- Edit using VI the file /etc/httpd/conf.d/ssl.conf and comment out the following directives:
<VirtualHost _default_:443>
ErrorLog logs/ssl_error_log
TransferLog logs/ssl_access_log
LogLevel warn
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA
SSLCertificateFile
SSLCertificateKeyFile
- To restart the Apache service, run the command below:
systemctl restart httpd
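- Optionally, verify the TLS configuration from the server itself (www.websitea.com is a placeholder; replace it with the relevant site name):
openssl s_client -connect localhost:443 -servername www.websitea.com -tls1_2 < /dev/null
Note: The handshake should complete using TLSv1.2 and present the certificate configured for the site.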
- Run the commands below to enable the HTTPS rule on the firewall:
firewall-cmd --zone=public --add-service=https --permanent
systemctl restart firewalld
- Run the command below to change the permissions on the certificates folder:
chmod 600 /etc/pki/CA/private
- In case the server was configured with an SSL certificate, add the following line to the /www/WebSiteA/config.php file:
define('FORCE_SSL_LOGIN', true);
Note: Replace WebSiteA with the relevant name
- In case the server was configured with an SSL certificate, add the following line to the /www/WebSiteB/config.php file:
define('FORCE_SSL_LOGIN', true);
Note: Replace WebSiteB with the relevant name
- WordPress upgrade process
- Run the commands below to change the SELinux permissions:
semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wordpress(/.*)?"
restorecon -F -R -v /www/WebSiteA/wordpress
chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wordpress
Note: Replace WebSiteA with the relevant name
- Log in to the WordPress admin portal:
http://Server_FQDN/wp-login.php
Note: Replace Server_FQDN with the relevant DNS name
- When prompted, select to upgrade WordPress
- Once the upgrade process completes successfully, log off the WordPress admin portal
- Run the commands below to change the SELinux permissions:
semanage fcontext -a -t httpd_sys_content_t "/www/WebSiteA/wordpress(/.*)?"
restorecon -F -R -v /www/WebSiteA/wordpress
chcon -R -t httpd_sys_content_t /www/WebSiteA/wordpress
semanage fcontext -a -t httpd_sys_rw_content_t "/www/WebSiteA/wordpress/wp-content(/.*)?"
restorecon -F -R -v /www/WebSiteA/wordpress/wp-content
chcon -R -t httpd_sys_rw_content_t /www/WebSiteA/wordpress/wp-content
Note: Replace WebSiteA with the relevant name
- Log off the SSH console
- Check your site on the following test sites:
- https://www.ssllabs.com/ssltest/
- https://dnsflagday.net/
- https://securityheaders.com/
- https://search.google.com/test/mobile-friendly
Cold War Tech: It’s Still Here, And Still Being Used
I’m a Cold War kid. I grew up watching news of Pershing II and SS-20 deployments in Europe, the Soviet war in Afghanistan, with some Terminator and Top Gun VHS action on the side. Yugoslavia was trying to play both sides, and for a while it worked like a charm. It all crashed a couple of years after the Berlin Wall came tumbling down, rendering our unaligned prowess pointless.
I admit this is an odd intro for a tech blog, but bear with me; it will start to make sense. Unlike most Europeans, we had good relations with both blocs. We sold tanks to Kuwait and rocket artillery to Saddam, we bought cheap fuel and MiGs from the Soviets, and in return we exported some stuff they couldn’t get directly from the West. I know people who would stay in East Berlin hotels because they were cheaper, then cross the border into West Berlin to work, play and shop, only to cross back via virtually unused border crossings like Checkpoint Charlie, all in a matter of hours.
On one such trip, my dad got me a Commodore C64, which was pressed into service as our Cold War gaming machine. Most 80s video games, and indeed a lot of music and films, were inspired by countless proxy wars and the threat of a nuclear apocalypse. As the Wall came down, a lot of people assumed that would be the end of runaway defence spending and that the world would be a safer place. It didn’t exactly work out that way, did it?
However, the long-term effect of the Cold War on science and technology is more profound than Nena’s 99 Luftballons, or any Oliver Stone Vietnam flick.
Minuteman: A Cold War Tech Case Study
If you are reading this, you’re already using a technology developed for cold warriors: the Internet. That’s not all. A lot of tech and infrastructure we take for granted was developed, or at least conceived, during these tumultuous decades.
That constellation of GPS satellites orbiting Earth? It wasn’t put up there to geotag selfies or get an Uber ride; it was designed to help the US Strategic Air Command deliver hundreds of megatons worth of instant sunshine on Soviet targets with pinpoint accuracy. Integrated circuits, transistors, solid-state computing? Yep, all developed for the armed forces and paid for by the US taxpayer.
Here is just one example: the sleek and unfathomably deadly LGM-30 Minuteman intercontinental ballistic missile (ICBM). It wasn’t the first ICBM out there, but when it appeared, it was revolutionary. It was a solid fuel missile, which meant it could respond to a threat and launch in a minute without having to be fuelled, hence the name. But solid fuel was only part of the story: Solid-state was a lot more interesting from a geek perspective. Prior to Minuteman, ICBMs relied on analogue computers with mechanical gyros and primitive sensors. Since they were wired to a specific target, the target package could not be changed easily. Minuteman was the first mass implementation of a general purpose digital computer; it integrated an autopilot and missile guidance system in one package, with reliable storage that could take the stress of a silo launch. The computer was also capable of storing multiple targets, and was reprogrammable.
Transistors were nothing new at that point; they were developed years before by Bell Labs. Yes, these primitive transistors were almost exclusively reserved for the military-industrial complex. Uncle Sam was the sole customer for virtually all early computers and chips, burning heaps of money. These early transistors offered a quantum leap over vacuum tubes, but they weren’t perfect. By today’s standards, they were rubbish. The reliability simply wasn’t there, and if you needed to launch a few hundred thermonuclear warheads halfway across the planet, you sort of needed a guidance system that wouldn’t fail as soon as the candle was lit.
So what do you do when you come across a technical problem you can’t solve with money? Simple: You throw more money at it, and that’s exactly what the US Air Force did. They burned millions to make the damn things reliable enough to be used in harsh environments and survive the stress of a high-G ascent to space. This was known as the Minuteman High Reliability (Hi-Rel) programme.
It worked, but the USAF got a bit more than they bargained for. In trying to improve a single weapons system, the USAF ended up giving a huge boost to the tech industry in general. Eventually, the Minuteman was upgraded to include a new microchip-based guidance system, with a primitive form of solid-state storage. This Cold War relic has been in service since the Kennedy administration, and the current incarnation has been around for 45 years, receiving multiple hardware and software updates over the years.
So, in outlining the development and evolution of a single strategic weapon delivery system, I have touched on a number of vital technologies we take for granted: reliable transistors, chips, solid-state storage, mass-produced programmable computers and so on. The Minuteman was also the first mobile digital computer.
Some may argue that the legacy of such weapons is that Mutually Assured Destruction (MAD), guaranteed by the nuclear triad, kept superpowers from going to all-out war. It probably did, but in doing so, it also allowed engineers around the world to develop technologies and concepts applicable in various industries and fields of study.
Their real legacy lies in every integrated circuit on the planet.
Capitalist Pioneers Try To Cash In
What could be more capitalist than monetizing instruments of mass murder? The taxpayers paid for their development, not venture capitalists!
Joking aside, it can be argued that the Red Scare of the fifties created Silicon Valley. Most of the money really did come from taxpayers, and most companies that got lucrative defence contracts were quick to make a buck on dual-use technology developed for the military. Remember Bell Labs? A few of their brightest people went on to co-found Fairchild Semiconductor, and eventually created Intel a decade later. The updated Minuteman guidance computer was based on chips from another semiconductor giant: Texas Instruments.
I am not disputing the brilliance of people like Intel co-founders Robert Noyce and Gordon Moore. I have no doubt they would have made their mark on the tech industry even without the biggest arms race in history, but it’s also hard to dispute that the tech industry wouldn’t have developed at nearly the same pace had there been no government funding. Yes, the taxpayers effectively subsidised the tech industry for decades, but in the long run, they’re probably better off. Westinghouse did not need subsidies to develop washing machines and refrigerators, because consumer demand was strong, but in the early days of computing, there was virtually no consumer demand. That’s why governments had to step in.
But what did the taxpayer get?
The space and arms race spawned a number of technologies that in turn created countless business opportunities. Even primitive computers had a profound impact on industry. They made energy grids and transportation infrastructure more efficient, helped improve safety in industrial facilities, including sensitive chemical and nuclear facilities, they changed the face of banking, communications, entertainment and so on.
Best of all, we somehow managed not to blow ourselves up with the weapons these technologies made possible, yet at the same time, we turned swords into ploughshares. Back in the fifties, the US and USSR launched initiatives designed to examine civilian uses of nuclear power (including civil engineering nuclear explosives schemes, which went terribly wrong), but they amounted to nothing. It wasn’t the might of the atom that changed the world, it was the humble microchip and ancillary technologies developed for countless defence programmes.
Before they made their mark in science and beat Garry Kasparov at the chess table, supercomputers and their analogue predecessors were used to simulate physical processes vital in the development of thermonuclear weapons. An advantage in sheer computing power could yield advances in countless fields. Computer simulations allowed western navies to develop quieter submarines with new screws, digitally optimised to avoid cavitation. Digital signal processors (DSPs) made sonars far more sensitive, and a couple of decades later, advanced DSPs made music sound better. Computer aided design wasn’t just used to reduce the radar cross-section of airplanes, it also made our buildings and cars cheaper, safer and more energy efficient.
Some of these efforts resulted in a technological dead-end, but most did not. One of my favourite tech duds was Blue Peacock, a British nuclear landmine (yes, landmine, not bomb), weighing in at 7.2 tons. Since it relied on early 50s technology and had to be buried in the German countryside, the engineers quickly realised the cold could kill the electronics inside, so they tried to figure out how to keep circuits warm. Their solution was so outlandish that it was mistaken for an April Fool’s Day joke when the design was declassified on April 1, 2004.
A chicken was to be sealed inside the casing, with enough food and water to stay alive for a week. Its body heat would keep the bomb’s electronics operational.
As civilian industries started implementing these cutting edge technologies en masse, our quality of life and productivity shot up exponentially. Our TVs, cars, phones, the clothes we wear, and just about any consumer product we buy: They’re all better thanks to the biggest waste of money in history. Granted, we all have trace amounts of Strontium 90 in our bones, but in the big scheme of things, it’s a small price to pay for the high-tech world we enjoy so much.
Oh yes, we also got video games out of it. Loads and loads of video games.
Kickstarting Game Development
Video games were pioneered on the earliest digital computers (and some analogue ones as well). In fact, Tennis for Two, arguably the first game to use a graphical display, was developed for an analogue computer in 1958. However, not even Bond villains had computers at that point, so the rise of the video game industry had to wait for hardware to mature.
By the mid to late seventies, microchips became cheap enough for mass market applications. Now that we had the hardware, we just needed some software developers and a use-case for cheap chips. Since the average consumer was not interested in expensive and complicated computers that were designed for big business, attention shifted to gaming; arcades, game consoles and inexpensive computers like the ZX and C64.
These humble machines brought programmable computers to millions of households, hooking a generation of kids on digital entertainment, and creating opportunities for game developers. Consoles and cheap computers brought the arcade to the living room, ushering in a new era of video gaming, and creating countless jobs in the industry. Even the Soviets got in on it with Tetris, the first game from behind the iron curtain.
It wasn’t just entertainment. Unlike consoles, the ZX and C64 were proper computers, and geeky kids quickly found new uses for them. They started making demos, they started coding. Chances are you know a lot of these kids, and if you’re reading this, you probably work with some of them.
If you’re interested in the development of early video games, and what the Cold War had to do with them, I suggest you check out Nuclear Fruit, a new documentary that’s a must-see for all geeks and gamers born in the 70s and early 80s.
These guys and gals went on to develop a new breed of video games, build successful online businesses, create new technologies and revolutionise the digital world, all in the space of a decade. A generation that grew up with the constant threat of nuclear war, enjoying dystopian science fiction, helped make the world a better place. They didn’t develop Skynet, they developed millions of mobile and web apps instead.
So, no Terminators. At least, not yet.
Cold War 2.0 And The Emergence Of New Threats
This is not a geopolitical blog, but if you happen to follow the news, you probably know the world is a messed up place. No, the end of the Cold War didn’t bring an era of peace and stability, and there’s already talk of a “Second Cold War,” or worse, a “hot” war. While most of these worries are nothing more than hype and sensationalism, a number of serious threats remain. The threat of nuclear annihilation is all but gone, but the technology we love so much has created a host of potential threats and issues, ranging from privacy and security, to ethical concerns.
Thankfully, we aren’t likely to see an arms race to rival the one we witnessed in the 20th Century, but we don’t have to. The same technology that makes our lives easier and more productive can also be used against us. The digital infrastructure we rely on for work and play is fragile and can be targeted by criminals, foreign governments, non-state actors, and even lone nutjobs with a grudge.
These new threats include, but are not limited to:
- Cybercrime
- State-sponsored cyber warfare
- Misuse of autonomous vehicle technology
- Privacy breaches
- Mass surveillance abuses
- Use of secure communications for criminal/terrorist activities
All pose a serious challenge and the industry is having trouble keeping up. My argument is simple: We no longer have to develop ground-breaking technology to get an edge in geopolitical struggles, but we will continue to develop technologies and methods of tackling new threats and problems. It’s a vicious circle since these new threats are made possible by our reliance on digital communications and the wide availability of various technologies that can be employed by hostile organisations and individuals.
Cybercrime is usually associated with identity theft and credit card fraud, but it’s no longer limited to these fields. The advent of secure communication channels has allowed criminals to expand into new niches. The scene has come a long way since the romanticised exploits of phone phreaks like Steve Wozniak. Some offer hacking for hire, others are willing to host all sorts of illicit content, no questions asked. Some groups specialise in money laundering, darknet drug bazaars, and so on. The biggest threat with this new generation of cybercrime is that you no longer have to possess many skills to get involved. As cybercrime matures, different groups specialise in different activities, and they can be hired.
State-sponsored cyberwarfare poses a serious threat to infrastructure, financial systems, and national security. However, there is really not much an individual can do in the face of these threats, so there’s no point in wasting time on them in this post. Another form of economic warfare could be to deprive a nation or region of Internet access. It has happened before, sometimes by accident, sometimes by government decree and enemy action.
Commercial drones don’t have much in common with their military counterparts. Their range and payload are very limited, and while a military drone can usually loiter over an area for hours on end, the endurance of hobbyist drones is limited to minutes rather than hours. This does not mean they cannot be used for crime; they can still invade someone’s privacy, smuggle drugs across a border, or even carry explosives. Autonomous cars are still in their infancy, so I don’t feel the need to discuss the myriad of questions they will raise.
Privacy remains one of the biggest Internet-related concerns expressed by the average person. This is understandable; we have moved so much of our daily lives to the digital sphere, placing our privacy at risk. People don’t even have to be specifically targeted to have their privacy and personal integrity compromised. Most data that makes its way online is released in the form of massive dumps following a security breach affecting many, if not all, users of a particular online service. People will continue to demand more privacy, and in turn clients will demand more security from software engineers (who aren’t miracle workers and can’t guarantee absolute security and privacy).
Mass surveillance is usually performed by governments and should not represent a threat to the average citizen or business. However, it’s still a potential threat as it can be abused by disgruntled workers, foreign governments, or by way of data breaches. The other problem is the sheer cost to the taxpayer; mass surveillance doesn’t come cheap and we will continue to see more of it.
Most governments wouldn’t bother with mass surveillance and metadata programmes if they weren’t facing very real threats. The same technology developed to keep our communications and online activities private can be abused by all sorts of individuals we wouldn’t like to meet in a dark alley. The list includes multinational crime syndicates, terrorists and insurgents. However, not all of this communication needs to be encrypted and secure. The point of propaganda is to make it widely available to anyone, and the Internet has given every crackpot with a smartphone the biggest megaphone in history, with global reach, free of charge. You can use the Internet to rally a million people around a good cause in a matter of days, but the same principles can be applied to a bad cause. If the target audience is people willing to join a death cult with a penchant for black flags, you don’t need a million people; you just need a few dozen.
The Difference Between Science And Science Fiction
For all their brilliance, the science fiction authors who helped shape popular culture in the 20th Century didn’t see the real future coming. They didn’t exactly envision the Internet, let alone its profound impact on society.
Sorry to burst your bubble, but Terminators and Artificial Intelligence (AI) aren’t a threat yet, and won’t be anytime soon. The real threats are more down to earth, but that does not mean we can afford to ignore them. You don’t need a Terminator to create havoc, all you need is a few lines of really nasty code that can disrupt the infrastructure, causing all sorts of problems. You don’t need a super-intelligent automaton from the future to cause damage. Since eBay doesn’t carry Terminators, it’s a lot easier to use an off-the-shelf drone, programmed to deliver a payload to a specific target: drugs to a trafficker, or an explosive charge to a VIP.
But these aren’t the biggest threats, they’re just potential threats: something for a Hollywood script, not a tech blog.
The real threats are criminal in nature, but they tend to stay in the cyber realm. You don’t have to physically move anything to move dirty money and information online. Law enforcement is already having a hard time keeping up with cybercrime, which seems to be getting worse. While it’s true that the crime rate in developed countries is going down, these statistics don’t paint the full picture. A few weeks ago, the British Office for National Statistics (ONS) reported a twofold increase in the crime rate for England and Wales, totalling more than 11.6 million offences. The traditional crime rate continued to fall, but the statistics included 5.1 million online fraud incidents.
The cost of physical crime is going down, but the cost of cybercrime is starting to catch up. I strongly believe the industry will have to do more to bolster security, and governments will have to invest in online security and crime prevention as well.
Just in case you are into dystopian fiction and don’t find criminal threats exciting, another frightening development would be data monopolisation: A process in which industry giants would command such a competitive lead, made possible by their vast user base, as to render competition pointless, thus stifling innovation.
Yes, I am aware that Terminators would make for a more eventful future and interesting blog post, but we’re not there yet.
Source: Toptal
What To Look Out For In Software Development NDAs
The demand for technical talent, and the ease with which information can be shared, has increased entrepreneurs’ reliance on business relationships with outsiders. It has never been easier for an entrepreneur to find, meet, communicate and eventually enter into some sort of business relationship with an individual or company that is otherwise not associated with the business.
Moreover, sky-high valuations and fairytale overnight success stories have fueled the notion that even a basic idea can be worth millions, if not billions, in a relatively short time. In light of these factors, you might presume that Non-Disclosure Agreements (NDAs) have been widely accepted in the tech world as a means to protect sensitive and potentially valuable information from theft and abuse. Not so fast.
Before jumping into the debate, though, it helps to have a quick understanding of what an NDA is, what one looks like and, eventually, what to look out for if you’re asked to sign one as a freelance software developer.
What Is A Non-Disclosure Agreement?
An NDA is exactly what its name implies — a legal agreement between two or more parties that (i) defines certain confidential information that will be disclosed and (ii) imposes a legal obligation on the receiving party to keep that information confidential. NDAs are most commonly used when a business relationship between two companies or individuals requires the sharing of confidential information.
For example:
Company A, a local retailer, has hired ABC IT Co. to build an online inventory and order management system. To build the system, Company A must provide ABC IT Co. with a list of Company A’s suppliers and certain pricing information. Before disclosing its supplier list and pricing information, Company A asks ABC IT Co. to sign an NDA forbidding ABC IT Co. from disclosing or using Company A’s confidential information.
If a party to an NDA breaches the agreement, by disclosing or using confidential information for example, the other party to the NDA may sue the breaching party for monetary damages (compensation for lost profits or business), injunctive relief (a court order requiring the breaching party to refrain from taking some action) or specific performance (a court order requiring that the breaching party take some specified action).
So What Do Software Development NDAs Look Like?
NDAs are negotiated legal agreements that can be as simple or as complex as the parties desire. An NDA can be a one page fill-in-the-blank form or a lengthy document drafted from scratch to reflect the unique circumstances of the parties’ relationship, the different negotiating leverage of each party, and the nature of the information that will be disclosed.
Although there is no such thing as a one-size-fits-all software NDA, for purposes of this overview, and to understand generally how NDAs work, it’s important to appreciate the three “main-event” provisions that are common to all NDAs.
(a) The Definition of Confidential Information:
The definition of “Confidential Information” will set forth the type of disclosed information that is subject to the limitations on use and disclosure and, importantly, the type of disclosed information that is not subject to such limitations.
(b) The Term of the Recipient’s Obligations
The term of an NDA sets forth the time limit on the parties’ obligations. The term of an NDA may be measured in days, weeks, months or years depending on the circumstances of the relationship and the nature of the disclosed information.
(c) The Limitation on Use and Disclosure:
This provision will describe what a recipient party may do and what a recipient party may not do with disclosed information that falls within the definition of Confidential Information. This provision will almost certainly forbid disclosure of Confidential Information, but may also limit the use of Confidential Information and, in some cases, require that the recipient take certain affirmative steps to protect the confidentiality of Confidential Information.
The NDA Debate: Should You Ask For An NDA? Should You Sign One?
Although NDAs have been around for as long as there has been information worth protecting, the high-tech startup boom has thrust their use into the limelight and sparked a debate as to their value. As an industry that is highly dependent on data and constantly evolving technology, one would think that the high-tech startup world would embrace the use of software development NDAs. To understand why that isn’t the case, and to better gauge whether you should ask for an NDA or sign one presented to you, consider the following:
NDAs Are Often Unilateral
NDAs are unilateral when the business relationship requires that only one party disclose confidential information (rather than a mutual exchange of information by each party).
A startup seeks to hire an engineer to build its mobile app and has asked the engineer to sign an NDA. The startup will disclose information to the engineer, but the relationship does not require the engineer to provide confidential information to the startup. The NDA will be unilateral and will impose legal obligations, and potential liability, on the engineer only.
Because only one party is exchanging confidential information, only one party (the recipient party) has a legal obligation to comply with and, as such, only the recipient party is subject to potential liability. What an entrepreneur might view as a means by which to protect an idea, an NDA recipient might view as a one-sided contract.
Entrepreneurs Often Overstate the Need for an NDA
There are surely circumstances where NDAs make sense. Customer lists, pricing information, proprietary formulas and algorithms might have intrinsic value that is best protected by an NDA. Many argue, however, that some entrepreneurs are NDA trigger happy and think that every idea is worthy of legal protection. Ideas though, it is argued, are rarely new and, moreover, often have no value without execution.
NDAs should be asked for only when there is something worth protecting, and many argue that an idea alone does not warrant asking for an NDA. Finally, those most often asked to sign software NDAs – investors and engineers – rarely have any interest in stealing an idea when doing so would likely ruin any professional goodwill and reputation they’ve earned in their respective professional communities.
NDAs Indicate Mistrust
In a perfect world, business would be business and would never be personal. In reality, though, business is often about perception. What might be "just a contract" to an entrepreneur asking for a software development NDA may be perceived as an indication of mistrust and a questioning of personal integrity by the person being asked to sign one. Because NDAs are most often requested at the outset of a business relationship, signaling mistrust and calling into question one's professional integrity can start the relationship off on the wrong foot, even if that wasn't the intention. Perception is powerful.
This issue is less of a concern for business relationships where both parties will be disclosing confidential information and, thus, an NDA will be bilateral and both parties subject to legal obligation. Outside of strategic joint ventures, partnerships, mergers and similar arrangements, however, bilateral exchanges are rare and unilateral NDAs are much more common.
NDAs Can Limit An Information Recipient’s Ability To Earn A Living
As discussed earlier, an NDA defines a set of information that is to be considered “Confidential Information” and then specifies what a recipient may and may not do with that information during the term of the NDA. Whether an NDA is three pages or three-hundred pages, no contract can predict and plan for every possible circumstance and this limitation often works against the recipient of disclosed information.
What if, after signing an NDA, an engineer is asked to build a similar product or to execute a similar but technically different idea? Is using similar code on a different application a violation of the NDA’s non-use provision? What if the engineer learned new skills during the engagement? Can the engineer use those skills for another client? Can the engineer list the client on his or her resume?
There is a real concern that signing even one NDA, whether as an engineer, an investor or otherwise, can drastically shrink one’s pool of potential business. At worst, signing an NDA might foreclose a person’s ability to work on even slightly related projects. At best, signing an NDA complicates future business development efforts as every new opportunity requires a time consuming analysis of conflicts and liability under each and every NDA that the person may be subject to.
Enforcement Isn’t Cheap
The whole point of entering into an NDA is to have some legal remedy if the recipient party discloses confidential information in violation of the agreement. An NDA gives a disclosing party a basis to file a lawsuit seeking money damages and/or a court order against the breaching party. What many NDA proponents don’t fully appreciate, however, is the cost of enforcement.
Filing a lawsuit can be extremely costly and time consuming. A lawsuit for breach of contract will very likely require hiring a lawyer to gather evidence, assess possible legal claims, file the initial complaint and supporting documents, depose the allegedly breaching party and any witnesses and related parties, and argue the case before a judge. Lawsuits can take years, and lawyers typically charge by the hour. Before asking for an NDA, one should assess whether the information to be protected is more valuable than the potential cost of enforcement.
Though the above factors have contributed to a move away from NDAs in the startup world, NDAs are not without their value. Whether you should ask for an NDA before disclosing information, or agree to sign one if you're on the receiving end of the equation, depends on the particular circumstances of the intended business relationship and each party's motivation to enter into the relationship. The more valuable the relationship is to a party, the less leverage that party has to negotiate for or against the use of an NDA. The less valuable the relationship is to a party, the more leverage that party has to get its way or walk away. This push and pull is at the heart of all negotiations: the party with the better "Best Alternative to a Negotiated Agreement" (BATNA) has the upper hand.
If You Must Have An NDA…
So what if you’re an engineer and the opportunity to work on a particular project outweighs the risk of signing a software development NDA? What if you’re a startup and the intrinsic value of your information justifies the need for an NDA, despite the difficulty of finding an engineer that will sign one? If you have to sign an NDA, or if you must ask for one, what are some of the things to look out for and consider?
As is always the case, I strongly suggest seeking the guidance of a competent and licensed attorney. Contracts can get complex quickly, and legal rights and obligations shouldn't be left to "winging it." As you're finding an attorney, though, you can start by reviewing some of the NDA's main operative provisions. The following are a few preliminary things you might consider when presented with or requesting an NDA:
1. Definition of Confidential Information
Recall that this provision defines the type of disclosed information that is subject to the confidentiality obligations of the NDA and, as such, it should reflect the nature of the business relationship and that of the information to be disclosed.
If you’re a disclosing party, you’ll likely ask for a broad definition of Confidential Information to cover everything that might be disclosed to the receiving party during the course of the relationship. If you’re a receiving party, however, you might resist this request and seek instead to narrow the definition to include only specifically designated information such as, for example, written information that is marked “Confidential.” Regardless of where the negotiations come out, the parties should think carefully about striking the right balance between a definition of Confidential Information that is too broad (and thus extremely restrictive to the recipient party), on the one hand, and, on the other hand, too narrow (thus minimizing the protective effect to the disclosing party).
Though it’s important to determine the information that is to be held in confidence, it is equally important to “carve-out” certain information that is not subject to the confidentiality provisions. Common examples of such carve-outs include information that is or becomes publicly available and information that is lawfully known before entry into the business relationship.
2. Term of Confidentiality
The term of an NDA should reflect the nature of the parties’ business relationship and the nature of the information to be disclosed. If the relationship is limited to a one-year engagement, it might not make sense for the term of the NDA to extend too far after termination of the relationship. Similarly, certain types of information become less valuable or sensitive over time. Financial statements, for example, may be particularly valuable at and immediately after the time they are prepared, but probably don’t accurately reflect a company’s financial health months or years after their preparation. If information is of a type that decreases in value or sensitivity over time, a long term is likely not necessary.
3. Disclosure to Representatives
As discussed throughout this article, NDAs are typically signed by a single disclosing party and a single recipient party. The problem, though, is that a recipient party may not always work alone and, rather, may from time to time need to disclose information protected by an NDA to such recipient party’s colleagues, employees or representatives in order to carry out the terms of the business relationship.
David Developer has signed an NDA with BigCo to create a mobile app for BigCo.
During the project, David needs to enlist the help of his colleague, Peter Programmer, to write some code in a language that David is less familiar with. Peter has not signed an NDA, can David disclose information to Peter so that Peter can assist with the project?
Rather than go through the hassle of signing a new NDA for each new person to whom information needs to be disclosed during the course of a project, or trying to predict ahead of time every person to whom information may need to be disclosed, the parties to an NDA may include a representatives provision addressing permitted disclosures to certain defined persons.
The representatives provision is straightforward from a drafting perspective and is simply a definition of “Representatives” that specifies the persons or classes of persons to whom confidential information may be disclosed. A recipient party will likely want the definition to be broad and inclusive of any person with whom the recipient party may collaborate. The disclosing party, of course, will likely want to keep the definition of Representatives as narrow as possible to permit the project to move forward, on the one hand, while maintaining the protections of the software development NDA, on the other. Finally, the disclosing party will very likely wish to include a clause providing that, prior to any disclosure of confidential information to a Representative, the recipient party inform such Representative of the confidential nature of the information and of the terms of the NDA. A representatives clause may look something like the following:
During the Term of this Agreement, the Recipient Party will not disclose the Confidential Information to any person other than the Representatives, provided that, prior to any such disclosure to a Representative, the Recipient Party informs such Representative of the confidential nature of the information and the terms of this Agreement. “Representatives” shall include the employees, independent contractors, partners, agents and other third parties that are or may be engaged by the Recipient Party for purposes of the Project.
4. Non-Disclosure v. Non-Use
This is a big one. As mentioned earlier, NDAs will almost always include a prohibition on disclosure of Confidential Information. Some software NDAs, however, will also prohibit or limit use of Confidential Information. For example:
The Recipient Party agrees that, during the Term of this Agreement, the Recipient Party will not (i) disclose the Confidential Information to any person other than its Representatives and (ii) will not use the Confidential Information for any purpose other than for those purposes directly related to the Project.
Depending on the term of the NDA and the type of information disclosed, restriction on use may not be an issue. If the term is particularly long, however, or the definition of Confidential Information particularly broad, the “use prohibition” may be extraordinarily restrictive on the recipient party. For example, consider the following definition of Confidential Information:
“Confidential Information” includes (i) all information furnished by the Disclosing Party to the Recipient Party, whether furnished before or after the date of this Agreement, whether oral or written, and regardless of the manner in which it was furnished, and (ii) all analyses, compilations, forecasts, studies, interpretations, documents, code and similar work product prepared by the Recipient Party or its Representatives in connection with the Project.
What this means is that, for as long as the NDA is in effect, the Recipient Party cannot disclose or use any information that the Disclosing Party made available to the Recipient Party or any information prepared in connection with the particular Project. Without any carve-outs or qualifications, these clauses could be incredibly limiting.
An engineer signs an NDA which includes the two provisions set out above. During the course of the Project, the engineer learns a new way of putting together common strings of code. The new method could be considered work product that was prepared in connection with the Project and, as such, the engineer may be prohibited from using the method in future projects during the term of the NDA.
Before hearing Startup A’s pitch, an investor signs an NDA which includes provisions similar to those set out above. During the pitch, Startup A reveals its most recent financial statements and its strategy for growth. The investor does not invest. A few months later, the investor is approached by a similar startup, Startup B, and asked to attend a pitch. The investor may be precluded from investing in Startup B as doing so might involve use of information learned during Startup A’s pitch, even if only remembered by the investor.
The above examples are admittedly extreme, but are used to stress the point that the combination of a broad definition of Confidential Information, an unnecessarily long term, and restrictions on use can be paralyzing. Additionally, these are by no means the only red-flags that can sneak into an NDA and what might be a red-flag for one NDA may be perfectly tolerable for a different business relationship.
So What Do I Do…Specifically?
Though you might now have a better understanding of what an NDA is, what a software development NDA might look like, and why many in the tech world are reluctant to sign them, you might still be wondering what, specifically, you should do when on the receiving end of an NDA. There is no substitute for the advice of a competent attorney, but, with an understanding of the concepts discussed in this article, you can approach the first read of an NDA armed with some knowledge as to what is most important to watch for:
- Is this a bilateral or unilateral NDA? Will both parties be disclosing information? If so, are the parties subject to identical limitations and requirements?
- How broad, or narrow, is the definition of Confidential Information?
- How long are the obligations in effect? Does the term of the NDA match the nature of the business relationship and the information to be disclosed?
- Am I only prohibited from disclosing the Confidential Information, or disclosing and using the Confidential Information?
- Am I permitted to disclose the information to my employees and colleagues who may assist with the project?
- Is this relationship valuable enough to assume a legal obligation that can be enforced in a court?
Finally, the above considerations, and this write-up generally, are not solely for the benefit of those who may be asked to sign an NDA. Certainly, a recipient party should consider very carefully an NDA’s provisions before signing, but a party considering asking for an NDA, too, would be wise to consider these factors.
NDAs, like most contracts, have the most value, and are therefore most likely to be signed, when both parties are comfortable with the balance of risks managed by the NDA and the benefit to be realized by the underlying contractual relationship. By considering the perspective of the recipient party, a party asking for an NDA may be better able to tailor the scope of an NDA to match the business relationship and present to the recipient party a fair and balanced agreement.
Though the information in this write-up should give you a good starting point, there is a lot to consider when asking for or presented with an NDA. A competent attorney can work with both parties to draft an NDA that is protective to the disclosing party, without being overly restrictive to the recipient party, and help move the parties towards a mutually beneficial business relationship.
If you want to learn more about legal issues faced by startups and developers, I suggest you check out Startup Law Hacks as well.
Disclaimer: the contents of this article were written and are made available solely as general information and for educational purposes and not to provide specific legal advice of any kind or to establish an attorney-client relationship. This article should not be used as a substitute for competent legal advice from an attorney licensed in your jurisdiction. This article has been written by Bret Stancil in his individual capacity and the views and opinions expressed herein are his own.
Source: Toptal
9 Essential System Security Interview Questions
1. What is a pentest?
“Pentest” is short for “penetration test”, and involves having a trusted security expert attack a system for the purpose of discovering, and repairing, security vulnerabilities before malicious attackers can exploit them. This is a critical procedure for securing a system, as the alternative method for discovering vulnerabilities is to wait for unknown agents to exploit them. By this time it is, of course, too late to do anything about them.
In order to keep a system secure, it is advisable to conduct a pentest on a regular basis, especially when new technology is added to the stack, or vulnerabilities are exposed in your current stack.
2. What is social engineering?
“Social engineering” refers to the use of humans as an attack vector to compromise a system. It involves fooling or otherwise manipulating human personnel into revealing information or performing actions on the attacker’s behalf. Social engineering is known to be a very effective attack strategy, since even the strongest security system can be compromised by a single poor decision. In some cases, highly secure systems that cannot be penetrated by computer or cryptographic means, can be compromised by simply calling a member of the target organization on the phone and impersonating a colleague or IT professional.
Common social engineering techniques include phishing, clickjacking, and baiting, although several other tricks are at an attacker’s disposal. Baiting with foreign USB drives was famously used to introduce the Stuxnet worm into Iran’s uranium enrichment facilities, damaging the nation’s ability to produce nuclear material.
For more information, a good read is Christopher Hadnagy’s book Social Engineering: The Art of Human Hacking.
3. You find PHP queries overtly in the URL, such as /index.php?page=userID. What would you then be looking to test?
This is an ideal situation for injection and querying. If we know that the server is using an SQL database with a PHP controller, it becomes quite easy. We would be looking to test how the server reacts to multiple different types of requests, and what it throws back, looking for anomalies and errors.
One example could be code injection. If the server is not authenticating and validating each request, one could simply try /index.php?arg=1;system('id') and see if the host returns unintended data.
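As an illustrative sketch only (the target URL and payloads below are placeholders, and probing of this kind should only be done against systems you are authorized to test), one might automate the comparison of responses to different payloads like this:
# Send several payloads and compare status codes and response sizes; anomalies hint at injectable parameters
TARGET="https://example.com/index.php"
for payload in "1" "1'" "1;system('id')" "1 OR 1=1"; do
  echo "=== payload: $payload ==="
  curl -sk --get --data-urlencode "page=$payload" \
       -o /dev/null -w "HTTP %{http_code}, %{size_download} bytes\n" "$TARGET"
done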
4. You find yourself in an airport in the depths of a foreign superpower. You're out of mobile broadband and don't trust the Wi-Fi. What do you do? Further, what are the potential threats from open Wi-Fi networks?
Ideally you want all of your data to pass through an encrypted connection. This would usually entail tunneling via SSH into whatever outside service you need, or using a virtual private network (VPN). Otherwise, you're vulnerable to all manner of attacks, from man-in-the-middle to captive portal exploitation, and so on.
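For instance, a minimal sketch of the SSH tunneling approach (the server name is a placeholder) is to open a SOCKS proxy over SSH and route the browser through it:
# Open a local SOCKS5 proxy on port 1080, tunneled to a server you trust
ssh -D 1080 -C -N -q user@trusted-server.example.com
# Then point the browser (or the OS proxy settings) at localhost:1080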
5. What does it mean for a machine to have an “air gap”? Why are air gapped machines important?
An air gapped machine is simply one that cannot connect to any outside network, from the highest level (the internet) down to an intranet or even Bluetooth.
Air gapped machines are isolated from other computers, and are important for storing sensitive data or carrying out critical tasks that should be immune from outside interference. For example, a nuclear power plant should be operated from computers that are behind a full air gap. For the most part, real world air gapped computers are usually connected to some form of intranet in order to make data transfer and process execution easier. However, every connection increases the risk that outside actors will be able to penetrate the system.
6. You’re tasked with setting up an email encryption system for certain employees of a company. What’s the first thing you should be doing to set them up? How would you distribute the keys?
The first task is to do a full clean and make sure that the employees’ machines aren’t compromised in any way. This would usually involve something along the lines of a selective backup. One would take only the very necessary files from one computer and copy them to a clean replica of the new host. We give the replica an internet connection and watch for any suspicious outgoing or incoming activity. Then one would perform a full secure erase on the employee’s original machine, to delete everything right down to the last data tick, before finally restoring the backed up files.
The keys should then be given out by transferring them over a wired connection through a machine or device with no other connections, importing any necessary .p7s email certificate files into a trusted email client, and then securely deleting any trace of the certificate on the originating computer.
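As a rough sketch of that hand-off (the file name and the 10.0.0.2 address are hypothetical, and the only network link assumed is a direct cable to the employee's machine), the transfer and the clean-up on the originating computer might look like this:
# Copy the certificate over the direct, otherwise unconnected link
scp employee-cert.p7s user@10.0.0.2:/home/user/
# Securely delete the local copy once the import on the target machine is confirmed
shred -u employee-cert.p7s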
The first step, cleaning the computers, may seem long and laborious. Theoretically, if you are 100% certain that the machine is in no way affected by any malicious scripts, then of course there is no need for such a process. However in most cases, you’ll never know this for sure, and if any machine has been backdoored in any kind of way, this will usually mean that setting up secure email will be done in vain.
7. You manage to capture email packets from a sender that are encrypted through Pretty Good Privacy (PGP). What are the most viable options to circumvent this?
First, one should consider whether to even attempt circumventing the encryption directly. Decryption is nearly impossible here unless you already happen to have the private key; without it, your computer would spend multiple lifetimes trying to brute-force a 2048-bit key. It's likely far easier to simply compromise an end node (i.e., the sender or receiver). This could involve phishing, exploiting the sending host to try to uncover the private key, or compromising the receiver to be able to view the emails as plain text.
8. What makes a script fully undetectable (FUD) to antivirus software? How would you go about writing a FUD script?
A script is FUD to an antivirus when it can infect a target machine and operate without being noticed on that machine by that AV. This usually entails a script that is simple, small, and precise.
To know how to write a FUD script, one must understand what the targeted antivirus is actually looking for. If the script contains events such as Hook_Keyboard(), File_Delete(), or File_Copy(), it's very likely it will be picked up by antivirus scanners, so these events are not used. Further, FUD scripts will often mask function names with common names used in the industry, rather than naming them things like fToPwn1337(). A talented attacker might even break up his or her files into smaller chunks, and then hex edit each individual file, thereby making it even more unlikely to be detected.
As antivirus software becomes more and more sophisticated, attackers become more sophisticated in response. Antivirus software such as McAfee is much harder to fool now than it was 10 years ago. However, there are talented hackers everywhere who are more than capable of writing fully undetectable scripts, and who will continue to do so. Virus protection is very much a cat and mouse game.
9. What is a “Man-in-the-Middle” attack?
A man-in-the-middle attack is one in which the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other. One example is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker, who even has the ability to modify the content of each message. Often abbreviated to MITM, MitM, or MITMA, and sometimes referred to as a session hijacking attack, it has a strong chance of success if the attacker can impersonate each party to the satisfaction of the other. MITM attacks pose a serious threat to online security because they give the attacker the ability to capture and manipulate sensitive information in real time while posing as a trusted party during transactions, conversations, and the transfer of data. This is straightforward in many circumstances; for example, an attacker within reception range of an unencrypted Wi-Fi access point can insert himself as a man-in-the-middle.
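One practical way to spot such an attacker on an untrusted network, offered here only as an illustrative sketch (the host name is a placeholder), is to compare the TLS certificate fingerprint you observe with one obtained out of band over a trusted connection:
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256
# If the fingerprint differs from the known-good value, something may be sitting between you and the server.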
This article is from Toptal.
Rethinking Authentication And Biometric Security, The Toptal Way
Toptal is a vast network of tech talent and we currently boast the biggest distributed workforce in the industry. This is a source of pride for many Toptalers, especially our hard-working dev team. Why? Because we make it appear so easy and seamless, and we do it every single day. While a traditional tech company is bound to have a vast infrastructure (loads of office space, servers, standardized equipment, abundant physical and cyber security resources, and so on), we don’t.
We rely on off-the-shelf technology and services. Traditional companies struggle to cope with a small number of BYOD users, but here at Toptal, all our hardware is BYOD. The problem with our platform-agnostic approach and the reliance on a distributed network is self-evident: How can we ensure and maintain security?
It was never easy, but we like a good challenge, and like to stay one step ahead. That’s why we set about designing multiple authentication and onboarding procedures last year. We used the first quarter of 2016 for trials and pilots, and they were encouraging. As a result, we decided to announce the results of our trials and unveil our rollout plans.
By the end of the third quarter, all Toptalers will be acquainted with our new solutions, and if all goes well, they will start using them by the end of the year.
The Challenge
How do we make sure everyone logged onto our network is who they claim they are? Most of our team members have never met in real life, yet they collaborate on a daily basis. What if someone’s security has been compromised? Or, what if a disgruntled member decides to undermine the network?
We settled on a twofold approach to addressing these concerns:
- Including a set of personal reliability tests in our screening process.
- Introducing a new layer of biometric security.
What sort of tests will we institute? Our approach was inspired by the Personnel Reliability Program (PRP), created by the U.S. Department of Defense. The program is designed to identify personnel with the highest degree of reliability, taking into account their prior conduct, trustworthiness, behavior, and allegiance. PRP compliance will be evaluated continuously by our newly formed Internal Security Division (ISD), staffed by military intelligence veterans from Israel and Bosnia.
Platform access will be limited to individuals who meet stringent PRP criteria; however, failure to meet these standards will not be grounds for termination or demotion. It will merely reflect the individual's lack of suitability for certain roles, restricting their access to confidential information.
To ensure continuous compliance, every Toptal member will be required to sign a new non-disclosure agreement and undergo evaluation. The agreement will include provisions covering the treatment of confidential information and outline a set of sanctions for individuals in violation of said agreement.
Since we are a distributed network, we will also rely on input from our members. Our existing monthly TopTeam reports will be expanded to include a personal reliability questionnaire. In other words, each network member will be able to report suspect coworkers or behavior via an anonymous evaluation form.
Lt. Col. David Finci, Head of Toptal’s Internal Security Division, explains the decision to include anonymous ‘tips’:
“Our goal is not to encourage dissent and create friction among team members, but we are convinced this is vital to ensuring personal reliability. We must allow network members to scrutinize the professional performance and personal integrity of their coworkers. Otherwise our ability to source actionable, time-sensitive information would be compromised.”
Network members with full PRP clearance will be issued security tokens and one-time pads to ensure encryption should the integrity of our network be compromised. They will also receive ID cards featuring a scannable QR code and/or barcode.
Use of these security measures will be mandatory, and loss or theft of ID cards will be taken seriously. Fortunately, these cards will be an interim solution and will be phased out as soon as our new security platform is deemed ready. We expect an early 2017 release.
Biometrics: Imperfect Marriage Of Convenience
We started experimenting with quasi-biometric security last year, quite by accident. After one Toptaler decided to tattoo our logo on their arm, we realized this approach could be employed for QR codes. Nobody wants to carry around yet another card in their wallet, and QR codes are relatively small and so they can be easily tattooed, or even engraved on fingernails.
You may be wondering whether or not we are serious, and the answer is obviously no. However, Graham’s tattoo gave us a good idea: Why not use biometric technology, backed by off-the-shelf tracking solutions?
We are already moving towards a passwordless future, and Toptal wants to be on the cutting edge. Why burden people with passwords, silly QR codes, two-factor authentication, or security tokens, if we can ensure superior security without any of them?
There have been attempts at this before, using personal technology such as smartphones and fingerprint scanners, but these techniques aren’t bulletproof. (In the case of smartphone fingerprint scanners, they can be beaten by a simple inkjet printer or knife.)
Besides, using smartphones for authentication opens up a Pandora’s box of other issues.
Bluetooth LE: Rendering Personal Security Bulletproof And Seamless
A lost phone is a recipe for disaster, and with all due respect for all the anti-theft and anti-loss technology out there, much of it doesn’t work well, or requires user input to do its magic. Besides, why rely solely on smartphones when we need to authenticate people on their office hardware?
A lost phone is called a lost phone for a reason, because the user is unaware that it’s lost to begin with. If you wake up and realize you lost your phone last night, it’s too late. That said, if you have a habit of waking up at strange places without your phone, or any recollection of the night before, you should also be on the lookout for kidney theft.
This is where it gets interesting. Security tokens and dongles work, but they're a pain to carry around, and they have a habit of getting lost at the worst possible moment. That is why we planned for our ID cards to be a temporary measure, only active for 9 months or so. We intend to replace them with inexpensive, wearable Bluetooth devices.
Yes, Toptalers will be required to carry them on their person at all times, but this won’t be a problem. Bluetooth LE is a killer technology, at least in terms of power consumption, and these devices can be secured with relative ease, providing a new layer of authentication (we can’t discuss the details due to NDA restrictions).
We initially tried a number of cheap fitness trackers and anti-loss tags to prove our approach was feasible. It worked, but these off-the-shelf devices were not ideally suited to our needs, so we set about designing our own, which proved to be surprisingly easy.
Enter The Toptal TopBand
We reached out to a number of reputable Chinese OEMs for consultation and technical input. We provided them with the specs, they provided us with their quote and a shipping date. Yes, it was that simple, and yes, we were pleasantly surprised.
We are currently in the process of going through several different Toptal TopBand designs and form factors, as well as working on the software side. These devices will not only interface with your phone and computer as wireless security tokens; they will also track your work and sleep habits.
Why? Because they can. They are based on hardware used in fitness trackers, so we didn’t need to reinvent the wheel and design the hardware from scratch. In fact, it would cost more to remove unnecessary features and sensors than to use off-the-shelf solutions.
Here are the specifications of our initial product:
- Bluetooth 4.0 chip manufactured by Dialog
- Accelerometer from ADI
- 50mAh lithium polymer battery by Sony, 40-day battery life
- Vibration assembly, three LED UI, notification speaker
- Dimensions: 8mm x 15mm x 35mm (estimate)
- Weight: 8g (estimate, without strap or clip-on)
We have not finalized the design yet, so the physical dimensions are just estimates. We are still in the process of deciding whether to use aluminium or polycarbonate for the housing, or a combination of both (we want it to look insanely cool). Either way, the device will be IP67 weather resistant, so you don’t even have to take it off when you hit the shower.
This is why we are convinced the device won't be a nuisance. It's tiny, you don't have to charge it every other day, and it can be worn as a standard fitness tracker on the wrist, carried on a keychain, or even tucked into your wallet (as an added bonus, it can be used to alert users if they misplace their wallet or keys).
Of course, you could just pair it to your computer as wireless security device and forget about these features, but where’s the fun in that?
Here is what the TopBand brings to the table, allowing users to:
- Secure their hardware by limiting access to our platform if the TopBand is not paired and in range of the device.
- Locate misplaced phones, or vice versa (use a phone to find the TopBand).
- Receive notifications, via vibration and audio alarms.
- Collect physical activity data, which can be used to prevent burnout and keep track of your work habits (when used as a wearable).
The last point may prove controversial, but might be useful in some circumstances. For example, it will allow your team members to know whether or not you are awake and working, and it’s perfect for time tracking. Naturally, Toptal will not collect or use this data without prior consent. It’s there for your convenience; use it to improve your health and boost productivity.
Toptal Pet Project
While we were tinkering with the prototypes, a few Toptalers decided to create a potential spin-off, a pet project of sorts, and when we say "pet project," we literally mean pet project. A lot of our people are obsessed with their four-legged friends, so they went about using our hardware in ways we did not expect: they turned the TopBand into a pet tracker.
The hardware was ready, so all it took was some tweaked code. We encouraged them to test the device on their pets; the data collected would prove valuable if only to ensure that unethical developers couldn’t cheat the system by mounting the TopBand on their cat and telling everyone they are at home, hard at work.
Pet-specific functionality is still being tested, but the results are encouraging. For the time being, the devices monitor basic activity, check whether or not your pet is asleep, and vibrate if your pet strays out of range. It sounds a bit more humane than those nasty electric shock collars, doesn't it?
Since cats and dogs come in all shapes and sizes, the biggest problem is sensor calibration, which the team is working on. The device was tested on a few cats, including a morbidly obese Italian feline, and dogs ranging from Jack Russells to Akita Inus.
Beyond that, we cannot reveal many details, and here is why: our developers have turned their pet project into a serious endeavour. They approached a few potential investors and secured funding for a limited commercial rollout (also scheduled for 2017), but this is just the first step towards a full pet product line.
Our team is already working on the next generation pet tracker, based on proprietary hardware, with wireless charging and the ability to be used as a subdermal implant.
Sounds Geeky, But Your Pets Will Love It
Subdermal implants have a bad reputation, but most of it is unjustified and peddled by conspiracy cranks. If you ask any pet professional, they will tell you that animals larger than a rat don’t even notice them, and in fact, they tend to be safer and more comfortable than most smart collars. Microchipping is already a widely supported practice globally to minimize stray pet populations; this just takes it one step further.
Until now, subdermal implants were limited to rudimentary RFID functionality and this limited their appeal. This isn’t a swipe at RFID tech; a lot of legit companies are working on RFID implants, and Dangerous Things is one startup that stands out in terms of innovation.
However, Qi wireless charging assemblies are getting smaller and cheaper with each new generation. This, obviously, allows engineers to design feature-packed implants because they can afford to use more battery power for sensors and always-on Bluetooth connectivity.
Unfortunately, we are still not there, and the first prototypes won’t be ready until 2018 at the earliest. Our hardware partners also informed us they won’t be able to conduct animal trials in mainland China, due to the country’s strict and inflexible animal rights legislation.
Therefore, the devices will be tested in Cambodia. We were assured the research would be ethical, so there’s nothing to worry about. Our team is eager to try out the implants on their own pets, and they wouldn’t dream of doing anything that would put their furry bundles of joy at risk.
If It’s Good Enough For My Dog…
This is where we hit a minor snag. Thanks to Toptal’s gung-ho culture, two of our team members volunteered to have the implants tested on them, not just their pets. While it’s still too early for human trials, it goes to show that people might not mind using subdermal implants, provided they can trust the technology. Since these individuals played a pivotal role in the development of our TopBand, they are eager to prove the concept. We are told it sort of gets under your skin after a while.
We need human subjects to test wireless charging and a few other features, as attempts to train cats and dogs to sit in one place for hours are unlikely to work. We settled on an alternative approach for the pilot stage, whereby the animals could still move around and recharge their implants, but this involves strapping a big powerbank and Qi charger mat to the animals. As an interim solution, we plan to make good use of ‘cat condo’ cages and catnip to prove the concept over the course of a few hours.
Human trials are still a long way off, and they require more planning and regulatory oversight. While this approach works for new drugs, we don’t have the time or resources necessary for clinical trials. However, our volunteers agreed to sign a waiver and have the implants installed anyway. Since this could create legal issues in the EU or US, they managed to find a small, Brazilian plastic surgery clinic willing to do the job. The clinic also offered a generous discount on gynecomastia procedures.
Toptal is looking for more volunteers, and there is no doubt in my mind that we will find them. After all, Google managed to find thousands of people eager to pay $1,500 for a useless wearable, only to stop development months later, and they still called it a success! These brave Explorers didn’t even mind being called Glassholes by the interwebs.
As one Toptal volunteer put it:
“I’d rather have an implant the size of an avocado in my groin, than Google Glass on my face!”
Note: No cats were harmed in the making of this post.
This article was written by NERMIN HAJDARBEGOVIC, a Toptal editor.
Developing for the Cloud in the Cloud: BigData Development with Docker in AWS
Why you may need it?
I am a developer, and I work daily in Integrated Development Environments (IDE), such as Intellij IDEA or Eclipse. These IDEs are desktop applications. Since the advent of Google Documents, I have seen more and more people moving their work from desktop versions of Word or Excel to the cloud using an online equivalent of a word processor or a spreadsheet application.
There are obvious reasons for using a cloud to keep your work. Today, some web applications have no significant functional disadvantage compared to traditional desktop business applications. The content is available wherever there is a web browser, and these days, that's almost everywhere. Collaboration and sharing are easier, and losing files is less likely.
Unfortunately, these cloud advantages are not as common in the world of software development as they are for business applications. There are some attempts to provide an online IDE, but they are nowhere close to traditional IDEs.
That is a paradox; while we are still bound to our desktop for daily coding, the software is now spawned across multiple servers. Developers need to work with resources they can no longer keep on their computers. Indeed, laptops are no longer increasing in processing power; having more than 16GB of RAM on a laptop is rare and expensive, and newer devices, tablets for example, have even less.
However, even if it is not yet possible to replace classic desktop applications for software development, it is possible to move your entire development desktop to the cloud. The day I realized it was no longer necessary to have all my software on my laptop, and noticed the availability of web versions of terminals and VNC, I moved everything to the cloud. Eventually, I developed a build kit for creating that environment in an automated way.
In this article I present a set of scripts to build a cloud-based development environment for Scala and big data applications, running with Docker in Amazon AWS, and comprising a web-accessible desktop with the IntelliJ IDE, Spark, Hadoop, and Zeppelin as services, plus command line tools like a web-based SSH, SBT, and Ammonite. The kit is freely available on GitHub, and I describe here the procedure for using it to build your instance. You can build your environment and customize it to your particular needs. It should not take you more than 10 minutes to have it up and running.
What is in the “BigDataDevKit”?
My primary goal in developing the kit was that my development environment should be something I can simply fire up, with all the services and servers I work with, and then destroy them when they are no longer needed. This is especially important when you work on different projects, some of them involving a large number of servers and services, as when you work on big data projects.
My ideal cloud-based environment should:
- Include all the usual development tools, most importantly a graphical IDE.
- Have the servers and services I need at my fingertips.
- Be easy and fast to create from scratch, and expandable to add more services.
- Be entirely accessible using only a web browser.
- Optionally, allow access with specialized clients (VNC client and SSH client).
Leveraging modern cloud infrastructure and software, the power of modern browsers, a widespread availability of broadband, and the invaluable Docker, I created a development environment for Scala and big data development that, for the better, replaced my development laptop.
Currently, I can work at any time, either from a MacBook Pro, a Surface Tablet, or even an iPad (with a keyboard), although admittedly the last option is not ideal. All these devices are merely clients; the desktop and all the servers are in the cloud.
My current environment is built using following online services:
- Amazon Web Services for the servers.
- GitHub for storing the code.
- Dropbox to save files.
I also use a couple of free services, like DuckDNS for dynamic IP addresses and Let's Encrypt to get a free SSL certificate.
In this environment, I currently have:
- A graphical desktop with IntelliJ IDEA, accessible via a web browser.
- Web accessible command line tools like SBT and Ammonite.
- Hadoop for storing files and running MapReduce jobs.
- Spark Job Server for scheduled jobs.
- Zeppelin for a web-based notebook.
Most importantly, the web access is fully encrypted with HTTPS, for both web-based VNC and SSH, and there are multiple safeguards to avoid losing data, a concern that is, of course, important when you do not “own” the content on your physical hard disk. Note that getting a copy of all your work on your computer is automatic and very fast. If you lose everything because someone stole your password, you have a copy on your computer anyway, as long as you configured everything correctly.
Using a Web Based Development Environment with AWS and Docker
Now, let’s start describing how the environment works. When I start work in the morning, the first thing is to log into the Amazon Web Services console where I see all my instances. Usually, I have many development instances configured for different projects, and I keep the unused ones turned off to save billing. After all, I can only work on one project at a time. (Well, sometimes I work on two.)
So, I select the instance I want, start it, and wait a little or go grab a cup of coffee. It's not so different from turning on your computer. It usually takes a few seconds to have the instance up and running. Once I see the green icon, I open a browser and go to a well known URL: https://msciab.duckdns.org/vnc.html. Note, this is my URL; when you create a kit, you will create your own unique URL.
Since AWS assigns a new IP to each machine when you start, I configured a dynamic DNS service, so you can always use the same URL to access your server, even if you stop and restart it. You can even bookmark it in your browser. Furthermore, I use HTTPS, with valid keys to get total protection of my work from sniffers, in case I need to manage passwords and other sensitive data.
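The kit's setup script takes care of this wiring, but conceptually a DuckDNS update is just a single HTTP call of the following form (the domain and token below are placeholders), typically run periodically from the instance so the hostname always points at the current IP:
curl "https://www.duckdns.org/update?domains=yourhost&token=YOUR-DUCKDNS-TOKEN&ip="
# Leaving ip= empty tells DuckDNS to use the address the request came from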
Once loaded, the system will welcome you with a web-based VNC client, noVNC. Simply log in and a desktop appears. I intentionally use a minimal desktop, just a menu with applications; my only luxury is a virtual desktop (since I open a lot of windows when I develop). For mail, I still rely on other applications, nowadays mostly other browser tabs.
In the virtual machine, I have what I need to develop big data applications. First and foremost, there is an IDE. In the build, I use the IntelliJ IDEA Community Edition. Also, there is the SBT build tool and a Scala REPL, Ammonite.
The key features of this environment, however, are services deployed as containers in the same virtual machine. In particular, I have:
- Zeppelin, the web notebook for using Scala code on the fly and doing data analysis (http://zeppelin:8080).
- The Spark Job Server, to execute and deploy Spark jobs with a REST interface (http://sparkjobserver:8080).
- An instance of Hadoop for storing and retrieving data from HDFS (http://hadoop:50070).
Note, these URLs are fixed but are accessible only within the virtual environment. You can see their web interfaces in the following screenshot.
Each service runs in a separate Docker container. Without becoming too technical, you can think of this as three separate servers inside your virtual machine. The beauty of using Docker is you can add services, and even add two or three virtual machines. Using Amazon containers, you can scale your environment easily.
Last, but not least, you have a web terminal available. Simply access your URL with HTTPS and you will be welcomed with a terminal in a web page.
In the screenshot above you can see I list the containers, which are the three servers plus the desktop. This command line shell gives you access to the virtual machine holding the containers, allowing you to manage them. It's as if your servers are "in the Matrix" (virtualized within containers), but this shell gives you an escape outside the "Matrix" to manage the servers and the desktop. From here, you can restart the containers, access their filesystems, and perform other manipulations allowed by Docker. I will not discuss Docker in detail here, but there is a vast amount of documentation on the Docker website.
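For example, from that shell you can manage the services with ordinary Docker commands (the container names below are illustrative; use the names shown by docker ps in your own setup):
docker ps                       # list the running containers: the three servers plus the desktop
docker restart zeppelin         # restart a single service
docker exec -it hadoop bash     # open a shell inside a container to inspect its filesystem
docker logs -f sparkjobserver   # follow a service's log output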
How to set up your instance
Do you like this so far and want your own instance? It is easy and cheap. You can get it for merely the cost of the virtual machine on Amazon Web Services, plus the storage. The kit in the current configuration requires 4GB of RAM to get all the services running. If you are careful to use the virtual machine only when you need it, and you work, say, 160 hours a month, a virtual machine at current rates will cost 160 x $0.052, or about $8 per month. You have to add the cost of storage; I use around 30GB, but altogether everything can be kept under $10 per month.
However, this does not include the cost of an (eventual) Dropbox Pro account, should you want to back up more than 2GB of code. This costs another $15 per month, but it provides important safety for your data. Also, you will need a private repository, either a paid GitHub plan or another service such as Bitbucket, which offers free private repositories.
I want to stress that if you use it only when you need it, it is cheaper than a dedicated server. Yes, everything mentioned here can be setup on a physical server, but since I work with big data I need a lot of other AWS services, so I think it is logical to have everything in the same place.
Let’s see how to do the whole setup.
Prerequisites
Before starting to build a virtual machine, you need to register with the following four services:
- Amazon Web Services
- DuckDNS
- Dropbox
- Let's Encrypt
The only one you need your credit card for is Amazon Web Services. DuckDNS is entirely free, while Dropbox gives you 2GB of free storage, which can be enough for many tasks. Let's Encrypt is also free, and it is used internally when you build the image to sign your certificate. Besides these, I recommend a repository hosting service too, like GitHub or Bitbucket, if you want to store your code; however, it is not required for the setup.
To start, navigate to the GitHub BigDataDevKit repository.
Scroll the page and copy the script shown in the image in your text editor of choice:
This script is needed to bootstrap the image. You have to change it and provide values for the parameters. Carefully change the text within the quotes. Note that you cannot use characters like the quote itself, the backslash, or the dollar sign in the password unless you escape them. This issue is relevant only for the password; if you want to play it safe, avoid quotes, dollar signs, and backslashes altogether.
The PASSWORD parameter is a password you choose to access the virtual machine via the web interface. The EMAIL parameter is your email address, which will be used when you register an SSL certificate; providing it is the only requirement for getting a free SSL certificate from Let's Encrypt.
To get the values for TOKEN and HOST, go to the DuckDNS site and log in. You will need to choose an unused hostname.
Look at the image to see where you have to copy the token and where you have to add your hostname. You must click on the “add domain” button to reserve the hostname.
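To make the parameters concrete, the top of the edited script might look something like the following sketch (the values are placeholders, and the exact layout of the real script in the BigDataDevKit repository may differ):
PASSWORD="choose-a-strong-password"    # web desktop/terminal password; avoid quotes, $ and \
EMAIL="you@example.com"                # used to request the Let's Encrypt certificate
TOKEN="your-duckdns-token"             # copied from the DuckDNS dashboard
HOST="yourhost"                        # the DuckDNS subdomain you reserved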
Assuming you have all the parameters and have edited the script, you are ready to launch your instance. Log in to the Amazon Web Services management interface, go to the EC2 Instances panel and click on “Launch Instance”.
In the first screen, you will choose an image. The script is built around the Amazon Linux, and there are no other options available. Select Amazon Linux, the first option in the QuickStart list.
On the second screen, choose the instance type. Given the size of the software running, there are multiple services and you need at least 4GB of memory, so I recommend you select the t2.medium instance. You could trim it down, using the t2.small if you shut down some services, or even the micro if you only want the desktop.
On the third screen, click “Advanced Details” and paste the script you configured in the previous step. I also recommend you enable protection against termination, so that with an accidental termination you won’t lose all your work.
The next step is to configure the storage. The default for an instance is 8GB, which is not enough to contain all the images we will build. I recommend increasing it to 20GB. Also, while it is not strictly needed, I suggest another block device of at least 10GB. The script will mount the second block device as a data folder. You can make a snapshot of its contents, terminate the instance, then recreate it using the snapshot and recover all the work. Furthermore, a custom block device is not lost when you terminate the instance, so you have double protection against accidental loss of your data. To increase your safety even further, you can back up your data automatically with Dropbox.
The fifth step is naming the instance. Pick your own. The sixth step offers a way to configure the firewall. By default only SSH is available, but we also need HTTPS, so do not forget to add a rule opening HTTPS as well. You could open HTTPS to the world, but it's better to restrict it to your IP to prevent others from accessing your desktop and shell, even though they are still protected with a password.
Once done with this last configuration, you can launch the instance. You will notice that the initialization can take quite a few minutes the first time since the initialization script is running and it will also do some lengthy tasks like generating an HTTPS certificate with Let’s Encrypt.
When you eventually see the management console “running” with a confirmation, and it is no longer “initializing”, you are ready to go.
Assuming all the parameters are correct, you can navigate to https://YOURHOST.duckdns.org. Replace YOURHOST with the hostname you chose, but do not forget it is an HTTPS site, not HTTP; your connection to the server is encrypted, so you must write https:// in the URL. The site will also present a valid certificate from Let's Encrypt. If there are problems getting the certificate, the initialization script will generate a self-signed certificate instead. You will still be able to connect over an encrypted connection, but the browser will warn you that the site is unknown and the connection is untrusted. It should not happen, but you never know.
Assuming everything is working, you then access the web terminal, Butterfly. You can log in using the user app and the password you put in the setup script.
Once logged in, you have a bootstrapped virtual machine, which also includes Docker and other goodies, such as a Nginx Frontend, Git, and the Butterfly Web Terminal. Now, you can complete the setup by building the Docker images for your development environment.
Next, type the following commands:
git clone https://github.com/sciabarra/BigDataDevKit
cd BigDataDevKit
sh build.sh
The last command will also ask you to type a password for the desktop access. Once done, it will start to build the images. Note the build will take about 10 minutes, but you can see what is happening because everything is shown on the screen.
Once the build is complete, you can also install Dropbox with the following command:
/app/.dropbox-dist/dropboxd
The system will show a link you must click to enable Dropbox. You need to log into Dropbox and then you are done. Whatever you put in the Dropbox folder is automatically synced between all your Dropbox instances.
Once done, you can restart the virtual machine and access your environment at the URL https://YOURHOST.duckdns.org/vnc.html.
You can stop your machine and restart it when you resume work. The access URL stays the same. This way, you pay only for the time you are using it, plus a monthly extra for the storage in use.
Preserving your data
The following discussion requires some knowledge of how Docker and Amazon work. If you do not want to understand the details, just keep in mind the following simple rule: in the virtual machine there is an /app/Dropbox folder; whatever you place in /app/Dropbox is preserved, and everything else is disposable and can go away. To improve safety further, also store your precious code in a version control system.
Now, if you do want to understand this, read on. If you followed my directions in the virtual machine creation, the virtual machine is protected from termination, so you cannot destroy it accidentally. If you expressly decide to terminate it, the primary volume will be destroyed. All the Docker images will be lost, including all the changes you made.
However, since the folder /app/Dropbox is mounted as a Docker volume for the containers, it is not part of the Docker images. In the virtual machine, the folder /app is mounted on the Amazon volume you created, which is also not destroyed even when you expressly terminate the virtual machine. To remove the volume, you have to remove it expressly.
Do not confuse Docker volumes, which are a Docker logical entity, with Amazon volumes, which are a (somewhat) physical entity. What happens is that the /app/Dropbox Docker volume is placed inside the /app Amazon volume.
The Amazon Volume is not automatically destroyed when you terminate the virtual machine, so whatever is placed in it will be preserved, until you also expressly destroy the volume. Furthermore, whatever you put in the Docker volume is stored outside of the container, so it is not destroyed when the container is destroyed. If you enabled Dropbox, as recommended, all your content is copied to the Dropbox servers, and to your hard disk if you sync Dropbox with your computer(s). Also, it is recommended that the source code be stored in a version control system.
So, if you keep your code in a version control system under the Dropbox folder, then to lose your data all of the following must happen:
- You expressly terminate your virtual machine.
- You expressly remove the data volume from the virtual machine.
- You expressly remove the data from Dropbox, including the history.
- You expressly remove the data from the version control system.
I hope your data is safe enough.
I keep a virtual machine for each project, and when I finish, I keep the unused virtual machines turned off. Of course, I have all my code on GitHub and backed up in Dropbox. Furthermore, when I stop working on a project, I take a snapshot of the Amazon Web Services block device before removing the virtual machine entirely. This way, whenever a project resumes, for example for maintenance, all I need to do is start a new virtual machine using the snapshot. All my data goes back in place, and I can resume working.
Optimizing access
First, if you have direct internet access, not mediated by a proxy, you can use native SSH and VNC clients. Direct SSH access is important if you need to copy files in and out of the virtual machine. However, for file sharing, you should consider Dropbox as a simpler alternative.
The VNC web access is invaluable, but sometimes, it can be slower than a native client. You have access to the VNC server on the virtual machine using port 5900. You must expressly open it because it is closed by default. I recommend that you only open it to your IP address, because the internet is full of “robots” that scan the internet looking for services to hook into, and VNC is a frequent target of those robots.
Conclusion
This article explains how you can leverage modern cloud technology to implement an effective development environment. While a machine in the cloud cannot be a complete replacement for your working computer or a laptop, it is good enough for doing development work when it is important to have access to the IDE. In my experience, with current internet connections, it is fast enough to work with.
Being in the cloud, server access and manipulation is faster than having them locally. You can quickly increase (or decrease) memory, fire up another environment, create an image, and so on. You have a datacenter at your fingertips, and when you work with big data projects, well, you need robust services and lots of space. That is what the cloud provides.
The original article was written by MICHELE SCIABARRA – FREELANCE SOFTWARE ENGINEER @ TOPTAL and can be read here.
If you’d like to learn more about Toptal designers or hire one, check this out.
Avoid the 10 Most Common Mistakes Web Developers Make: A Tutorial for Developers
The following article is a guest post from Toptal. Toptal is an elite network of freelancers that enables businesses to connect with the top 3% of software engineers and designers in the world.
Since the term the World Wide Web was coined back in 1990, web application development has evolved from serving static HTML pages to completely dynamic, complex business applications.
Today we have thousands of digital and printed resources that provide step-by-step instructions about developing all kinds of different web applications. Development environments are “smart” enough to catch and fix many mistakes that early developers battled with regularly. There are even many different development platforms that easily turn simple static HTML pages into highly interactive applications.
All of these development patterns, practices, and platforms share common ground, and they are all prone to similar web development issues caused by the very nature of web applications.
The purpose of these web development tips is to shed light on some of the common mistakes made in different stages of the web development process and to help you become a better developer. I have touched on a few general topics that are common to virtually all web developers such as validation, security, scalability, and SEO. You should of course not be bound by the specific examples I’ve described in this guide, as they are listed only to give you an idea of the potential problems you might encounter.
Common mistake #1: Incomplete input validation
Validating user input on client and server side is simply a must do! We are all aware of the sage advice “do not trust user input” but, nevertheless, mistakes stemming from validation happen all too often.
One of the most common consequences of this mistake is SQL injection, which appears in the OWASP Top 10 year after year.
Remember that most front-end development frameworks provide out-of-the-box validation rules that are incredibly simple to use. Additionally, most major back-end development platforms use simple annotations to ensure that submitted data adheres to expected rules. Implementing validation might be time consuming, but it should be part of your standard coding practice and never set aside.
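As an illustration only, here is a minimal Node.js sketch (using Express and the node-postgres driver; the route, table, and field names are hypothetical) that validates input on the server side and uses a parameterized query, so the same handler also guards against SQL injection:
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // connection settings come from environment variables
app.use(express.json());

// Hypothetical signup endpoint: validate before touching the database.
app.post('/users', async (req, res) => {
  const { email, age } = req.body || {};

  // Server-side validation: never rely on client-side checks alone.
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    return res.status(400).json({ error: 'invalid email' });
  }
  if (!Number.isInteger(age) || age < 0 || age > 150) {
    return res.status(400).json({ error: 'invalid age' });
  }

  // Parameterized query: user input is never concatenated into the SQL string.
  await pool.query('INSERT INTO users (email, age) VALUES ($1, $2)', [email, age]);
  res.status(201).json({ ok: true });
});

app.listen(3000);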
Common mistake #2: Authentication without proper Authorization
Before we proceed, let’s make sure we are aligned on these two terms. As stated in the 10 Most Common Web Security Vulnerabilities:
Authentication: Verifying that a person is (or at least appears to be) a specific user, since he/she has correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).
Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.
Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do.
Let me demonstrate this issue with an example:
Consider that your browser holds currently logged user information in an object similar to the following:
{
username:'elvis',
role:'singer',
token:'123456789'
}
When doing a password change, your application makes the POST:
POST /changepassword/:username/:newpassword
In your /changepassword method, you verify that the user is logged in and the token has not expired. Then you find the user profile based on the :username parameter, and you change your user’s password.
So, you validated that your user is properly logged in, and then you executed his request, thus changing his password. The process seems OK, right? Unfortunately, the answer is NO!
At this point it is important to verify that the user executing the action and the user whose password is changed are the same. Any information stored in the browser can be tampered with, and any advanced user could easily update username:'elvis' to username:'Administrator' using nothing but built-in browser tools.
So in this case, we only took care of Authentication, making sure that the user provided security credentials. We can even add validation that the /changepassword method can only be executed by Authenticated users. However, this is still not enough to protect your users from malicious attempts.
You need to verify the actual requestor and the content of the request within your /changepassword method, and implement proper Authorization of the request, making sure that a user can change only her own data.
Authentication and Authorization are two sides of the same coin. Never treat them separately.
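To make the fix concrete, here is a minimal sketch of such a handler (Express-style; the token header, request body shape, and the in-memory session and user stores are placeholders I made up for illustration). The key point is that the identity of the requestor comes from the authenticated session and is compared against the target of the change:
const express = require('express');
const app = express();
app.use(express.json());

// Placeholder stores; a real application would use a session store and a user database.
const sessions = new Map([['123456789', { username: 'elvis' }]]);
const users = new Map([['elvis', { password: 'old-secret' }]]);

app.post('/changepassword', (req, res) => {
  // Authentication: who is making the request? Derived from the token, not from the request body.
  const currentUser = sessions.get(req.headers['x-auth-token']);
  if (!currentUser) {
    return res.status(401).json({ error: 'not authenticated' });
  }

  const { username, newPassword } = req.body;

  // Authorization: the requestor may only change his or her own password.
  if (username !== currentUser.username) {
    return res.status(403).json({ error: 'not allowed' });
  }

  users.get(username).password = newPassword;
  res.json({ ok: true });
});

app.listen(3000);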
Common mistake #3: Not ready to scale
In today’s world of high speed development, startup accelerators, and instant global reach of great ideas, having your MVP (Minimum Viable Product) out in the market as soon as possible is a common goal for many companies.
However, this constant time pressure is causing even good web development teams to often overlook certain issues. Scaling is often one of those things teams take for granted. The MVP concept is great, but push it too far, and you’ll have serious problems. Unfortunately, selecting a scalable database and web server and separating all application layers on independent scalable servers is not enough. There are many details you need to think about if you wish to avoid rewriting significant parts of your application later – which becomes a major web development problem.
For example, say that you choose to store uploaded profile pictures of your users directly on a web server. This is a perfectly valid solution–files are quickly accessible to the application, file handling methods are available in every development platform, and you can even serve these images as static content, which means minimum load on your application.
But what happens when your application grows, and you need to use two or more web servers behind a load balancer? Even though you nicely scaled your database storage, session state servers, and web servers, your application scalability fails because of a simple thing like profile images. Thus, you need to implement some kind of file synchronization service (that will have a delay and will cause temporary 404 errors) or another workaround to assure that files are spread across your web servers.
What you needed to do to avoid the problem in the first place was just use a shared file storage location, a database, or another remote storage solution. It would probably have cost a few extra hours of work to implement, but it would have been worth the trouble.
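For instance, here is a rough sketch (using the AWS SDK for Node.js; the bucket name, key layout, and helper name are assumptions) of storing an uploaded profile picture in S3 instead of on the local filesystem, so every web server behind the load balancer sees the same file:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Hypothetical helper: fileBuffer would come from a multipart parser in the upload handler.
async function saveProfilePicture(userId, fileBuffer) {
  await s3.upload({
    Bucket: 'my-app-profile-pictures',   // assumption: a shared bucket, not a local folder
    Key: `profiles/${userId}.jpg`,
    Body: fileBuffer,
    ContentType: 'image/jpeg',
  }).promise();
  // Any web server can now serve or link the image, regardless of which one handled the upload.
  return `https://my-app-profile-pictures.s3.amazonaws.com/profiles/${userId}.jpg`;
}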
Common mistake #4: Wrong or missing SEO
The root cause of incorrect or missing SEO best practices on web sites is misinformed “SEO specialists”. Many web developers believe that they know enough about SEO and that it is not especially complex, but that’s just not true. SEO mastery requires significant time spent researching best practices and the ever-changing rules about how Google, Bing, and Yahoo index the web. Unless you constantly experiment and have accurate tracking and analysis, you are not an SEO specialist, and you should not claim to be one.
Furthermore, SEO is too often postponed as some activity that is done at the end. This comes at a high price of web development issues. SEO is not just related to setting good content, tags, keywords, meta-data, image alt tags, site map, etc. It also includes eliminating duplicate content, having crawlable site architecture, efficient load times, intelligent back linking, etc.
Like with scalability, you should think about SEO from the moment you start building your web application, or you might find that completing your SEO implementation project means rewriting your whole system.
Common mistake #5: Time or processor consuming actions in request handlers
One of the best examples of this mistake is sending email based on a user action. Too often developers think that making an SMTP call and sending a message directly from the user request handler is the solution.
Let’s say you created an online book store, and you expect to start with a few hundred orders daily. As part of your order intake process, you send confirmation emails each time a user posts an order. This will work without problem at first, but what happens when you scale your system, and you suddenly get thousands of requests sending confirmation emails? You either get SMTP connection timeouts, quota exceeded, or your application response time degrades significantly as it is now handling emails instead of users.
Any time or processor consuming action should be handled by an external process while you release your HTTP requests as soon as possible. In this case, you should have an external mailing service that is picking up orders and sending notifications.
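One possible way to do this in Node.js is sketched below, using the Bull queue library backed by Redis (the queue name, routes, and the order/mail helpers are illustrative placeholders). The request handler only enqueues a job and returns; a worker does the slow work separately:
const express = require('express');
const Queue = require('bull');        // Redis-backed job queue; any queue system would do

const app = express();
app.use(express.json());

const emailQueue = new Queue('confirmation-emails');   // name and payload are illustrative

// Request handler: enqueue the slow work and return immediately.
app.post('/orders', async (req, res) => {
  const orderId = Date.now();                               // placeholder for real order creation
  await emailQueue.add({ to: req.body.email, orderId });    // no SMTP call on the request path
  res.status(201).json({ orderId });
});

// Worker (typically a separate process): sends the email outside the request/response cycle.
emailQueue.process(async (job) => {
  // placeholder for the actual SMTP or mail-service call
  console.log(`sending confirmation for order ${job.data.orderId} to ${job.data.to}`);
});

app.listen(3000);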
Common mistake #6: Not optimizing bandwidth usage
Most development and testing takes place in a local network environment. So when you are downloading 5 background images, each 3MB or more, you might not notice an issue over the 1Gbit connection in your development environment. But when your users start loading a 15MB home page over 3G connections on their smartphones, you should prepare yourself for a list of complaints and problems.
Optimizing your bandwidth usage could give you a great performance boost, and to gain this boost you probably only need a couple of tricks. There are a few things that many good web developers do by default (a minimal server-side sketch follows the list), including:
- Minification of all JavaScript
- Minification of all CSS
- Server side HTTP compression
- Optimization of image size and resolution
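As a small illustration of the server-side compression item, here is a sketch using the compression middleware for Express (the static folder path and cache lifetime are assumptions):
const express = require('express');
const compression = require('compression');

const app = express();

// HTTP compression for all responses (gzip/deflate negotiated with the client).
app.use(compression());

// Serve static assets; minified JS/CSS and optimized images would live here.
app.use(express.static('public', { maxAge: '30d' }));

app.listen(3000);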
Common mistake #7: Not developing for different screen sizes
Responsive design has been a big topic in the past few years. Expansion of smartphones with different screen resolutions has brought many new ways of accessing online content, which also comes with a host of web development issues. The number of website visits that come from smartphones and tablets grows every day, and this trend is accelerating.
In order to ensure seamless navigation and access to website content, you must enable users to access it from all types of devices.
There are numerous patterns and practices for building responsive web applications. Each development platform has its own tips and tricks, but there are some frameworks that are platform independent. The most popular is probably Twitter Bootstrap. It is an open-source, free HTML, CSS, and JavaScript framework that has been adopted by every major development platform. Just adhere to Bootstrap patterns and practices when building your application, and you will get a responsive web application with no trouble at all.
Common mistake #8: Cross browser incompatibility
The development process is, in most cases, under heavy time pressure. Every application needs to be released as soon as possible, and even good web developers are often focused on delivering functionality over design. Regardless of the fact that most developers have Chrome, Firefox, and IE installed, they use only one of them 90% of the time. It is common practice to use one browser during development and only start testing in other browsers as the application nears completion. This is perfectly reasonable–assuming you have a lot of time to test and fix issues that show up at this stage.
However, there are some web development tips that can save you significant time when your application reaches the cross-browser testing phase:
- You don’t need to test in all browsers during development; it is time consuming and ineffective. However, that does not mean that you cannot switch browsers frequently. Use a different browser every couple of days, and you will at least recognize major problems early in development phase.
- Be careful of using statistics to justify not supporting a browser. There are many organizations that are slow in adopting new software or upgrading. Thousands of users working there might still need access to your application, and they cannot install the latest free browser due to internal security and business policies.
- Avoid browser specific code. In most cases there is an elegant solution that is cross-browser compatible (see the feature-detection sketch below).
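For example, one common cross-browser pattern is to detect a feature before using it and fall back gracefully, rather than sniffing the user agent. The lazy-loading use case below is just an illustration:
// Feature detection instead of browser detection.
if ('IntersectionObserver' in window) {
  // Modern path: lazy-load images as they scroll into view.
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
} else {
  // Fallback for older browsers: load everything immediately.
  document.querySelectorAll('img[data-src]').forEach((img) => {
    img.src = img.dataset.src;
  });
}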
Common mistake #9: Not planning for portability
Assumption is the mother of all problems! When it comes to portability, this saying is more true than ever. How many times have you seen issues in web development like hard coded file paths, database connection strings, or assumptions that a certain library will be available on the server? Assuming that the production environment will match your local development computer is simply wrong.
Ideal application setup should be maintenance-free:
- Make sure that your application can scale and run on a load-balanced multiple server environment.
- Allow simple and clear configuration–possibly in a single configuration file (see the sketch after this list).
- Handle exceptions when web server configuration is not as expected.
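To illustrate the single-configuration-file point above, one simple approach is to read everything from environment variables in one module, with explicit development defaults, so nothing is hard coded elsewhere (the variable names are assumptions):
// config.js – the only place in the application that knows about the environment.
module.exports = {
  port: parseInt(process.env.PORT || '3000', 10),
  databaseUrl: process.env.DATABASE_URL || 'postgres://localhost/devdb',
  uploadBucket: process.env.UPLOAD_BUCKET || 'local-dev-bucket',
  smtpHost: process.env.SMTP_HOST || 'localhost',
};

// Elsewhere in the application:
// const config = require('./config');
// app.listen(config.port);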
Common mistake #10: RESTful anti patterns
RESTful APIs have taken their place in web development and are here to stay. Almost every web application has implemented some kind of REST service, whether for internal use or for integrating with an external system. But we still see broken RESTful patterns and services that do not adhere to expected practices.
Two of the most common mistakes made when writing a RESTful API are:
- Using wrong HTTP verbs. For example using GET for writing data. HTTP GET has been designed to be idempotent and safe, meaning that no matter how many times you call GET on the same resource, the response should always be the same and no change in application state should occur.
- Not sending correct HTTP status codes. The best example of this mistake is sending error messages with response code 200.
HTTP 200 OK { message:'there was an error' }
You should only send HTTP 200 OK when the request has not generated an error. In the case of an error, you should send 400, 401, 500 or any other status code that is appropriate for the error that has occurred.
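As a small sketch of both points (Express-style, with a hypothetical in-memory book store standing in for a database): use the verb that matches the operation, and map errors to appropriate status codes instead of a 200 with an error body:
const express = require('express');
const app = express();
app.use(express.json());

// Placeholder in-memory store standing in for a real database.
const books = new Map([[1, { id: 1, title: 'Clean Code' }]]);

// Reading data: GET, safe and idempotent.
app.get('/books/:id', (req, res) => {
  const book = books.get(Number(req.params.id));
  if (!book) {
    return res.status(404).json({ error: 'book not found' });  // not a 200 with an error body
  }
  res.status(200).json(book);
});

// Writing data: POST (or PUT/PATCH), never GET.
app.post('/books', (req, res) => {
  if (!req.body || !req.body.title) {
    return res.status(400).json({ error: 'title is required' });
  }
  const id = books.size + 1;
  const created = { id, title: req.body.title };
  books.set(id, created);
  res.status(201).json(created);
});

app.listen(3000);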
A detailed overview of standard HTTP status codes can be found here.
Wrap up
Web development is an extremely broad term that can legitimately encompass development of a website, web service, or complex web application.
The main takeaway of this web development guide is the reminder that you should always be careful about authentication and authorization, plan for scalability, and never hastily assume anything – or be ready to deal with a long list of web development problems!
With a Filter Bypass and Some Hexadecimal, Hacked Credit Card Numbers Are Still, Still Google-able
Preface
If you know me, or have read my previous post, you know that I worked for a very interesting company before joining Toptal. At this company, our payment provider processed transactions in the neighborhood of $500k per day. Part of my job was to make our provider PCI-DSS compliant—that is, compliant with the Payment Card Industry – Data Security Standard.
It’s safe to say that this wasn’t a job for the faint of heart. At this point, I’m pretty intimate with Credit Cards (CCs), Credit Card fraud and web security in general. After all, our job was to protect our users’ data, to prevent it from being hacked, stolen or misused.
You could imagine my surprise when I saw Bennett Haselton’s 2007 article on Slashdot: Why Are CC Numbers Still So Easy to Find?. In short, Haselton was able to find Credit Card numbers through Google, firstly by searching for a card’s first eight digits in “nnnn nnnn” format, and later using some advanced queries built on number ranges. For example, he could use “4060000000000000..4060999999999999” to find all the 16 digit Primary Account Numbers (PANs) from CHASE (whose cards all begin with 4060). By the way: here’s a full list of Issuer ID numbers.
At the time, I didn’t think much of it, as Google immediately began to filter the types of queries that Bennett was using. When you tried to Google a range like that, Google would serve up a page that said something along the lines of “You’re a bad person”.
About six months ago, while reminiscing with an old friend, this credit card number hack came to mind again. Soon after, I discovered something alarming. Not terribly alarming, but certainly alarming—so I notified Google, and waited. After a month without a response, I notified them again to no avail.
With a minor tweak on Haselton’s old trick, I was able to Google Credit Card numbers, Social Security numbers, and any other sensitive information of interest.
Bennett
Yesterday, some friends of mine (buhera.blog.hu and _2501) brought a more recent Slashdot post to my attention: Credit Card Numbers Still Google-able.
The article’s author, again Bennett Haselton, who wrote the original article back in 2007, claims that credit card numbers can still be Googled. You can’t use the number range query hack, but it still can be done. Instead of using simple ranges, you need to apply specific formatting to your query. Something like: “1234 5678” (notice the space in the middle). A lot of hits come up for this query, but very few are of actual interest. Among the contestants are phone numbers, zip-codes, and such. Not extremely alarming. But here comes the credit card hack twist.
The “Methodology”
I was curious whether it was still possible to get credit card numbers online the way we could in 2007. Like any good engineer, I usually approach things with a properly constructed and intelligent plan that needs to be perfectly executed with the utmost precision. If you have tried that method, you might know that it can fail really hard—in which case your careful planning and effort goes to waste.
In IT we have a tendency to over-intellectualize, even when it isn’t exactly warranted. I have seen my friends and colleagues completely break applications using seemingly random inputs. Their success rate was stunning and the effort they put into it was close to zero. That’s when I learned that to open a door, sometimes you just have to knock.
The Credit Card “Hack”
The previous paragraph was a cleverly disguised attempt to make me look like less of an idiot when I show off my “elite hacking skills”. Oops.
First, I tried several range-query-based approaches. Then, I looked at advanced queries and pretty much anything you might come up with in an hour or so. None of them yielded significant results.
And then I had a crazy idea.
What if there was a mismatch between the filtering engine and the actual back-end? What if the message I got from Google (“You are a bad person”) wasn’t from the back-end itself, but instead from a designated filtering engine Google had implemented to censor queries like mine?
It would make a lot of sense from an architectural perspective. And bugs like that are pretty common—we see them in ITSEC all the time, particularly in IDS/IPS solutions, but also in common software. There’s a filtering procedure that processes data and only gives it to the back-end if it thinks the data is acceptable/non-malicious. However, the back-end and the filtering server almost never parse the input in exactly the same way. Thus, a seemingly valid input can go through the filter and wreak havoc on the back-end, effectively bypassing the filter.
You can usually trigger this type of behavior by providing your input in various encodings. For example: instead of using decimal numbers (0-9), how about converting them to hexadecimal or octal or binary? Well, guess what…
Search for this and Google will tell you that you’re a bad person: “4060000000000000..4060999999999999”
Search for this and Google will be happy to oblige: “0xe6c8c69c9c000..0xe6d753e6ecfff”.
The only thing you need to do is to convert credit card numbers from decimal to hexadecimal. That’s it.
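If you are curious how those hexadecimal bounds are produced, the conversion is trivial in any language. For example, in JavaScript (using the same CHASE range as above; both numbers are well within JavaScript's safe integer range, so plain numbers are fine):
// Convert the decimal PAN range into the hexadecimal form the filter does not catch.
const low  = 4060000000000000;
const high = 4060999999999999;

console.log(`0x${low.toString(16)}..0x${high.toString(16)}`);
// -> 0xe6c8c69c9c000..0xe6d753e6ecfff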
The results include…
- Humongous CSV files filled with potentially sensitive information.
- Faulty e-commerce log files.
- Sensitive information shared on hacker sites (and even Facebook).
It’s truly scary stuff.
I know this bug won’t inspire any security research, but there you have it. Google made this boo-boo and neglected to even write me back. Well, it happens. I don’t envy the security folks at the big G, though. They must have a lot of stuff to look out for. I’m posting about this credit card number hack here because:
- It’s relatively low impact.
- Anyone who’s interested and motivated will have figured this out by now.
- To quote Haselton, if the big players aren’t taking responsibility and acting on these exploits, then “the right thing to do is to shine a light on the problem and insist that they fix it as soon as possible”.
This trick can be used to look up phone numbers, SSNs, TFNs, and more. And, as Bennett wrote, these numbers are much, much harder to change than your credit card, which you can simply cancel by calling your bank.
Sample Queries
WARNING: Do NOT Google your own credit card number in full!
Look for any CC PAN starting with 4060: 4060000000000000..4060999999999999 or, in hex: 0xe6c8c69c9c000..0xe6d753e6ecfff
Some Hungarian phone numbers from the provider ‘Telenor’? No problem: 36200000000..36209999999 or, in hex: 0x86db02a00..0x86e48c07f
Look for SSNs. Thankfully, these don’t return many meaningful results: 100000000..999999999 or, in hex: 0x5f5e100..0x3b9ac9ff
There are many, many more.
If you find anything very alarming, or if you’re curious about credit card hacking, please leave it in the comments or contact me by email at [email protected] or on Twitter at @synsecblog. Calling the police is usually futile in these cases, but it might be worth a try. The given merchant or the card provider is usually more keen to address the issue.
Where to Go From Here
Well, Google obviously has to fix this, possibly with the help of the big players like Visa and Mastercard. In fact, Haselton provides a number of interesting suggestions in the two articles linked above.
What you need to do, however (and why I’ve written this post), is spread the word. Credit Card fraud is a big industry, and simple awareness can save you from becoming a victim. Further, if you have an e-commerce site or handle any credit card processing, please make sure that you’re secure. PCI-DSS is a good guideline, but it is far from perfect. Plus, it is always a good idea to Google your site with the “site:mysite.com” advanced query, looking for sensitive numbers. There’s a very, very slim chance that you’ll find anything—but if you do, you must act on it immediately.
Also, a bit of friendly advice: You should never give out your credit card information to anyone. My advice would be to use PayPal or a similar service whenever possible. You can check out these links for further information:
And a few general tips: don’t download things you didn’t ask for, don’t open spam emails, and remember that your bank will never ask for your password.
By the way: If you think there’s no one stupid enough to fall for these credit card hacking techniques or give away their credit card information on the internet, have a look at @NeedADebitCard.
Stay safe people!
This post originally posted on The Toptal Engineering Blog