Tuesday, March 26, 2019

API unit testing

Test an open API service

 php artisan make:test SampleTest 

You can use the above artisan command to generate a test class.
If you want to use repositories in the test file, follow the steps below.

Step 1: Create a setUp() function and add the following, just as you would in a constructor.


public function setUp()
{
    parent::setUp();
    $this->application = $this->app;
    $this->collection = new Collection();
}

Step 2: Import the Collection class.

use Illuminate\Support\Collection;

Step 3: Add the repositories to the setUp() function.


public function setUp()
{
    parent::setUp();
    $this->application = $this->app;
    $this->collection = new Collection();
    $this->user_account = new AccountRepository($this->application, $this->collection);
    $this->role = new RoleRepository($this->application, $this->collection);
}





Tuesday, February 26, 2019

Nodejs: upload a file to an S3 bucket and create a signed URL

// Assumes the aws-sdk package and a multipart parser that exposes req.files.
const AWS = require('aws-sdk');
const fs = require('fs');

AWS.config.update({
    region: 'region',
    accessKeyId: 'adadadada',
    secretAccessKey: 'aadadadadad'
});

const s3 = new AWS.S3();
const token = 'test-upload';

return new Promise((resolve, reject) => {
    fs.readFile(req.files.inputFile.path, (err, fileBuffer) => {
        if (err) {
            reject(err);
            return;
        }
        const params = {
            ACL: 'authenticated-read',
            Bucket: 'dadadadadada',
            ContentType: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
            Key: `test-upload/${token}.xlsx`,
            Body: fileBuffer
        };
        s3.putObject(params, (perr) => {
            if (perr) {
                reject(perr);
            } else {
                // Expires is in seconds, so this URL is valid for 24 minutes.
                const gsuparams = { Bucket: params.Bucket, Key: params.Key, Expires: 60 * 24 };
                s3.getSignedUrl('getObject', gsuparams, (gsuerr, url) => {
                    if (gsuerr) {
                        reject(gsuerr);
                    } else {
                        resolve({
                            url: `${token}.xlsx`,
                            url2: url
                        });
                    }
                });
            }
        });
    });
});
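The params construction above can be pulled out into a small pure helper, which makes the key naming easy to unit test without touching S3. This is only a sketch: buildUploadParams is a hypothetical name, and the ACL and content type are carried over from the snippet above.

```javascript
// Build S3 putObject params for an .xlsx upload under a token-based key.
// buildUploadParams is a hypothetical helper; ACL/ContentType mirror the snippet above.
function buildUploadParams(bucket, token, fileBuffer) {
    return {
        ACL: 'authenticated-read',
        Bucket: bucket,
        ContentType: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
        Key: `test-upload/${token}.xlsx`,
        Body: fileBuffer
    };
}

const params = buildUploadParams('my-bucket', 'test-upload', Buffer.from('demo'));
console.log(params.Key); // test-upload/test-upload.xlsx
```

Keeping the builder pure means the upload callback only has to deal with the putObject/getSignedUrl plumbing.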


Monday, February 25, 2019

Nodejs Queue with AWS SQS

Step 1: Install docker


This DigitalOcean article describes how to install Docker on Ubuntu:
install docker article



Step 2: Run an AWS SQS-compatible Docker image locally

You can run Node.js queues against an SQS-compatible service on your local machine.

https://github.com/vsouza/docker-SQS-local

docker pull vsouza/sqs-local
docker run -d -p 9324:9324 vsouza/sqs-local


The SQS-compatible queue now runs locally on port 9324:
SQS_HOST=http://localhost:9324
Step 3: Install AWS SDK for Nodejs

https://www.npmjs.com/package/aws-sdk


Step 4: Configure the client to use the local SQS

const AWS = require('aws-sdk');

let SQS_HOST = 'http://localhost:9324';

const sqsConfig = {
    endpoint: new AWS.Endpoint('https://sqs.us-west-2.amazo.com/22222/queue.sqs'),
    accessKeyId: 'dadada',
    secretAccessKey: 'aadwfe',
    region: 'us-west-2'
};

// Use the local endpoint when SQS_HOST is set.
if (SQS_HOST) {
    sqsConfig.endpoint = new AWS.Endpoint(SQS_HOST);
}
const sqs = new AWS.SQS(sqsConfig);


const params = {
    QueueName: 'test', /* required */
    // Attributes: {
    //     '<QueueAttributeName>': 'STRING_VALUE',
    //     /* '<QueueAttributeName>': ... */
    // }
};

sqs.createQueue(params, function (err, data) {
    if (err) {
        console.log(err, err.stack); // an error occurred
    } else {
        console.log('Success');
    }
});

You can find the SQS documentation for the aws-sdk node package below.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SQS.html#createQueue-property 
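Once the queue exists, jobs go onto it with sendMessage. Here is a sketch of building the sendMessage parameters for the local queue; buildSendParams is a hypothetical helper, and the /queue/<name> URL shape is an assumption based on the local SQS image's convention, so verify it against your setup.

```javascript
// Build SQS sendMessage params for a queue on the local endpoint.
// The QueueUrl shape is an assumption; in real code prefer the QueueUrl
// returned by createQueue in its callback data.
function buildSendParams(host, queueName, payload) {
    return {
        QueueUrl: `${host}/queue/${queueName}`,
        MessageBody: JSON.stringify(payload)
    };
}

const sendParams = buildSendParams('http://localhost:9324', 'test', { job: 'sendEmail', id: 1 });
console.log(sendParams.QueueUrl); // http://localhost:9324/queue/test
// In real code: sqs.sendMessage(sendParams, (err, data) => { /* ... */ });
```

Serializing the payload with JSON.stringify keeps the message body a plain string, which is what SQS expects.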


Thursday, July 5, 2018

Set up Jenkins

Follow this link to install Jenkins on Ubuntu.
https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-ubuntu-16-04


Step 1: Click on New Item, then select Freestyle project.

Step 2: Go to the project configuration. Tick GitHub project under the General tab and add the Git project URL.

Step 3: Under Source Code Management, tick Git, then add the Git URL and credentials.

Step 4: If you want to build automatically when code is pushed to Git, tick GitHub hook trigger for GITScm polling.

Step 5: Select Execute shell under the Build section.
Here is an example of the Execute shell commands:

ROOT="$PWD"
APIPROJECT=api
WEBPROJECT=web2
SEARCHPROJECT=search

cd "${ROOT}/${APIPROJECT}"
composer install
cat << EOF > .env
APP_NAME=Laravel
APP_ENV=local
APP_KEY=base64:Snp4kGO0Nq4wbcK4B9tzUgcewMebwJ9he+MLs5a4NtU=
APP_DEBUG=false
APP_LOG_LEVEL=debug
APP_URL=

DB_CONNECTION=mysql
DB_HOST=
DB_PORT=3306
DB_DATABASE=
DB_USERNAME=
DB_PASSWORD=

BROADCAST_DRIVER=log
CACHE_DRIVER=file
SESSION_DRIVER=file
SESSION_LIFETIME=120
QUEUE_DRIVER=sqs

SQS_KEY=
SQS_SECRET=
SQS_PREFIX=
SQS_QUEUE=
SQS_REGION=us-west-2

REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379

MAIL_DRIVER=smtp
MAIL_HOST=email-smtp.us-west-2.amazonaws.com
MAIL_PORT=587
MAIL_USERNAME=
MAIL_PASSWORD=
MAIL_ENCRYPTION=tls

PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1

# S3 config

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=
AWS_BUCKET=

REQUEST_RATE=100000

MAIL_CHIMP_REGION=us17
MAIL_CHIMP_LIST_ID=
MAIL_CHIMP_USER_NAME=
MAIL_CHIMP_API_KEY=

SUPPORT_EMAIL=
ADMIN_EMAIL=
FROM_EMAIL=

FRONT_END_URL=
BACK_OFFICE_URL=

ELASTIC_SEARCH_DOMAIN=
ELASTIC_SEARCH_INDEX=

CA_USERID=
CA_PASSWORD=
CA_BASEURL=

SERVER_ENVIRONMENT=

EOF
php artisan key:generate
php artisan migrate
php artisan passport:keys
php vendor/bin/phpunit

cd "${ROOT}/${WEBPROJECT}"
yarn
cat << EOF > .env
REACT_APP_APP_HOST_ROOT=
REACT_APP_APP_HOST=
REACT_APP_AUTH_CLIENT_ID=
REACT_APP_AUTH_CLIENT_SECRET=
REACT_APP_AUTH_GRANT_TYPE=
REACT_APP_SEARCH_API=
EOF
yarn build

yarn build:admin

cd "${ROOT}/${SEARCHPROJECT}"
yarn --production

cat << EOF > .env
ELASTIC_SEARCH_URLS="http://example.com:9200,http://example.org:9200"
PHP_APP_URLS="http://localhost:80"
EOF
# TODO: sync search api, make ecosystem.json, and pm2 startOrReload

sh /data/scripts/sync -env dev -type web -appdir ${WEBPROJECT}/build
sh /data/scripts/sync -env dev -type app -appdir api
sh /data/scripts/sync -env dev -type office -appdir ${WEBPROJECT}/build-admin

Another example, calling a local script:
sh /home/prabhath/Desktop/ichem/sync


Step 6: Create a script to copy the build to the www folder.

/usr/bin/rsync -v -a --no-owner --no-group --delete-during $BUILDDIR root@${HOST}:/var/www/html/
/usr/bin/ssh root@${HOST} /bin/chown -R apache:apache /var/www/html


sync script example:

echo "This is a shell script"
cp -r /var/lib/jenkins/workspace/test-project2/api/* /home/prabhath/Desktop/ichem/test_sync
echo "Done copying the build"
SOMEVAR='text stuff'
echo "$SOMEVAR"


Tuesday, May 8, 2018

Laravel with MySQL JSON format - hints

Hint 1: 

You can define in the model which table columns hold array or JSON data. Then you don't need to json_encode and json_decode arrays when saving data to, and retrieving data from, a JSON database field.


protected $casts = [
    'attributes' => 'array'
];

When saving, updating, or inserting, you can assign the array directly.


$record->attributes = $request['attributes'];

Hint 2: 

Select an inner object from the JSON field


select('attributes->public_details as attributes')

attributes is the JSON table column and public_details is the inner object.

Hint 3: 

You can match a single attribute inside the JSON field.


where('attributes->phone', '386.547.4968')

attributes is the JSON column and phone is a key inside it.

Hint 4: 

There are lots of JSON helper functions in MySQL. In Laravel you can use them in raw queries.


whereRaw("JSON_CONTAINS(attributes, '{$attributes}')")

$attributes is a JSON-encoded string.

public function findJsonEquals($attributes)
{
    // Alternative: $this->model->where('attributes->phone', '386.547.4968')->first();
    $attributes = json_encode($attributes);
    $data = $this->model->whereRaw("JSON_CONTAINS(attributes, '{$attributes}')")
        ->first();
    return $data;
}



Wednesday, April 25, 2018

unit testing - use a database or sqlite

Use a new database for unit testing

Add a new testing connection to config/database.php:

'testing' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('TEST_DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'unix_socket' => env('DB_SOCKET', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'strict' => true,
    'engine' => null,
],


Then open the phpunit.xml file in the project root.

After that, add <env name="DB_CONNECTION" value="testing"/> to the <php> section.

Alternatively, you can use SQLite.

Run tests on a single file

php vendor/bin/phpunit tests/Feature/BrokerRecordTest

Customize Laravel Monolog - add user IP and UUID to logs

Step 1: Add a custom function to AppServiceProvider

private function addUniqueIdAndIpToMonoLog()
{
    $ip = \Request::ip();
    $uuid = \Uuid::generate();
    $monolog = \Log::getMonolog();
    $monolog->pushProcessor(function ($record) use ($uuid, $ip) {
        $record['extra']['request-id'] = $uuid;
        $record['extra']['ip'] = $ip;
        return $record;
    });
}

Step 2: Call it from boot() in AppServiceProvider.

public function boot()
{
    $this->addUniqueIdAndIpToMonoLog();
}

Step 3: Edit bootstrap/app.php

$app->configureMonologUsing(function ($monolog) use ($app) {
    $monolog->pushHandler(
        (new Monolog\Handler\RotatingFileHandler(
            $app->storagePath() . '/logs/sfr-access-log.log'
            // $app->make('config')->get('app.log_max_files', 1825)
        ))->setFormatter(new Monolog\Formatter\LineFormatter(null, null, true, true))
    );
});

return $app;

This adds the user IP and a per-request UUID to every log entry.
